Sample records for tomographic inverse problems

  1. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
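
    For illustration only, the sketch below shows the kind of regularized least-squares inversion described above: a penalty on deviations from a prior or background model, with the data-fit versus model-norm tradeoff traced out over several regularization weights. The operator G, data d, prior m0, and regularization operator L are synthetic placeholders, not the steady-shape pumping-test kernels or GPR-based terms of the chapter.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic linear forward problem d = G m + noise (placeholder for the
    # linearized steady-shape pumping-test kernels discussed in the abstract).
    n_data, n_param = 40, 60
    G = rng.normal(size=(n_data, n_param))
    m_true = rng.normal(size=n_param)
    d = G @ m_true + 0.05 * rng.normal(size=n_data)

    m0 = np.zeros(n_param)          # prior / background model
    L = np.eye(n_param)             # regularization operator (identity here)

    def regularized_inverse(G, d, m0, L, lam):
        """Minimize ||G m - d||^2 + lam^2 ||L (m - m0)||^2 via the normal equations."""
        A = G.T @ G + lam**2 * (L.T @ L)
        b = G.T @ d + lam**2 * (L.T @ L) @ m0
        return np.linalg.solve(A, b)

    # Trade-off curve: data misfit versus model norm for a range of weights.
    for lam in [0.01, 0.1, 1.0, 10.0]:
        m = regularized_inverse(G, d, m0, L, lam)
        misfit = np.linalg.norm(G @ m - d)
        norm = np.linalg.norm(L @ (m - m0))
        print(f"lambda={lam:5.2f}  data misfit={misfit:7.3f}  model norm={norm:7.3f}")
    ```

    Sweeping lam and plotting misfit against model norm yields the usual trade-off (L-curve) used to choose the level of weighting on the regularization term.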

  2. The role of simulated small-scale ocean variability in inverse computations for ocean acoustic tomography.

    PubMed

    Dushaw, Brian D; Sagen, Hanne

    2017-12-01

    Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than being a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that match the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates for ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.

  3. Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor

    DTIC Science & Technology

    2010-01-31

    propagation in three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to ... self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to ... determine the moment tensors within minutes after a seismic event, making it possible for real time monitoring using 3D models.

  4. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
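
    As a rough illustration of how a singular value decomposition supports stable inversion of an ill-conditioned operator, the sketch below applies a truncated-SVD pseudoinverse to a generic matrix with a rapidly decaying singular spectrum; it does not implement the truncated Hilbert transform operators or the analytic continuation step analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Generic ill-conditioned operator with a known, rapidly decaying spectrum,
    # standing in for a discretized truncated-Hilbert-type operator.
    n = 80
    U, _ = np.linalg.qr(rng.normal(size=(n, n)))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))
    s_true = 10.0 ** (-0.25 * np.arange(n))
    A = U @ np.diag(s_true) @ V.T

    f_true = np.sin(2 * np.pi * np.linspace(0, 1, n))
    g = A @ f_true + 1e-6 * rng.normal(size=n)       # slightly noisy data

    Us, s, Vt = np.linalg.svd(A, full_matrices=False)

    def tsvd_solve(Us, s, Vt, g, k):
        """Truncated-SVD solution keeping only the k largest singular values."""
        return Vt[:k].T @ ((Us[:, :k].T @ g) / s[:k])

    # Too few components under-resolves f; too many amplifies the noise.
    for k in (10, 20, 40):
        f_k = tsvd_solve(Us, s, Vt, g, k)
        rel = np.linalg.norm(f_k - f_true) / np.linalg.norm(f_true)
        print(f"k={k:2d}  relative error={rel:.3e}")
    ```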

  5. Active and Passive Hydrologic Tomographic Surveys:A Revolution in Hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    Yeh, T. J.

    2013-12-01

    Mathematical forward or inverse problems of flow through geological media always have unique solutions if the necessary conditions are given. Unique mathematical solutions to forward or inverse modeling of field problems are, however, always uncertain (an infinite number of possibilities) for many reasons. These include non-representativeness of the governing equations, inaccurate necessary conditions, multi-scale heterogeneity, scale discrepancies between observation and model, noise, and others. Conditional stochastic approaches, which derive the unbiased solution and quantify the solution uncertainty, are therefore most appropriate for forward and inverse modeling of hydrological processes. Conditioning using non-redundant data sets reduces uncertainty. In this presentation, we explain non-redundant data sets in cross-hole aquifer tests, and demonstrate that an active hydraulic tomographic survey (using man-made excitations) is a cost-effective approach to collecting non-redundant data sets of the same type for reducing uncertainty in the inverse modeling. We subsequently show that including flux measurements (a non-redundant data set) collected in the same well setup as in hydraulic tomography improves the estimated hydraulic conductivity field. We finally conclude with examples and propositions regarding how to collect and analyze data intelligently by exploiting natural recurrent events (river stage fluctuations, earthquakes, lightning, etc.) as energy sources for basin-scale passive tomographic surveys. The development of information fusion technologies that integrate traditional point measurements and active/passive hydrogeophysical tomographic surveys, as well as advances in sensor, computing, and information technologies, may ultimately advance our capability of characterizing groundwater basins to a resolution far beyond the reach of current science and technology.

  6. The analysis of a rocket tomography measurement of the N2+3914A emission and N2 ionization rates in an auroral arc

    NASA Technical Reports Server (NTRS)

    Mcdade, Ian C.

    1991-01-01

    Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+3914A rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914A volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three pre-prints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
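
    A minimal sketch of an algebraic reconstruction technique (ART) with an iterative relaxation factor, of the general kind referred to above, is given below. The random system, noise level, and non-negativity projection are illustrative placeholders rather than the ARIES photometer geometry, and the least-squares and maximum-probability variants of the paper are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy system A x = b standing in for the line-of-sight integrals of a
    # spin-scan photometer; the geometry here is random, not the flight geometry.
    n_rays, n_cells = 120, 64
    A = rng.random(size=(n_rays, n_cells))
    x_true = np.abs(rng.normal(size=n_cells))
    b = A @ x_true + 0.01 * rng.normal(size=n_rays)

    def art(A, b, n_sweeps=50, relaxation=0.25, nonneg=True):
        """Algebraic reconstruction technique (Kaczmarz) with a relaxation factor."""
        x = np.zeros(A.shape[1])
        row_norm2 = np.einsum("ij,ij->i", A, A)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                residual = b[i] - A[i] @ x
                x += relaxation * residual / row_norm2[i] * A[i]
                if nonneg:                  # emission rates cannot be negative
                    np.maximum(x, 0.0, out=x)
        return x

    x_rec = art(A, b)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```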

  7. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.

  8. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
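
    The diagnostic itself is straightforward to express in code: compute the singular value spectrum of the weight matrix and count the values above a threshold relative to the largest one. The weight matrices below are random placeholders (real ones would be built from the ray and basis-function geometry), and the "views x rays" shapes and threshold are purely illustrative.

    ```python
    import numpy as np

    def significant_singular_values(W, rel_tol=1e-3):
        """Count singular values of the weight matrix W above rel_tol * s_max,
        a quantitative measure of the conditioning of the tomographic system."""
        s = np.linalg.svd(W, compute_uv=False)
        return int(np.sum(s > rel_tol * s[0])), s

    rng = np.random.default_rng(3)
    one_view = rng.random(size=(32, 32 * 32))          # 32 rays of a single view (placeholder)
    W_redundant = np.tile(one_view, (12, 1))           # 12 identical views: no new information
    W_distinct = rng.random(size=(12 * 32, 32 * 32))   # 12 distinct views (placeholder)

    for name, W in [("12 identical views", W_redundant), ("12 distinct views", W_distinct)]:
        k, _ = significant_singular_values(W)
        print(f"{name}: {k} significant singular values out of {W.shape[0]} rows")
    ```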

  9. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be given indicatively via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be directly determined via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The entire procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion scheme used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.

  10. Tomographic Neutron Imaging using SIRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregor, Jens; FINNEY, Charles E A; Toops, Todd J

    2013-01-01

    Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while materials such as metal are nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone beam x-ray CT and other inverse problems.
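
    The sketch below shows a SIRT iteration with a simple Tikhonov damping term folded into the update. It is a generic illustration under assumed inverse row-sum/column-sum weighting, not the PSIRT variant or the neutron data used in the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy system standing in for the neutron CT projection matrix (not real data).
    n_rays, n_vox = 200, 100
    A = rng.random(size=(n_rays, n_vox))
    x_true = np.abs(rng.normal(size=n_vox))
    b = A @ x_true

    def sirt_tikhonov(A, b, n_iter=500, relax=1.0, alpha=0.0):
        """SIRT iteration with an optional Tikhonov damping term alpha.

        R and C are the usual inverse row-sum / column-sum weights; the damping
        is folded into the update as a simple gradient penalty (a sketch, not
        necessarily the PSIRT modification described in the abstract)."""
        R = 1.0 / A.sum(axis=1)          # inverse row sums
        C = 1.0 / A.sum(axis=0)          # inverse column sums
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (R * (b - A @ x)) - alpha * x
            x = np.maximum(x + relax * C * grad, 0.0)
        return x

    for alpha in (0.0, 0.1):
        x_rec = sirt_tikhonov(A, b, alpha=alpha)
        err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
        print(f"alpha={alpha}: relative error {err:.3f}")
    ```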

  11. Tomographic phase microscopy: principles and applications in bioimaging [Invited]

    PubMed Central

    Jin, Di; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Tomographic phase microscopy (TPM) is an emerging optical microscopic technique for bioimaging. TPM uses digital holographic measurements of complex scattered fields to reconstruct three-dimensional refractive index (RI) maps of cells with diffraction-limited resolution by solving inverse scattering problems. In this paper, we review the developments of TPM from the fundamental physics to its applications in bioimaging. We first provide a comprehensive description of the tomographic reconstruction physical models used in TPM. The RI map reconstruction algorithms and various regularization methods are discussed. Selected TPM applications for cellular imaging, particularly in hematology, are reviewed. Finally, we examine the limitations of current TPM systems, propose future solutions, and envision promising directions in biomedical research. PMID:29386746

  12. Downscaling Smooth Tomographic Models: Separating Intrinsic and Apparent Anisotropy

    NASA Astrophysics Data System (ADS)

    Bodin, Thomas; Capdeville, Yann; Romanowicz, Barbara

    2016-04-01

    In recent years, a number of tomographic models based on full waveform inversion have been published. Due to computational constraints, the fitted waveforms are low pass filtered, which results in an inability to map features smaller than half the shortest wavelength. However, these tomographic images are not a simple spatial average of the true model, but rather an effective, apparent, or equivalent model that provides a similar 'long-wave' data fit. For example, it can be shown that a series of horizontal isotropic layers will be seen by a 'long wave' as a smooth anisotropic medium. In this way, the observed anisotropy in tomographic models is a combination of intrinsic anisotropy produced by lattice-preferred orientation (LPO) of minerals, and apparent anisotropy resulting from the inability to map discontinuities. Interpretations of observed anisotropy (e.g. in terms of mantle flow) therefore require the separation of its intrinsic and apparent components. The "up-scaling" relations that link elastic properties of a rapidly varying medium to elastic properties of the effective medium as seen by long waves are strongly non-linear and their inverse highly non-unique. That is, a smooth homogenized effective model is equivalent to a large number of models with discontinuities. In the 1D case, Capdeville et al (GJI, 2013) recently showed that a tomographic model which results from the inversion of low pass filtered waveforms is a homogenized model, i.e. the same as the model computed by upscaling the true model. Here we propose a stochastic method to sample the ensemble of layered models equivalent to a given tomographic profile. We use a transdimensional formulation where the number of layers is variable. Furthermore, each layer may be either isotropic (1 parameter) or intrinsically anisotropic (2 parameters). The parsimonious character of the Bayesian inversion gives preference to models with the fewest parameters (i.e. the fewest layers and the maximum number of isotropic layers). The non-uniqueness of the problem can be addressed by adding high frequency data such as receiver functions, able to map first order discontinuities. We show with synthetic tests that this method enables us to distinguish between intrinsic and apparent anisotropy in tomographic models, as layers with intrinsic anisotropy are only present when required by the data. A real data example is presented based on the latest global model produced at Berkeley.

  13. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  14. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  15. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
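
    A minimal sketch of an Lq-Lp objective and its gradient is given below, using a smoothed absolute value so the gradient is defined at zero and a linear operator in place of the (nonlinear) simplified-spherical-harmonics forward model. The parameter values and the toy gradient-descent loop are illustrative assumptions; the paper itself uses an lm-BFGS quasi-Newton solver.

    ```python
    import numpy as np

    def lq_lp_cost_and_grad(x, A, y, q=1.5, p=1.0, lam=1e-2, eps=1e-8):
        """Smoothed Lq data-discrepancy / Lp regularization cost and gradient.

        cost(x) = sum |A x - y|_eps^q + lam * sum |x|_eps^p,
        with |t|_eps = sqrt(t^2 + eps) keeping the gradient defined at zero.
        A is a linear stand-in for the nonlinear fluorescence forward model."""
        r = A @ x - y
        r_abs = np.sqrt(r**2 + eps)
        x_abs = np.sqrt(x**2 + eps)
        cost = np.sum(r_abs**q) + lam * np.sum(x_abs**p)
        grad_data = A.T @ (q * r_abs**(q - 2) * r)    # d/dr |r|_eps^q = q |r|_eps^(q-2) r
        grad_reg = lam * p * x_abs**(p - 2) * x
        return cost, grad_data + grad_reg

    rng = np.random.default_rng(5)
    A = rng.normal(size=(30, 50))
    x_true = np.zeros(50)
    x_true[[5, 17, 33]] = [1.0, -2.0, 1.5]            # sparse target
    y = A @ x_true
    x = np.zeros(50)
    for _ in range(2000):                             # plain gradient descent for illustration
        cost, grad = lq_lp_cost_and_grad(x, A, y)
        x -= 1e-3 * grad
    print("final cost: %.4f" % cost, " x at true support:", np.round(x[[5, 17, 33]], 2))
    ```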

  16. Direct integration of the inverse Radon equation for X-ray computed tomography.

    PubMed

    Libin, E E; Chakhlov, S V; Trinca, D

    2016-11-22

    A new mathematical approach using the inverse Radon equation for the restoration of images in linear two-dimensional x-ray tomography problems is formulated. The approach does not use the Fourier transform, which makes it possible to create practical computing algorithms with a more reliable mathematical foundation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.

  17. Tomographic inversion of satellite photometry. II

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1985-01-01

    A method for combining nadir observations of emission features in the upper atmosphere with the result of a tomographic inversion of limb brightness measurements is presented. Simulated and actual results are provided, and error sensitivity is investigated.

  18. Finite frequency shear wave splitting tomography: a model space search approach

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Long, M. D.

    2017-12-01

    Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common approach to characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ˜50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
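
    For the model space search component, the sketch below runs a plain Metropolis-Hastings random walk over three parameters fit to a toy nonlinear forward model. The forward function, noise level, and step size are invented stand-ins, not the Born-approximation splitting-intensity kernels or the parameterization used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Toy nonlinear forward model standing in for splitting-intensity predictions:
    # three parameters (strength, dip, azimuth) observed through a few
    # nonlinear "measurements". Purely illustrative.
    def forward(m):
        strength, dip, azimuth = m
        angles = np.linspace(0.0, np.pi, 8)
        return strength * np.sin(2.0 * (angles - azimuth)) + dip * np.cos(angles)

    m_true = np.array([1.2, 0.3, 0.8])
    sigma = 0.05
    d_obs = forward(m_true) + sigma * rng.normal(size=8)

    def log_likelihood(m):
        r = forward(m) - d_obs
        return -0.5 * np.sum(r**2) / sigma**2

    # Metropolis-Hastings random walk over the three anisotropy parameters.
    m = np.array([0.5, 0.0, 0.0])
    ll = log_likelihood(m)
    samples = []
    for _ in range(20000):
        m_prop = m + 0.05 * rng.normal(size=3)
        ll_prop = log_likelihood(m_prop)
        if np.log(rng.uniform()) < ll_prop - ll:      # accept / reject
            m, ll = m_prop, ll_prop
        samples.append(m.copy())

    samples = np.array(samples[5000:])                # discard burn-in
    print("posterior mean:", np.round(samples.mean(axis=0), 3), " true:", m_true)
    ```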

  19. Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined later arrival times

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao

    2016-04-01

    Traditional traveltime inversion for anisotropic media is, in general, based on a "weak anisotropy" assumption, which simplifies both the forward part (ray tracing is performed once only) and the inversion part (a linear inversion solver is possible). But for some real applications, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm that can handle general (including "strong") anisotropic media and also design a non-linear inversion solver for the subsequent tomography. Meanwhile, it is constructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. Motivated by this, we incorporated our newly developed ray tracing algorithm (multistage irregular shortest-path method) for general anisotropic media with a non-linear inversion solver (a damped minimum norm, constrained least squares problem with a conjugate gradient approach) to formulate a non-linear inversion scheme for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover high-contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, but not as much as would be expected from the isotropic case, because the sensitivities (or derivatives) of the different velocities (qP, qSV and qSH) with respect to the different elastic parameters are not the same and also depend on the inclination angle.

  20. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
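
    To make the structure of such a gradient-like scheme concrete, the sketch below runs its classical, non-learned counterpart: an iteration driven by the gradients of a data discrepancy and a smoothed regularizer. In the paper these two gradients are fed to a trained convolutional network that outputs the update; here a fixed weighted combination is used instead, and the linear operator and 1D total-variation-like penalty are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Linear toy operator standing in for the (possibly nonlinear) tomographic
    # forward operator; the learned component is replaced by a handcrafted update
    # so the sketch stays self-contained and runnable.
    A = rng.normal(size=(60, 100)) / 10.0
    x_true = np.zeros(100)
    x_true[40:60] = 1.0
    y = A @ x_true + 0.01 * rng.normal(size=60)

    def grad_data(x):
        return A.T @ (A @ x - y)                      # gradient of 0.5 ||A x - y||^2

    def grad_reg(x, eps=1e-2):
        # gradient of a smoothed 1D total-variation-like penalty sum sqrt((dx)^2 + eps)
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)
        g = np.zeros_like(x)
        g[:-1] -= w
        g[1:] += w
        return g

    x = np.zeros(100)
    step, lam = 0.02, 0.5
    for _ in range(500):
        # The learned scheme would let a network map (x, grad_data, grad_reg) to an
        # update; here we simply take a plain weighted gradient step.
        x = x - step * (grad_data(x) + lam * grad_reg(x))

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```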

  1. Optimization-Based Approach for Joint X-Ray Fluorescence and Transmission Tomographic Inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Zichao; Leyffer, Sven; Wild, Stefan M.

    2016-01-01

    Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used for revealing the internal elemental composition of a sample. On the other hand, conventional X-ray transmission tomography can be used for reconstructing the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate both X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstruction of the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that compared with the single-modality inversion (i.e., the X-ray transmission or fluorescence alone), the joint inversion provides a better-posed problem, which implies a better recovery. Therefore, the challenges in X-ray fluorescence tomography arising mainly from the effects of self-absorption in the sample are partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.

  2. A Methodology to Separate and Analyze a Seismic Wide Angle Profile

    NASA Astrophysics Data System (ADS)

    Weinzierl, Wolfgang; Kopp, Heidrun

    2010-05-01

    General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach of defining an a priori space in a tomographic study and retrieving the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential for any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities, we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile, we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem, and the subsequent resolution of structural heterogeneities through a divergence analysis, is a new and simple way of defining the a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of velocity-depth distributions. Certainly, the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble limits the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity-depth distributions. Using the wide angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract key features and properties from the inversion results by including information concerning the confidence level of the results.

  3. Ambient Noise Interferometry and Surface Wave Array Tomography: Promises and Problems

    NASA Astrophysics Data System (ADS)

    van der Hilst, R. D.; Yao, H.; de Hoop, M. V.; Campman, X.; Solna, K.

    2008-12-01

    In the late 1990s most seismologists would have frowned at the possibility of doing high-resolution surface wave tomography with noise instead of with signal associated with ballistic source-receiver propagation. Some may still do, but surface wave tomography with Green's functions estimated through ambient noise interferometry ('sourceless tomography') has transformed from a curiosity into one of the (almost) standard tools for analysis of data from dense seismograph arrays. Indeed, spectacular applications of ambient noise surface wave tomography have recently been published. For example, application to data from arrays in SE Tibet revealed structures in the crust beneath the Tibetan plateau that could not be resolved by traditional tomography (Yao et al., GJI, 2006, 2008). While the approach is conceptually simple, in application the proverbial devil is in the detail. Full reconstruction of the Green's function requires that the wavefields used are diffusive and that ambient noise energy is evenly distributed in the spatial dimensions of interest. In the field, these conditions are not usually met, and (frequency dependent) non-uniformity of the noise sources may lead to incomplete reconstruction of the Green's function. Furthermore, ambient noise distributions can be time-dependent, and seasonal variations have been documented. Naive use of empirical Green's functions may produce (unknown) bias in the tomographic models. The degrading effect on EGFs of the directionality of the noise distribution poses particular challenges for applications beyond isotropic surface wave inversions, such as inversions for (azimuthal) anisotropy and attempts to use higher modes (or body waves). Incomplete Green's function reconstruction can (probably) not be prevented, but it may be possible to reduce the problem and - at least - understand the degree of incomplete reconstruction and prevent it from degrading the tomographic model. We will present examples of Rayleigh wave inversions and discuss strategies to mitigate effects of incomplete Green's function reconstruction on tomographic images.

  4. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Z; Terry, N; Hubbard, S S

    2013-02-12

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability distribution functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  5. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.

    2013-02-22

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  6. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    NASA Astrophysics Data System (ADS)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

    Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, where the contrived model contains anomalies of different scales but the ray path distribution is uneven, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show similar features to the results with the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful for achieving spatially varying resolution when used in regions with heterogeneous ray path distribution.

  7. Evaluation of reconstruction errors and identification of artefacts for JET gamma and neutron tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk; Tiseanu, Ion; Zoita, Vasile

    The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region that enables tomographic reconstruction. Due to the availability of only two projection angles and to the coarse sampling, tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced in the reconstruction process for tackling the limited data set (projection resampling, smoothness regularization depending on magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of reconstructions and the accuracy of the variance calculation have been identified.

  8. DIRECT OBSERVATION OF SOLAR CORONAL MAGNETIC FIELDS BY VECTOR TOMOGRAPHY OF THE CORONAL EMISSION LINE POLARIZATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramar, M.; Lin, H.; Tomczyk, S., E-mail: kramar@cua.edu, E-mail: lin@ifa.hawaii.edu, E-mail: tomczyk@ucar.edu

    We present the first direct “observation” of the global-scale, 3D coronal magnetic fields of Carrington Rotation (CR) Cycle 2112 using vector tomographic inversion techniques. The vector tomographic inversion uses measurements of the Fe xiii 10747 Å Hanle effect polarization signals by the Coronal Multichannel Polarimeter (CoMP) and 3D coronal density and temperature derived from scalar tomographic inversion of Solar Terrestrial Relations Observatory (STEREO)/Extreme Ultraviolet Imager (EUVI) coronal emission lines (CELs) intensity images as inputs to derive a coronal magnetic field model that best reproduces the observed polarization signals. While independent verifications of the vector tomography results cannot be performed, we compared the tomography inverted coronal magnetic fields with those constructed by magnetohydrodynamic (MHD) simulations based on observed photospheric magnetic fields of CR 2112 and 2113. We found that the MHD model for CR 2112 is qualitatively consistent with the tomography inverted result for most of the reconstruction domain except for several regions. Particularly, for one of the most noticeable regions, we found that the MHD simulation for CR 2113 predicted a model that more closely resembles the vector tomography inverted magnetic fields. In another case, our tomographic reconstruction predicted an open magnetic field at a region where a coronal hole can be seen directly from a STEREO-B/EUVI image. We discuss the utilities and limitations of the tomographic inversion technique, and present ideas for future developments.

  9. Applications of Electrical Impedance Tomography (EIT): A Short Review

    NASA Astrophysics Data System (ADS)

    Kanti Bera, Tushar

    2018-03-01

    Electrical Impedance Tomography (EIT) is a tomographic imaging method which solves an ill-posed inverse problem using the boundary voltage-current data collected from the surface of the object under test. Though its spatial resolution is low compared to conventional tomographic imaging modalities, EIT offers several advantages and has been studied for a number of applications such as medical imaging, material engineering, civil engineering, biotechnology, chemical engineering, MEMS, and other fields of engineering and applied sciences. In this paper, the applications of EIT are reviewed and presented as a short summary. The working principle, instrumentation, and advantages are briefly discussed, followed by a detailed discussion of the applications of EIT technology in different areas of engineering, technology, and applied sciences.

  10. Advanced Ultrasonic Tomograph of Children's Bones

    NASA Astrophysics Data System (ADS)

    Lasaygues, Philippe; Lefebvre, Jean-Pierre; Guillermin, Régine; Kaftandjian, Valérie; Berteau, Jean-Philippe; Pithioux, Martine; Petit, Philippe

    This study deals with the development of an experimental device for performing ultrasonic computed tomography (UCT) on bone in pediatric cases. The children's bone tomographs obtained in this study were based on the use of a multiplexed 2-D ring antenna (1 MHz and 3 MHz) designed for performing electronic and mechanical scanning. Although this approach is known to be a potentially valuable means of imaging objects with similar acoustical impedances, problems arise when quantitative images of more highly contrasted media such as bones are required. Various strategies and various mathematical procedures for modeling the wave propagation based on Born approximations have been developed at our laboratory, which are suitable for use with pediatric cases. Inversions of the experimental data obtained are presented.

  11. Tomography and the Herglotz-Wiechert inverse formulation

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.

    1990-04-01

    In this paper, linearized tomography and the Herglotz-Wiechert inverse formulation are compared. Tomographic inversions for 2-D or 3-D velocity structure use line integrals along rays and can be written in terms of Radon transforms. For radially concentric structures, Radon transforms are shown to reduce to Abel transforms. Therefore, for straight ray paths, the Abel transform of travel-time is a tomographic algorithm specialized to a one-dimensional radially concentric medium. The Herglotz-Wiechert formulation uses seismic travel-time data to invert for one-dimensional earth structure and is derived using exact ray trajectories by applying an Abel transform. This is of historical interest since it would imply that a specialized tomographic-like algorithm has been used in seismology since the early part of the century (see Herglotz, 1907; Wiechert, 1910). Numerical examples are performed comparing the Herglotz-Wiechert algorithm and linearized tomography along straight rays. Since the Herglotz-Wiechert algorithm is applicable, under specific conditions (the absence of low velocity zones), to non-straight ray paths, the association with tomography may prove to be useful in assessing the uniqueness of tomographic results generalized to curved ray geometries.
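
    The Abel-transform relationship mentioned above can be sketched numerically: for a radially concentric medium and straight rays, the projection is the Abel transform of the radial profile, and the profile can be recovered with Abel's inversion formula. The Gaussian test profile and the change of variables used to remove the square-root singularity are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def trapz(y, x):
        """Simple trapezoidal rule, kept local for self-containment."""
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    R = 1.0
    r_grid = np.linspace(0.0, R, 400)
    f_true = np.exp(-(r_grid / 0.3) ** 2)            # synthetic radial profile f(r)

    def abel_forward(x):
        # g(x) = 2 * integral_0^sqrt(R^2-x^2) f(sqrt(x^2 + t^2)) dt  (singularity-free form)
        t = np.linspace(0.0, np.sqrt(max(R**2 - x**2, 0.0)), 400)
        return 2.0 * trapz(np.interp(np.sqrt(x**2 + t**2), r_grid, f_true), t)

    x_grid = np.linspace(0.0, R, 400)
    g = np.array([abel_forward(x) for x in x_grid])  # straight-ray projection of f

    dg = np.gradient(g, x_grid)                      # numerical derivative of the projection

    def abel_inverse(r):
        # f(r) = -(1/pi) * integral_r^R g'(x) dx / sqrt(x^2 - r^2),
        # evaluated with x = sqrt(r^2 + u^2) so the integrand stays finite.
        u = np.linspace(0.0, np.sqrt(max(R**2 - r**2, 0.0)), 400)
        x = np.sqrt(r**2 + u**2)
        return -trapz(np.interp(x, x_grid, dg) / x, u) / np.pi

    f_rec = np.array([abel_inverse(r) for r in r_grid[1:]])   # skip r = 0 to avoid 1/x at the origin
    print("max |f_rec - f_true|:", np.max(np.abs(f_rec - f_true[1:])))
    ```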

  12. Maximum likelihood bolometric tomography for the determination of the uncertainties in the radiation emission on JET TOKAMAK

    NASA Astrophysics Data System (ADS)

    Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors

    2018-05-01

    The total emission of radiation is a crucial quantity to calculate the power balances and to understand the physics of any Tokamak. Bolometric systems are the main tool to measure this important physical quantity through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty to determine routinely the confidence intervals in the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.
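
    A minimal sketch of a maximum-likelihood (MLEM) iteration for Poisson-distributed line-integrated data is shown below. The random geometry matrix stands in for the JET bolometer lines of sight, and the confidence-interval machinery and phantom-based validation of the paper are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy emission problem y ~ Poisson(A x); A is a random stand-in for the
    # two-camera bolometer geometry (illustrative only).
    n_det, n_pix = 48, 100
    A = rng.random(size=(n_det, n_pix))
    x_true = np.abs(rng.normal(size=n_pix)) * 50.0
    y = rng.poisson(A @ x_true).astype(float)

    def mlem(A, y, n_iter=200):
        """Maximum-likelihood EM iteration for Poisson data:
        x <- x * A^T (y / (A x)) / (A^T 1)."""
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                 # per-pixel sensitivity A^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / sens
        return x

    x_rec = mlem(A, y)
    rel_misfit = np.linalg.norm(A @ x_rec - y) / np.linalg.norm(y)
    print("relative data misfit after MLEM: %.3f" % rel_misfit)
    ```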

  13. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  14. A multiresolution inversion for imaging the ionosphere

    NASA Astrophysics Data System (ADS)

    Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo

    2017-06-01

    Ionospheric tomography has been widely employed in imaging large-scale ionospheric structures at both quiet and storm times. However, the tomographic algorithms to date have not been very effective in imaging medium- and small-scale ionospheric structures due to limitations of uneven ground-based data distributions and the algorithms themselves. Further, the effect of the density and quantity of Global Navigation Satellite Systems data that could help improve the tomographic results for a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using intensive ground GPS observation data and is demonstrated over the U.S. West Coast during 16-18 March 2015, a period which includes an ionospheric storm. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different data distributions are grouped from the same data source in order to investigate the impact of the density of ground stations on ionospheric tomography results. Finally, it is concluded that the multipass inversion approach offers an improvement. The ground data density can affect tomographic results but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.

  15. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    PubMed Central

    Rahim, Ruzairi Abdul; Chen, Leong Lai; San, Chan Kok; Rahiman, Mohd Hafiz Fazalul; Fea, Pang Jon

    2009-01-01

    This paper explains in detail the solution to the forward and inverse problems faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed, and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image. PMID:22291523

  16. Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution

    DTIC Science & Technology

    1989-11-01

    …to image larger scenes, this problem becomes more important. A byproduct of this investigation is a duality theorem which is a generalization of the… well-known Projection-Slice Theorem. The second problem proposed is that of imaging a rapidly-spinning object, for example in inverse SAR mode… slices is absent. There is a possible connection of the word to the Projection-Slice Theorem, but, as seen in Chapter 4, even this is absent in the…

  17. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The minimizer of the variational regularization functional can be computed iteratively with a primal-dual fixed point algorithm that uses a soft-thresholding operation. Choosing the soft-thresholding parameter …
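
    To make the soft-thresholding idea concrete, the sketch below runs an iterative soft-thresholding (ISTA-type) loop with an orthonormal wavelet transform. It assumes PyWavelets for the transform, a dense projection matrix A, image sides that are powers of two (so the wavelet round trip preserves the shape), and illustrative values for the step size and threshold; it is not the parameter-choice rule studied in the paper.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def soft_threshold_wavelets(image, mu, wavelet="haar", level=2):
    """Soft-threshold the detail coefficients of an orthonormal wavelet transform."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    shrunk = [coeffs[0]]                                   # keep the approximation band
    for detail in coeffs[1:]:
        shrunk.append(tuple(pywt.threshold(d, mu, mode="soft") for d in detail))
    return pywt.waverec2(shrunk, wavelet)

def ista_reconstruction(A, y, shape, mu, step, n_iter=200):
    """Minimise ||A x - y||^2 + mu ||W x||_1 by iterative soft-thresholding."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad = (A.T @ (A @ x.ravel() - y)).reshape(shape)  # gradient of the data term
        x = soft_threshold_wavelets(x - step * grad, step * mu)
    return x
```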

  18. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  19. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  20. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.

  1. 2D first break tomographic processing of data measured for celebration profiles: CEL01, CEL04, CEL05, CEL06, CEL09, CEL11

    NASA Astrophysics Data System (ADS)

    Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group

    2003-04-01

    This contribution reports preliminary results from the first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were acquired within the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the individual processing steps and the program parameters are described. Characteristic data for all profiles (shot points, geophone points, total profile length, sampling, sensors and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of a starting velocity field, for which arrival times are calculated by a finite-difference method. The next step is the minimization of the differences between the measured and modelled arrival times until the deviation is small. The equivalence problem was also mitigated by including a priori information in the starting velocity field. This a priori information consists of the depth to the pre-Tertiary basement, estimates of the velocity of the overlying sediments from well logging and/or other seismic velocity data, etc. After checking the reciprocal times, the picks were corrected; the final result of this processing is a reliable set of travel-time curves consistent with the reciprocal times. Picking of the travel-time curves and enhancement of the signal-to-noise ratio of the seismograms were carried out with the PROMAX processing system. The tomographic inversion was performed with a so-called 3D/2D procedure that takes 3D wave propagation into account: a corridor along the profile, containing the outlying shot points and geophone points, was defined and 3D processing was carried out within this corridor. The preliminary results indicate seismically anomalous zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European Platform, the Pannonian Basin and the Bohemian Massif.

  2. Surface wave tomography applied to the North American upper mantle

    NASA Astrophysics Data System (ADS)

    van der Lee, Suzan; Frederiksen, Andrew

    Tomographic techniques that invert seismic surface waves for 3-D Earth structure differ in their definitions of data and the forward problem as well as in the parameterization of the tomographic model. However, all such techniques have in common that the tomographic inverse problem involves solving a large and mixed-determined set of linear equations. Consequently, these inverse problems have multiple solutions and inherently undefinable accuracy. Smoother and rougher tomographic models are found with rougher (confined to great circle path) and smoother (finite-width) sensitivity kernels, respectively. A powerful, well-tested method of surface wave tomography (Partitioned Waveform Inversion) is based on inverting the waveforms of wave trains comprising regional S and surface waves from at least hundreds of seismograms for 3-D variations in S wave velocity. We apply this method to nearly 1400 seismograms recorded by digital broadband seismic stations in North America. The new 3-D S-velocity model, NA04, is consistent with previous findings that are based on separate, overlapping data sets. The merging of US and Canadian data sets, adding Canadian recordings of Mexican earthquakes, and combining fundamental-mode with higher-mode waveforms provides superior resolution, in particular in the US-Canada border region and the deep upper mantle. NA04 shows that 1) the Atlantic upper mantle is seismically faster than the Pacific upper mantle, 2) the uppermost mantle beneath Precambrian North America could be one and a half times as rigid as the upper mantle beneath Meso- and Cenozoic North America, with the upper mantle beneath Paleozoic North America being intermediate in seismic rigidity, 3) upper-mantle structure varies laterally within these geologic-age domains, and 4) the distribution of high-velocity anomalies in the deep upper mantle aligns with lower mantle images of the subducted Farallon and Kula plates and indicates that trailing fragments of these subducted oceanic plates still reside in the transition zone. The thickness of the high-velocity layer beneath Precambrian North America is estimated to be 250±70 km. On a smaller scale, NA04 shows 1) high velocities associated with subduction of the Pacific plate beneath the Aleutian arc, 2) the absence of expected high velocities in the upper mantle beneath the Wyoming craton, 3) a V-shaped dent below 150 km in the high-velocity cratonic lithosphere beneath New England, 4) the cratonic lithosphere beneath Precambrian North America being confined southwest of Baffin Bay, west of the Appalachians, north of the Ouachitas, east of the Rocky Mountains, and south of the Arctic Ocean, 5) the cratonic lithosphere beneath the Canadian shield having higher S-velocities than that beneath Precambrian basement that is covered with Phanerozoic sediments, and 6) the lowest S velocities being concentrated beneath the Gulf of California, northern Mexico, and the Basin and Range Province.
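
    The trade-off between data fit and model norm in such a large, mixed-determined linear system can be illustrated with damped least squares. The sketch below uses a random sparse matrix as a stand-in for real sensitivity kernels; the sizes, sparsity and damping values are arbitrary assumptions, not the NA04 data set.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
G = sparse_random(1400, 4000, density=0.01, format="csr", random_state=0)  # stand-in kernels
m_true = np.zeros(4000)
m_true[1500:1600] = 0.05                        # a localized velocity perturbation
d = G @ m_true + 1e-4 * rng.normal(size=1400)   # synthetic data with noise

# Damped least squares: minimise ||G m - d||^2 + damp^2 ||m||^2.  A larger damp
# gives a smaller, smoother model that fits the data less well, which is one way
# to explore the non-uniqueness of a mixed-determined system.
for damp in (0.01, 0.1, 1.0):
    m_est = lsqr(G, d, damp=damp)[0]
    print(f"damp={damp}: misfit={np.linalg.norm(G @ m_est - d):.3e}, "
          f"model norm={np.linalg.norm(m_est):.3e}")
```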

  3. Wavefield simulations of earthquakes in Alaska for tomographic inversion

    NASA Astrophysics Data System (ADS)

    Silwal, V.; Tape, C.; Casarotti, E.

    2017-12-01

    We assemble a catalog of moment tensors and a three-dimensional seismic velocity model for mainland Alaska, in preparation for an iterative tomographic inversion using spectral-element and adjoint methods. The catalog contains approximately 200 earthquakes with Mw ≥ 4.0 that generate good long-period (≥6 s) signals for stations at distances up to approximately 500 km. To maximize the fraction of usable stations per earthquake, we divide our model into three subregions for simulations: south-central Alaska, central Alaska, and eastern Alaska. The primary geometrical interfaces in the model are the Moho surface, the basement surface of major sedimentary basins, and the topographic surface. The crustal and upper mantle tomographic model is from Eberhart-Phillips et al. (2006), but modified by removing the uppermost slow layer, then embedding sedimentary basin models for Cook Inlet basin, Susitna basin, and Nenana basin. We compute 3D synthetic seismograms using the spectral-element method. We demonstrate the accuracy of the initial three-dimensional reference model in each subregion by comparing 3D synthetics with observed data for several earthquakes originating in the crust and the underlying subducting slab. Full waveform similarity between data and synthetics over the period range 6 s to 30 s provides a basis for an iterative inversion. The target resolution of the crustal structure is 4 km vertically and 20 km laterally. We use surface wave and body wave measurements from local earthquakes to obtain moment tensors that will be used within our tomographic inversion. Local slab events down to 180 km depth, in addition to pervasive crustal seismicity, should enhance resolution.

  4. Tomographic diffractive microscopy with agile illuminations for imaging targets in a noisy background.

    PubMed

    Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K

    2015-02-15

    Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and conventional tomographic reconstruction procedures. The targets can be further characterized with a selective quantitative inversion.
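
    The record does not detail how the singular value decomposition is applied to the holograms; one common way to use it, sketched below, is to arrange the multi-illumination measurements as a detectors-by-illuminations matrix and keep only the strongest singular components before inversion. The matrix layout and the number of retained components are assumptions for illustration.

```python
import numpy as np

def svd_filter(K, n_keep=2):
    """Keep only the strongest singular components of a multi-illumination data matrix.

    K : complex array (n_detectors, n_illuminations) of measured fields.
    n_keep : number of singular components assumed to belong to the targets.
    """
    U, s, Vh = np.linalg.svd(K, full_matrices=False)
    s_filtered = np.zeros_like(s)
    s_filtered[:n_keep] = s[:n_keep]    # discard the noise/background subspace
    return (U * s_filtered) @ Vh        # filtered data, ready for the usual inversion
```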

  5. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is a key requirement of tomographic applications for robust interpretation. However, this information is often missing in large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method; instead, the quality of a result is often assessed through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks and surveys, the increase in computational power and the increasingly systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows that address the uncertainty quantification problem for near-surface targets and crustal exploration as well as at regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach that takes advantage of the Ensemble Transform Kalman Filter (ETKF) of Bishop et al. (2001) to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information about the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategy. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with sampling of the local solution space, made practical by the embarrassingly parallel nature of the ensemble. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.

  6. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sano, Ryuichi; Iwama, Naofumi; Peterson, Byron J.

    A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed under a helical periodicity assumption for the purpose of plasma radiation measurement in the Large Helical Device. For the spatial inversion of large-sized arrays, the system has been examined numerically and experimentally using Tikhonov regularization with the criterion of minimum generalized cross-validation, a standard solver for inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is suitable for 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.
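
    For readers unfamiliar with the regularization criterion mentioned here, the sketch below shows a generic zeroth-order Tikhonov inversion in which the regularization weight is picked by minimizing the generalized cross-validation (GCV) function. It operates on a small dense matrix with non-zero singular values and is only a schematic stand-in, not the LHD imaging system.

```python
import numpy as np

def tikhonov_gcv(A, b, lams=np.logspace(-6, 2, 60)):
    """Zeroth-order Tikhonov solution with lambda chosen by minimum GCV."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # assumes no zero singular values
    beta = U.T @ b
    resid0 = b @ b - beta @ beta        # data component outside the range of A
    best = None
    for lam in lams:
        f = s**2 / (s**2 + lam**2)      # Tikhonov filter factors
        rss = np.sum(((1.0 - f) * beta) ** 2) + resid0
        gcv = rss / (A.shape[0] - np.sum(f)) ** 2
        if best is None or gcv < best[0]:
            x = Vt.T @ (f / s * beta)   # regularized solution for this lambda
            best = (gcv, lam, x)
    return best[1], best[2]             # chosen lambda and the corresponding solution
```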

  7. Creating realistic models and resolution assessment in tomographic inversion of wide-angle active seismic profiling data

    NASA Astrophysics Data System (ADS)

    Stupina, T.; Koulakov, I.; Kopp, H.

    2009-04-01

    We consider questions of creating structural models and resolution assessment in tomographic inversion of wide-angle active seismic profiling data. For our investigations, we use the PROFIT (Profile Forward and Inverse Tomographic modeling) algorithm which was tested earlier with different datasets. Here we consider offshore seismic profiling data from three areas (Chile, Java and Central Pacific). Two of the study areas are characterized by subduction zones whereas the third data set covers a seamount province. We have explored different algorithmic issues concerning the quality of the solution, such as (1) resolution assessment using different sizes and complexity of synthetic anomalies; (2) grid spacing effects; (3) amplitude damping and smoothing; (4) criteria for rejection of outliers; (5) quantitative criteria for comparing models. Having determined optimal algorithmic parameters for the observed seismic profiling data we have created structural synthetic models which reproduce the results of the observed data inversion. For the Chilean and Java subduction zones our results show similar patterns: a relatively thin sediment layer on the oceanic plate, thicker inhomogeneous sediments in the overlying plate and a large area of very strong low velocity anomalies in the accretionary wedge. For two seamounts in the Pacific we observe high velocity anomalies in the crust which can be interpreted as frozen channels inside the dormant volcano cones. Along both profiles we obtain considerable crustal thickening beneath the seamounts.

  8. Research in Image Understanding as Applied to 3-D Microwave Tomographic Imaging with Near Optical Resolution.

    DTIC Science & Technology

    1986-03-10

    …and P. Frangos, "Inverse Scattering for Dielectric Media", Annual OSA Meeting, Wash. D.C., Oct. 1985. Invited Presentations: 1. N. Farhat, "Tomographic… Optical Computing", DARPA Briefing, April 1985. … Theses: P.V. Frangos, "The Electromagnetic…

  9. High resolution seismic tomography imaging of Ireland with quarry blast data

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Lebedev, S.; Bean, C. J.; Grannell, J.

    2017-12-01

    Local earthquake tomography is a well-established tool to image geological structure at depth. That technique, however, is difficult to apply in slowly deforming regions, where local earthquakes are typically rare and of small magnitude, resulting in sparse data sampling. The natural earthquake seismicity of Ireland is very low. Seismicity due to quarry and mining blasts, on the other hand, is high and homogeneously distributed. As a consequence, and thanks to the dense and nearly uniform coverage achieved in the past ten years by temporary and permanent broadband seismological stations, the quarry blasts offer an alternative approach for high resolution seismic imaging of the crust and uppermost mantle beneath Ireland. We detected about 1,500 quarry blasts in Ireland and Northern Ireland between 2011 and 2014, for which we manually picked more than 15,000 P- and 20,000 S-wave first arrival times. The anthropogenic, explosive origin of those events was unambiguously established based on location, occurrence time and waveform characteristics. Here, we present a preliminary 3D tomographic model obtained from the inversion of 3,800 P-wave arrival times associated with a subset of 500 events observed in 2011, using the FMTOMO tomographic code. Forward modeling is performed with the Fast Marching Method (FMM) and the inverse problem is solved iteratively using a gradient-based subspace inversion scheme after careful selection of damping and smoothing regularization parameters. The results illuminate the geological structure of Ireland from deposit to crustal scale in unprecedented detail, as demonstrated by sensitivity analysis, source relocation with the 3D velocity model and comparisons with surface geology.

  10. Classification of JET Neutron and Gamma Emissivity Profiles

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Kiptily, V.; Vega, J.; Contributors, JET

    2016-05-01

    In thermonuclear plasmas, emission tomography uses integrated measurements along lines of sight (LOS) to determine the two-dimensional (2-D) spatial distribution of the volume emission intensity. Due to the availability of only a limited number of views and to the coarse sampling of the LOS, the tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET. In certain experimental conditions, the availability of LOSs is restricted to a single view. In this case, an explicit reconstruction of the emissivity profile is no longer possible. However, machine learning classification methods can be used in order to derive the type of the distribution. In the present approach, the classification is developed using the theory of belief functions, which provides a framework for fusing the results of independent clustering and supervised classification. The method makes it possible to represent the uncertainty of the results provided by the different independent techniques, to combine them, and to manage possible conflicts.

  11. Probability density of spatially distributed soil moisture inferred from crosshole georadar traveltime measurements

    NASA Astrophysics Data System (ADS)

    Linde, N.; Vrugt, J. A.

    2009-04-01

    Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior distribution of spatially distributed radar slowness and water content between boreholes, given first-arrival traveltimes. The method used is DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with a snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treating geophysical parameter uncertainty and inferring hydrologic models.
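
    DREAM_ZS itself is a fairly elaborate adaptive sampler; as a much simpler illustration of the same Bayesian idea, the sketch below runs a plain random-walk Metropolis chain on a linear traveltime problem with Gaussian data errors and a broad prior. The forward matrix, starting slowness, step size and chain length are assumptions, and this is not the algorithm used in the study.

```python
import numpy as np

def metropolis_traveltime(G, t_obs, sigma, n_steps=20000, step=2e-4, seed=0):
    """Random-walk Metropolis sampling of slowness from first-arrival traveltimes.

    Linear forward model t = G s with Gaussian errors of standard deviation sigma.
    Returns the chain of slowness samples; discard an initial burn-in before use.
    """
    rng = np.random.default_rng(seed)
    s = np.full(G.shape[1], 0.01)                         # starting slowness (s/m), assumed
    loglike = lambda m: -0.5 * np.sum(((G @ m - t_obs) / sigma) ** 2)
    ll = loglike(s)
    samples = []
    for _ in range(n_steps):
        proposal = s + step * rng.normal(size=s.size)     # random-walk proposal
        ll_prop = loglike(proposal)
        if np.log(rng.uniform()) < ll_prop - ll:          # Metropolis accept/reject
            s, ll = proposal, ll_prop
        samples.append(s.copy())
    return np.array(samples)
```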

  12. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

    We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, we find that the nature of the SGR strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.

  13. Tomographic Imaging of the Seismic Structure Beneath the East Anatolian Plateau, Eastern Turkey

    NASA Astrophysics Data System (ADS)

    Gökalp, Hüseyin

    2012-10-01

    The high level of seismic activity in eastern Turkey is thought to be mainly associated with the continuing collision of the Arabian and Eurasian tectonic plates. The determination of a detailed three-dimensional (3D) structure is crucial for a better understanding of this ongoing collision or subduction process; therefore, a body-wave tomographic inversion technique was applied to the region. The tomographic inversion used high-quality arrival times from earthquakes occurring in the region from 1999 to 2001, recorded by a temporary 29-station broadband IRIS-PASSCAL array operated by research groups from the Universities of Boğaziçi (Turkey) and Cornell (USA). The inverted data consisted of 3,114 P- and 2,298 S-wave arrival times from 252 local events with magnitudes (MD) ranging from 2.5 to 4.8. The stability and resolution of the results were qualitatively assessed by two synthetic tests, a spike test and a checkerboard resolution test, and it was found that the models were well resolved for most parts of the imaged domain. The tomographic inversion results reveal significant lateral heterogeneities in the study area to a depth of ~20 km. The P- and S-wave velocity models are consistent with each other and provide evidence for marked heterogeneities in the upper crustal structure beneath eastern Turkey. One of the most important features in the acquired tomographic images is the high-velocity anomalies at shallow depths, which are generally parallel to the main tectonic units in the region. This may relate to the existence of ophiolitic units at shallow depths. The other feature is that low velocities are widely dispersed through the 3D structure beneath the region at deeper crustal depths. This feature can be an indicator of mantle upwelling, or may support the hypothesis that the Anatolian Plateau is underlain by a partially molten uppermost mantle.

  14. Mantle Circulation Models with variational data assimilation: Inferring past mantle flow and structure from plate motion histories and seismic tomography

    NASA Astrophysics Data System (ADS)

    Bunge, H.; Hagelberg, C.; Travis, B.

    2002-12-01

    EarthScope will deliver data on structure and dynamics of continental North America and the underlying mantle on an unprecedented scale. Indeed, the scope of EarthScope makes its mission comparable to the large remote sensing efforts that are transforming the oceanographic and atmospheric sciences today. Arguably the main impact of new solid Earth observing systems is to transform our use of geodynamic models increasingly from conditions that are data poor to an environment that is data rich. Oceanographers and meteorologists have already made substantial progress in adapting to this environment, by developing new approaches for interpreting oceanographic and atmospheric data objectively through data assimilation methods in their models. However, a similarly rigorous theoretical framework for merging EarthScope-derived solid Earth data with geodynamic models has yet to be devised. Here we explore the feasibility of data assimilation in mantle convection studies in an attempt to fit global geodynamic model calculations explicitly to tomographic and tectonic constraints. This is an inverse problem not unlike the inverse problem of finding optimal seismic velocity structures faced by seismologists. We derive the generalized inverse of mantle convection from a variational approach and present the adjoint equations of mantle flow. The substantial computational burden associated with solutions to the generalized inverse problem of mantle convection is made feasible using a highly efficient finite element approach based on the 3-D spherical fully parallelized mantle dynamics code TERRA, implemented on a cost-effective PC-cluster (geowulf) dedicated specifically to large-scale geophysical simulations. This dedicated geophysical modeling computer allows us to investigate global inverse convection problems having a spatial discretization of less than 50 km throughout the mantle. We present a synthetic high-resolution modeling experiment to demonstrate that mid-Cretaceous mantle structure can be inferred accurately from our inverse approach assuming present-day mantle structure is well-known, even if an initial first-guess assumption about the mid-Cretaceous mantle involved only a simple 1-D radial temperature profile. We suggest that geodynamic inverse modeling should make it possible to infer a number of flow parameters from observational constraints of the mantle.

  15. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  16. Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale

    2012-10-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  17. Tomographic reconstruction of tokamak plasma light emission from single image using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.

    2012-01-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  18. Crustal Structure Beneath Taiwan Using Frequency-band Inversion of Receiver Function Waveforms

    NASA Astrophysics Data System (ADS)

    Tomfohrde, D. A.; Nowack, R. L.

    Receiver function analysis is used to determine local crustal structure beneath Taiwan. We have performed preliminary data processing and polarization analysis for the selection of stations and events and to increase overall data quality. Receiver function analysis is then applied to data from the Taiwan Seismic Network to obtain radial and transverse receiver functions. Due to the limited azimuthal coverage, only the radial receiver functions are analyzed in terms of horizontally layered crustal structure for each station. In order to improve convergence of the receiver function inversion, frequency-band inversion (FBI) is implemented, in which an iterative inversion procedure with sequentially higher low-pass corner frequencies is used to stabilize the waveform inversion. Frequency-band inversion is applied to receiver functions at six stations of the Taiwan Seismic Network. Initial 20-layer crustal models are inverted for, using prior tomographic results as the starting models. The resulting 20-layer models are then simplified to 4- to 5-layer models and input into an alternating depth and velocity frequency-band inversion. For the six stations investigated, the resulting simplified models provide an average estimate of 38 km for the Moho thickness surrounding the Central Range of Taiwan. Also, the individual station estimates compare well with the recent tomographic model of Rau and Wu (1995) and the refraction results of Ma and Song (1997).

  19. Frequency-domain optical tomographic image reconstruction algorithm with the simplified spherical harmonics (SP3) light propagation model.

    PubMed

    Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H

    2017-06-01

    We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd-order absorption coefficient (i.e., FD-SP3) is used as the forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and sizes that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. This work therefore shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, in terms of both computational time and accuracy, as it requires significantly less CPU time than the FD-ERT (S12) while being more accurate than the FD-SP1.

  20. Pseudodynamic systems approach based on a quadratic approximation of update equations for diffuse optical tomography.

    PubMed

    Biswas, Samir Kumar; Kanhirodan, Rajan; Vasu, Ram Mohan; Roy, Debasish

    2011-08-01

    We explore a pseudodynamic form of the quadratic parameter update equation for diffuse optical tomographic reconstruction from noisy data. A few explicit and implicit strategies for obtaining the parameter updates via a semianalytical integration of the pseudodynamic equations are proposed. Despite the ill-posedness of the inverse problem associated with diffuse optical tomography, adoption of the quadratic update scheme combined with the pseudotime integration appears to yield not only faster convergence but also a muted sensitivity to the regularization parameters, which include the pseudotime step size for integration. These observations are validated through reconstructions with both numerically generated and experimentally acquired data.

  1. Validation of Spherically Symmetric Inversion by Use of a Tomographically Reconstructed Three-Dimensional Electron Density of the Solar Corona

    NASA Technical Reports Server (NTRS)

    Wang, Tongjiang; Davila, Joseph M.

    2014-01-01

    Determining the coronal electron density by the inversion of white-light polarized brightness (pB) measurements by coronagraphs is a classic problem in solar physics. An inversion technique based on the spherically symmetric geometry (spherically symmetric inversion, SSI) was developed in the 1950s and has been widely applied to interpret various observations. However, to date there has been no study of the uncertainty estimation for this method. Here we present a detailed assessment of this method using a three-dimensional (3D) electron density in the corona from 1.5 to 4 solar radii as a model, which is reconstructed by a tomography method from STEREO/COR1 observations during the solar minimum in February 2008 (Carrington Rotation, CR 2066). We first show in theory and observation that the spherically symmetric polynomial approximation (SSPA) method and the Van de Hulst inversion technique are equivalent. Then we assess the SSPA method using synthesized pB images from the 3D density model, and find that the SSPA density values are close to the model inputs for the streamer core near the plane of the sky (POS), with differences generally smaller than about a factor of two; the former has the lower peak but extends more in both longitudinal and latitudinal directions than the latter. We estimate that the SSPA method may resolve the coronal density structure near the POS with an angular resolution in longitude of about 50 deg. Our results confirm the suggestion that the SSI method is applicable to the solar minimum streamer (belt), as stated in some previous studies. In addition, we demonstrate that the SSPA method can be used to reconstruct the 3D coronal density, roughly in agreement with the reconstruction by tomography for a period of low solar activity (CR 2066). We suggest that the SSI method is complementary to the 3D tomographic technique in some cases, given that the development of the latter is still an ongoing research effort.

  2. Anisotropic S-wave velocity structure from joint inversion of surface wave group velocity dispersion: A case study from India

    NASA Astrophysics Data System (ADS)

    Mitra, S.; Dey, S.; Siddartha, G.; Bhattacharya, S.

    2016-12-01

    We estimate 1-dimensional path average fundamental mode group velocity dispersion curves from regional Rayleigh and Love waves sampling the Indian subcontinent. The path average measurements are combined through a tomographic inversion to obtain 2-dimensional group velocity variation maps between periods of 10 and 80 s. The region of study is parametrised as triangular grids with 1° sides for the tomographic inversion. Rayleigh and Love wave dispersion curves from each node point are subsequently extracted and jointly inverted to obtain a radially anisotropic shear wave velocity model through global optimisation using a genetic algorithm. The parametrization of the model space is done using three crustal layers and four mantle layers over a half-space with varying VpH, VsV and VsH. The anisotropic parameter (η) is calculated from empirical relations and the densities of the layers are taken from PREM. Misfit for the model is calculated as a sum of error-weighted average dispersion curves. The 1-dimensional anisotropic shear wave velocity models at each node point are combined using linear interpolation to obtain the 3-dimensional structure beneath the region. Synthetic tests are performed to estimate the resolution of the tomographic maps, which will be presented with our results. We plan to extend this to a larger dataset in the near future to obtain high-resolution anisotropic shear wave velocity structure beneath India, the Himalaya and Tibet.

  3. Object-based inversion of crosswell radar tomography data to monitor vegetable oil injection experiments

    USGS Publications Warehouse

    Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.

    2004-01-01

    Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions. A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.
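
    A toy version of such an object-based parameterization is sketched below: each layer is represented by a single rectangle described by a left edge, a right edge and a slowness change, the forward model is a deliberately crude stub that accumulates delay over the overlap between a horizontal ray and the rectangle in its layer, and the constrained fit uses a bounded nonlinear least-squares solver. The ray description, bounds and stub physics are assumptions for illustration, not the USGS implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def predict_dt(params, rays):
    """Stub forward model: rays are (layer_index, x_start, x_end) horizontal segments;
    layer k has parameters (left_k, right_k, slowness_change_k)."""
    dt = np.zeros(len(rays))
    for i, (k, x0, x1) in enumerate(rays):
        left, right, ds = params[3 * k: 3 * k + 3]
        dt[i] = ds * max(0.0, min(right, x1) - max(left, x0))  # delay over the overlap
    return dt

def invert_objects(dt_obs, rays, n_layers, x_min, x_max):
    """Fit rectangle edges and slowness changes to observed differential traveltimes."""
    p0 = np.tile([0.75 * x_min + 0.25 * x_max,          # feasible starting guess
                  0.25 * x_min + 0.75 * x_max, 0.1], n_layers)
    lb = np.tile([x_min, x_min, 0.0], n_layers)         # rectangles stay inside the panel,
    ub = np.tile([x_max, x_max, 5.0], n_layers)         # slowness change kept non-negative
    fit = least_squares(lambda p: predict_dt(p, rays) - dt_obs, p0, bounds=(lb, ub))
    return fit.x.reshape(n_layers, 3)                   # (left, right, ds) per layer
```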

  4. Validation of Special Sensor Ultraviolet Limb Imager (SSULI) Ionospheric Tomography using ALTAIR Incoherent Scatter Radar Measurements

    NASA Astrophysics Data System (ADS)

    Dymond, K.; Nicholas, A. C.; Budzien, S. A.; Stephan, A. W.; Coker, C.; Hei, M. A.; Groves, K. M.

    2015-12-01

    The Special Sensor Ultraviolet Limb Imager (SSULI) instruments are ultraviolet limb scanning sensors flying on the Defense Meteorological Satellite Program (DMSP) satellites. The SSULIs observe the 80-170 nanometer wavelength range covering emissions at 91 and 136 nm, which are produced by radiative recombination of the ionosphere. We invert these emissions tomographically using newly developed algorithms that include optical depth effects due to pure absorption and resonant scattering. We present the details of our approach including how the optimal altitude and along-track sampling were determined and the newly developed approach we are using for regularizing the SSULI tomographic inversions. Finally, we conclude with validations of the SSULI inversions against ALTAIR incoherent scatter radar measurements and demonstrate excellent agreement between the measurements.

  5. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual-Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of the radiative transfer equation. The accurate analytical form of this solution provides an efficient formalism for fast computation of the forward model.

  6. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distribution in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distribution in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to effectively find the optimal smoothness criterion in applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.

  7. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    USGS Publications Warehouse

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.

  8. Cyberinfrastructure for the Unified Study of Earth Structure and Earthquake Sources in Complex Geologic Environments

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Chen, P.; Jordan, T. H.; Olsen, K. B.; Maechling, P.; Faerman, M.

    2004-12-01

    The Southern California Earthquake Center (SCEC) is developing a Community Modeling Environment (CME) to facilitate the computational pathways of physics-based seismic hazard analysis (Maechling et al., this meeting). Major goals are to facilitate the forward modeling of seismic wavefields in complex geologic environments, including the strong ground motions that cause earthquake damage, and the inversion of observed waveform data for improved models of Earth structure and fault rupture. Here we report on a unified approach to these coupled inverse problems that is based on the ability to generate and manipulate wavefields in densely gridded 3D Earth models. A main element of this approach is a database of receiver Green tensors (RGT) for the seismic stations, which comprises all of the spatial-temporal displacement fields produced by the three orthogonal unit impulsive point forces acting at each of the station locations. Once the RGT database is established, synthetic seismograms for any earthquake can be simply calculated by extracting a small, source-centered volume of the RGT from the database and applying the reciprocity principle. The partial derivatives needed for point- and finite-source inversions can be generated in the same way. Moreover, the RGT database can be employed in full-wave tomographic inversions launched from a 3D starting model, because the sensitivity (Fréchet) kernels for travel-time and amplitude anomalies observed at seismic stations in the database can be computed by convolving the earthquake-induced displacement field with the station RGTs. We illustrate all elements of this unified analysis with an RGT database for 33 stations of the California Integrated Seismic Network in and around the Los Angeles Basin, which we computed for the 3D SCEC Community Velocity Model (SCEC CVM3.0) using a fourth-order staggered-grid finite-difference code. For a spatial grid spacing of 200 m and a time resolution of 10 ms, the calculations took ~19,000 node-hours on the Linux cluster at USC's High-Performance Computing Center. The 33-station database with a volume of ~23.5 TB was archived in the SCEC digital library at the San Diego Supercomputer Center using the Storage Resource Broker (SRB). From a laptop, anyone with access to this SRB collection can compute synthetic seismograms for an arbitrary source in the CVM in a matter of minutes. Efficient approaches have been implemented to use this RGT database in the inversions of waveforms for centroid and finite moment tensors and tomographic inversions to improve the CVM. Our experience with these large problems suggests areas where the cyberinfrastructure currently available for geoscience computation needs to be improved.

  9. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    PubMed Central

    Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael

    2010-01-01

    Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years: the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and to provide recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research through the eyes of practicing medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. PMID:20376330

  10. TOPICAL REVIEW: Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    NASA Astrophysics Data System (ADS)

    Pan, Xiaochuan; Sidky, Emil Y.; Vannier, Michael

    2009-12-01

    Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research through the eyes of practical medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact CT applications if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues.

  11. Can we go From Tomographically Determined Seismic Velocities to Composition? Amplitude Resolution Issues in Local Earthquake Tomography

    NASA Astrophysics Data System (ADS)

    Wagner, L.

    2007-12-01

    There have been a number of recent papers (e.g., Lee (2003), James et al. (2004), Hacker and Abers (2004), Schutt and Lesher (2006)) that calculate predicted velocities for xenolith compositions at mantle pressures and temperatures. It is tempting, therefore, to attempt to go the other way: to use tomographically determined absolute velocities to constrain mantle composition. However, if absolute velocities are to be analyzed this closely, it is vital to accurately constrain not only the polarity of the velocity deviations (i.e., fast vs. slow) but also their amplitude relative to the starting model (how much faster or slower). While much attention has been given to issues concerning spatial resolution in seismic tomography (what areas are fast, what areas are slow), little attention has been directed at the issue of amplitude resolution (how fast, how slow). Velocity deviation amplitudes in seismic tomography are heavily influenced by the amount of regularization used and the number of iterations performed. Determining these two parameters is a difficult and little-discussed problem. I explore the effect of these two parameters on the amplitudes obtained from the tomographic inversion of the Chile Argentina Geophysical Experiment (CHARGE) dataset, and attempt to determine a reasonable solution space for the low-Vp, high-Vs, low-Vp/Vs anomaly found above the flat slab in central Chile. I then compare this solution space to the range of experimentally determined velocities for peridotite end-members to evaluate our ability to constrain composition using tomographically determined seismic velocities. I find that, in general, it will be difficult to constrain the compositions of normal mantle peridotites using tomographically determined velocities, but that in the unusual case of the anomaly above the flat slab, the observed velocity structure has an anomalously high S-wave velocity and low Vp/Vs ratio that is most consistent with enstatite and inconsistent with the predicted velocities of known mantle xenoliths.

  12. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.

  13. Tomographic imaging of Central Java, Indonesia: Preliminary result of joint inversion of the MERAMEX and MCGA earthquake data

    NASA Astrophysics Data System (ADS)

    Rohadi, Supriyanto; Widiyantoro, Sri; Nugraha, Andri Dian; Masturyono

    2013-09-01

    Local earthquake tomography is usually carried out after removing distant events outside the study region, because these events may increase errors. In this study, tomographic inversion has been conducted using the travel time data of both local and regional events in order to improve the structural resolution, especially for deep structures. We used the local MERapi Amphibious EXperiments (MERAMEX) data catalog, which consists of 292 events from May to October 2004. Additional data for regional events in the Java region were taken from the Meteorological, Climatological, and Geophysical Agency (MCGA) of Indonesia, consisting of 882 events, each having at least 10 recorded phases at the seismographic stations, from April 2009 to February 2011. We have conducted joint inversions of the combined data sets using double-difference tomography to invert for velocity structure and relocate hypocenters simultaneously. The checkerboard test results for the Vp and Vs structures demonstrate a significantly improved spatial resolution from the shallow crust down to a depth of 165 km. Our tomographic inversions reveal a low velocity anomaly beneath the Lawu - Merapi zone, which is consistent with the results of previous studies. A strong anomaly zone with low Vp, low Vs and low Vp/Vs is also identified between Cilacap and Banyumas. We interpret this anomaly as fluid-rich material with a large aspect ratio, or as a sediment layer. This anomaly zone is in good agreement with the existence of a large sediment-filled dome in this area, as proposed by previous geological studies. A low velocity anomaly zone is also detected in Kebumen, where it may be related to the extension of the oceanic basin toward the land.

  14. Tomographic imaging of Central Java, Indonesia: Preliminary result of joint inversion of the MERAMEX and MCGA earthquake data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohadi, Supriyanto; Widiyantoro, Sri; Nugraha, Andri Dian

    Local earthquake tomography is usually carried out after removing distant events outside the study region, because these events may increase errors. In this study, tomographic inversion has been conducted using the travel time data of both local and regional events in order to improve the structural resolution, especially for deep structures. We used the local MERapi Amphibious EXperiments (MERAMEX) data catalog, which consists of 292 events from May to October 2004. Additional data for regional events in the Java region were taken from the Meteorological, Climatological, and Geophysical Agency (MCGA) of Indonesia, consisting of 882 events, each having at least 10 recorded phases at the seismographic stations, from April 2009 to February 2011. We have conducted joint inversions of the combined data sets using double-difference tomography to invert for velocity structure and relocate hypocenters simultaneously. The checkerboard test results for the Vp and Vs structures demonstrate a significantly improved spatial resolution from the shallow crust down to a depth of 165 km. Our tomographic inversions reveal a low velocity anomaly beneath the Lawu - Merapi zone, which is consistent with the results of previous studies. A strong anomaly zone with low Vp, low Vs and low Vp/Vs is also identified between Cilacap and Banyumas. We interpret this anomaly as fluid-rich material with a large aspect ratio, or as a sediment layer. This anomaly zone is in good agreement with the existence of a large sediment-filled dome in this area, as proposed by previous geological studies. A low velocity anomaly zone is also detected in Kebumen, where it may be related to the extension of the oceanic basin toward the land.

  15. Assessing the resolution-dependent utility of tomograms for geostatistics

    USGS Publications Warehouse

    Day-Lewis, F. D.; Lane, J.W.

    2004-01-01

    Geophysical tomograms are used increasingly as auxiliary data for geostatistical modeling of aquifer and reservoir properties. The correlation between tomographic estimates and hydrogeologic properties is commonly based on laboratory measurements, co-located measurements at boreholes, or petrophysical models. The inferred correlation is assumed uniform throughout the interwell region; however, tomographic resolution varies spatially due to acquisition geometry, regularization, data error, and the physics underlying the geophysical measurements. Blurring and inversion artifacts are expected in regions traversed by few or only low-angle raypaths. In the context of radar traveltime tomography, we derive analytical models for (1) the variance of tomographic estimates, (2) the spatially variable correlation with a hydrologic parameter of interest, and (3) the spatial covariance of tomographic estimates. Synthetic examples demonstrate that tomograms of qualitative value may have limited utility for geostatistics; moreover, the imprint of regularization may preclude inference of meaningful spatial statistics from tomograms.

  16. Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.; Li, Cuiping

    The inversion of seismic travel-time data for radially varying media was initially investigated by Herglotz, Wiechert, and Bateman (the HWB method) in the early part of the 20th century [1]. Tomographic inversions for laterally varying media began in seismology starting in the 1970’s. This included early work by Aki, Christoffersson, and Husebye who developed an inversion technique for estimating lithospheric structure beneath a seismic array from distant earthquakes (the ACH method) [2]. Also, Alekseev and others in Russia performed early inversions of refraction data for laterally varying upper mantle structure [3]. Aki and Lee [4] developed an inversion technique using travel-time data from local earthquakes.

  17. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data; the resulting resistivity tomogram was used as the prior information for nonlinear inversion of the time-lapse data. We assigned 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. The mean and standard deviation of CO₂ saturation were then calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and on inversion constraints such as temporal roughness. Five hundred realizations, requiring 3.5 h on a single 12-core node, were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, whereas a Markov chain Monte Carlo (MCMC) stochastic inverse approach may require days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
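
    The following is a minimal sketch of the parametric-bootstrap workflow described above, using a toy scalar forward model rather than the Cranfield ERT problem; the forward function, noise level and grid-search inversion are illustrative assumptions.

```python
# Parametric bootstrap, schematically: estimate the noise level, fit the model
# once to the observed data, then repeatedly resample synthetic data from the
# fitted model plus noise and re-invert deterministically. The spread of the
# re-inverted parameters is the uncertainty estimate.
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 1.0, 50)

def forward(sat):
    """Toy nonlinear forward model standing in for the ERT response."""
    return 1.0 / (0.2 + sat * np.exp(-depth))

def invert(d):
    """Brute-force 'inversion': grid search minimizing the data misfit."""
    trials = np.linspace(0.0, 1.0, 501)
    misfits = [np.sum((forward(s) - d) ** 2) for s in trials]
    return trials[int(np.argmin(misfits))]

sat_true, noise_std = 0.30, 0.03           # noise level as if from reciprocals
d_obs = forward(sat_true) + noise_std * rng.normal(size=depth.size)

sat_hat = invert(d_obs)                    # deterministic best-fit estimate
boot = [invert(forward(sat_hat) + noise_std * rng.normal(size=depth.size))
        for _ in range(500)]               # 500 bootstrap realizations

print("bootstrap mean:", np.mean(boot))
print("bootstrap std (uncertainty):", np.std(boot))
```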

  18. Electrical resistance tomography from measurements inside a steel cased borehole

    DOEpatents

    Daily, William D.; Schenkel, Clifford; Ramirez, Abelardo L.

    2000-01-01

    Electrical resistance tomography (ERT) is produced from measurements taken inside a steel-cased borehole. A tomographic inversion of electrical resistance measurements made within the steel casing is then performed in order to image the electrical resistivity distribution in the formation remote from the borehole. The ERT method combines electrical resistance measurements made inside the steel casing of a borehole, to determine the electrical resistivity in the formation adjacent to the borehole, with the inversion of electrical resistance measurements made from a borehole not cased with an electrically conducting casing, to determine the electrical resistivity distribution remote from the borehole. It has been demonstrated that, by using these combined techniques, highly accurate current injection and voltage measurements made at appropriate points within the casing can be tomographically inverted to yield useful information outside the borehole casing.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Zichao; Leyffer, Sven; Wild, Stefan M.

    Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used for revealing the internal elemental composition of a sample. On the other hand, conventional X-ray transmission tomography can be used for reconstructing the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate both X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstruction of the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that compared with the single-modality inversion (i.e., the X-ray transmission or fluorescence alone), the joint inversion provides a better-posed problem, which implies a better recovery. Therefore, the challenges in X-ray fluorescence tomography arising mainly from the effects of self-absorption in the sample are partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.

  20. GPS water vapor project associated to the ESCOMPTE programme: description and first results of the field experiment

    NASA Astrophysics Data System (ADS)

    Bock, O.; Doerflinger, E.; Masson, F.; Walpersdorf, A.; Van-Baelen, J.; Tarniewicz, J.; Troller, M.; Somieski, A.; Geiger, A.; Bürki, B.

    A dense network of 17 dual frequency GPS receivers was operated for two weeks during June 2001 within a 20 km × 20 km area around Marseille, France, as part of the ESCOMPTE field campaign ([Cros et al., 2004. The ESCOMPTE program: an overview. Atmos. Res. 69, 241-279]; http://medias.obs-mip.fr/escompte). The goal of this GPS experiment was to provide GPS data allowing for tomographic inversions and their validation within a well-documented observing period (the ESCOMPTE campaign). Simultaneous water vapor radiometer, solar spectrometer, Raman lidar and radiosonde data are used for comparison and validation. In this paper, we highlight the motivation and issues and describe the GPS field experiment. Some first results of integrated water vapor retrievals from GPS and the other sensing techniques are presented. The strategies for GPS data processing and tomographic inversions are discussed.

  1. Fast tomographic methods for the tokamak ISTTOK

    NASA Astrophysics Data System (ADS)

    Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.

    2008-04-01

    The achievement of long duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. Plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing the performance and reliability of the results. It has been found that, although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.

  2. East Pacific Rise axial structure from a joint tomographic inversion of traveltimes picked on downward continued and standard shot gathers collected by 3D MCS surveying

    NASA Astrophysics Data System (ADS)

    Newman, Kori; Nedimović, Mladen; Delescluse, Matthias; Menke, William; Canales, J. Pablo; Carbotte, Suzanne; Carton, Helene; Mutter, John

    2010-05-01

    We present traveltime tomographic models along closely spaced (~250 m), strike-parallel profiles that flank the axis of the East Pacific Rise at 9°41' - 9°57' N. The data were collected during a 3D (multi-streamer) multichannel seismic (MCS) survey of the ridge. Four 6-km long hydrophone streamers were towed by the ship along three along-axis sail lines, yielding twelve possible profiles over which to compute tomographic models. Based on the relative location between source-receiver midpoints and targeted subsurface structures, we have chosen to compute models for four of those lines. MCS data provide for a high density of seismic ray paths with which to constrain the model. Potentially, travel times for ~250,000 source-receiver pairs can be picked over the 30 km length of each model. However, such data density does not enhance the model resolution, so, for computational efficiency, the data are decimated so that ~15,000 picks per profile are used. Downward continuation of the shot gathers simulates an experimental geometry in which the sources and receivers are positioned just above the sea floor. This allows the shallowest sampling refracted arrivals to be picked and incorporated into the inversion whereas they would otherwise not be usable with traditional first-arrival travel-time tomographic techniques. Some of the far-offset deep-penetrating 2B refractions cannot be picked on the downward continued gathers due to signal processing artifacts. For this reason, we run a joint inversion by also including 2B traveltime picks from standard shot gathers. Uppermost velocity structure (seismic layer 2A thickness and velocity) is primarily constrained from 1D inversion of the nearest offset (<500 m) source-receiver travel-time picks for each downward continued shot gather. Deeper velocities are then computed in a joint 2D inversion that uses all picks from standard and downward continued shot gathers and incorporates the 1D results into the starting model. The resulting velocity models extend ~1 km into the crust. Preliminary results show thicker layer 2A and faster layer 2A velocities at fourth order ridge segment boundaries. Additionally, layer 2A thickens north of 9° 52' N, which is consistent with earlier investigations of this ridge segment. Slower layer 2B velocities are resolved in the vicinity of documented hydrothermal vent fields. We anticipate that additional analyses of the results will yield further insight into fine scale variations in near-axis mid-ocean ridge structure.

  3. Using artificial neural networks (ANN) for open-loop tomography

    NASA Astrophysics Data System (ADS)

    Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus

    2011-09-01

    The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares-type matrix-vector multiplication (MVM) method in simulation and find that its tomographic error is similar to that of the MVM method. In changing conditions the tomographic error increases for MVM but remains constant with the ANN model, and no large matrix inversions are required.
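
    A minimal sketch of the ANN-versus-MVM comparison is given below, using a toy linear "atmosphere" and scikit-learn's MLPRegressor; the projection matrices, sizes and noise level are illustrative assumptions, not the authors' simulation setup.

```python
# Learn a mapping from off-axis wavefront measurements to an on-axis target
# vector and compare it with a linear least-squares (matrix-vector multiply,
# MVM-style) reconstructor. Everything below is a toy stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_modes, n_meas, n_train, n_test = 20, 60, 4000, 500

P = rng.normal(size=(n_meas, n_modes))     # off-axis projection of the turbulence
T = rng.normal(size=(n_modes, n_modes))    # on-axis (target) projection

def simulate(n):
    layer = rng.normal(size=(n, n_modes))                        # turbulence modes
    meas = layer @ P.T + 0.05 * rng.normal(size=(n, n_meas))     # noisy WFS data
    target = layer @ T.T                                         # on-axis phase
    return meas, target

Xtr, Ytr = simulate(n_train)
Xte, Yte = simulate(n_test)

# Linear reconstructor: least-squares fit of the target on the measurements
W = np.linalg.lstsq(Xtr, Ytr, rcond=None)[0]
err_mvm = np.mean((Xte @ W - Yte) ** 2)

# ANN reconstructor trained on the same data
ann = MLPRegressor(hidden_layer_sizes=(100,), max_iter=500, random_state=0)
ann.fit(Xtr, Ytr)
err_ann = np.mean((ann.predict(Xte) - Yte) ** 2)

print("MVM test MSE:", err_mvm, " ANN test MSE:", err_ann)
```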

  4. Tomographic Reconstruction from a Few Views: A Multi-Marginal Optimal Transport Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, I., E-mail: isabelle.abraham@cea.fr; Abraham, R., E-mail: romain.abraham@univ-orleans.fr; Bergounioux, M., E-mail: maitine.bergounioux@univ-orleans.fr

    2017-02-15

    In this article, we focus on tomographic reconstruction. The problem is to determine the shape of the interior interface using a tomographic approach while very few X-ray radiographs are performed. We use a multi-marginal optimal transport approach. Preliminary numerical results are presented.

  5. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  6. Seismic tomography of the southern California crust based on spectral-element and adjoint methods

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Maggi, Alessia; Tromp, Jeroen

    2010-01-01

    We iteratively improve a 3-D tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3-D model is provided by the Southern California Earthquake Center. The data set comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations. The new crustal model, m16, is described in terms of independent shear (VS) and bulk-sound (VB) wave speed variations. It exhibits strong heterogeneity, including local changes of +/-30 per cent with respect to the initial 3-D model. The model reveals several features that relate to geological observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  7. Adjoint Tomography of the Southern California Crust (Invited)

    NASA Astrophysics Data System (ADS)

    Tape, C.; Liu, Q.; Maggi, A.; Tromp, J.

    2009-12-01

    We iteratively improve a three-dimensional tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3D model is provided by the Southern California Earthquake Center. The dataset comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations and a total of 0.8 million CPU hours. The new crustal model, m16, is described in terms of independent shear (Vs) and bulk-sound (Vb) wavespeed variations. It exhibits strong heterogeneity, including local changes of ±30% with respect to the initial 3D model. The model reveals several features that relate to geologic observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  8. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
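
    As a baseline illustration only (it shows the diagonal preconditioner discussed above, not the paper's proposed shift-variant preconditioners), a preconditioned conjugate-gradient solver for a penalized weighted least-squares objective might look as follows; the random system matrix is a stand-in for a PET system model.

```python
# Preconditioned CG for  0.5*(y - Ax)^T W (y - Ax) + 0.5*beta*||x||^2,
# comparing no preconditioner with a diagonal preconditioner built from the
# Hessian diagonal. Sizes and matrices are illustrative only.
import numpy as np

rng = np.random.default_rng(6)
m, n = 300, 120
A = np.abs(rng.normal(size=(m, n)))
x_true = np.abs(rng.normal(size=n))
y = rng.poisson(A @ x_true).astype(float)          # Poisson-like emission data
W = np.diag(1.0 / np.maximum(y, 1.0))              # nonuniform statistical weights
beta = 0.1
H = A.T @ W @ A + beta * np.eye(n)                 # shift-variant Hessian
b = A.T @ W @ y

def pcg(H, b, Minv_diag, n_iter=50):
    """Preconditioned CG for H x = b with a diagonal preconditioner M^-1."""
    x = np.zeros_like(b)
    r = b - H @ x
    z = Minv_diag * r
    p = z.copy()
    for _ in range(n_iter):
        Hp = H @ p
        alpha = (r @ z) / (p @ Hp)
        x += alpha * p
        r_new = r - alpha * Hp
        z_new = Minv_diag * r_new
        beta_k = (r_new @ z_new) / (r @ z)
        p = z_new + beta_k * p
        r, z = r_new, z_new
    return x, np.linalg.norm(r)

x_plain, res_plain = pcg(H, b, np.ones(n))          # unpreconditioned CG
x_prec, res_prec = pcg(H, b, 1.0 / np.diag(H))      # diagonal preconditioner
print("residual after 50 iterations  plain:", res_plain, " diag-precond:", res_prec)
```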

  9. Double-Difference Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Orsvuran, R.; Bozdag, E.; Lei, W.; Tromp, J.

    2017-12-01

    The adjoint method allows us to incorporate full waveform simulations in inverse problems. Misfit functions play an important role in extracting the relevant information from seismic waveforms. In this study, our goal is to apply the Double-Difference (DD) methodology proposed by Yuan et al. (2016) to global adjoint tomography. Dense seismic networks, such as USArray, lead to higher-resolution seismic images underneath continents. However, the imbalanced distribution of stations and sources poses challenges in global ray coverage. We adapt double-difference multitaper measurements to global adjoint tomography. We normalize each DD measurement by its number of pairs, and if a measurement has no pair, as may frequently happen for data recorded at oceanic stations, classical multitaper measurements are used. As a result, the differential measurements and pair-wise weighting strategy help balance uneven global kernel coverage. Our initial experiments with minor- and major-arc surface waves show promising results, revealing more pronounced structure near dense networks while reducing the prominence of paths towards clusters of stations. We have started using this new measurement in global adjoint inversions, addressing azimuthal anisotropy in the upper mantle. Meanwhile, we are working on combining the double-difference approach with instantaneous phase measurements to emphasize contributions of scattered waves in global inversions and extending it to body waves. We will present our results and discuss challenges and future directions in the context of global tomographic inversions.
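
    One plausible form of the pair-normalized double-difference traveltime misfit is sketched below; the notation and normalization are schematic and not necessarily the authors' exact definition.

```latex
% Schematic pair-normalized double-difference misfit: P(i) is the set of
% measurements paired with measurement i and N_i = |P(i)|; when N_i = 0 the
% classical single-measurement multitaper misfit is used instead.
\[
\chi = \frac{1}{2} \sum_{i} \frac{1}{N_i} \sum_{j \in P(i)}
\left[ \left( t_i^{\mathrm{obs}} - t_j^{\mathrm{obs}} \right)
     - \left( t_i^{\mathrm{syn}} - t_j^{\mathrm{syn}} \right) \right]^2 .
\]
```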

  10. Application of the GNSS-R in tomographic sounding of the Earth atmosphere

    NASA Astrophysics Data System (ADS)

    Jaberi Shafei, Milad; Mashhadi-Hossainali, Masoud

    2018-07-01

    Reflected GNSS signals offer a great opportunity for detecting and monitoring water level variations, land surface roughness and the atmosphere around the Earth. The type of application depends strongly on the satellite geometry and the topography of the study area. GNSS-R can be used to sound water vapor, one of the most important parameters in the troposphere. Because of its temporal and spatial variability, retrieval of this parameter is complicated. GNSS tomography is a common approach for this purpose. Given the dependence of this inverse approach on the number of stations and on the satellite coverage over the study area, tomographic reconstruction of water vapor is an ill-posed problem, and additional constraints are usually used to find a solution. In this research, reflected signals (GNSS-R) are proposed for the first time to resolve the rank deficiency of this problem. The approach has been implemented in a tomographic model previously developed for modeling water vapor in the northwest of Iran. Given the low number of GPS stations in this area, the design matrix of the model is rank deficient. Simulated results demonstrate that the rank deficiency of this matrix can be reduced by adding an appropriate number of GNSS-R stations when the spatial resolution of the model is optimized. The resolution matrix is used as a measure for analyzing the efficiency of the proposed method. Results from DOY 300 and 301 of 2011 show that the applied method can even remedy the rank deficiency of the design matrix; the satellite constellation and the time response of the model are the controlling parameters in this respect. On average, the rank deficiency of the design matrix is improved by more than 90% when the reflected signals are used, as is readily seen from the resolution matrix of the model. Here, the mean bias and RMSE of the reconstructed image are 0.2593 and 1.847 ppm, respectively.
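
    A minimal sketch of the resolution-matrix diagnostic mentioned above is given below for a generic linear system d = G m; the random design matrix and damping value are illustrative assumptions, not the GNSS geometry of the study.

```python
# For a linear tomographic system d = G m, the model resolution matrix is
# R = G^+ G, with G^+ a damped generalized inverse. Adding rows to G (standing
# in for extra GNSS-R observations) raises trace(R) toward the number of
# unknowns, i.e. it reduces the rank deficiency.
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 60

def resolution_trace(n_rays, damping=1e-2):
    G = rng.normal(size=(n_rays, n_voxels))           # stand-in design matrix
    # Damped least-squares inverse: G^+ = (G^T G + eps I)^-1 G^T
    Gplus = np.linalg.solve(G.T @ G + damping * np.eye(n_voxels), G.T)
    R = Gplus @ G
    return np.trace(R)                                 # effective-rank proxy

for n_rays in (20, 40, 80):
    print(n_rays, "observations -> trace(R) =", round(resolution_trace(n_rays), 1))
```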

  11. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation with the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time step of the numerical discretization. The present paper is the first to show that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example illustrating the characteristics of the discretized iterative methods is presented.
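
    A minimal sketch of a block-iterative MART update of the kind discussed above is given below, with the relaxation (scaling) parameter playing the role of the time step of the Euler discretization; the small random system and the step-size rule are illustrative assumptions.

```python
# Block-iterative MART for a nonnegative, consistent linear system A x = b:
# each block applies a multiplicative (geometric) update, and the relaxation
# parameter acts as the "time step" of the discretized continuous system.
import numpy as np

rng = np.random.default_rng(4)
m, n = 30, 20
A = rng.uniform(0.0, 1.0, size=(m, n))
x_true = rng.uniform(0.5, 2.0, size=n)
b = A @ x_true                                      # consistent, noise-free data

def bi_mart(A, b, n_blocks=3, relax=0.9, n_sweeps=300):
    x = np.ones(A.shape[1])                         # strictly positive start
    blocks = np.array_split(np.arange(A.shape[0]), n_blocks)
    for _ in range(n_sweeps):
        for idx in blocks:
            Ab, bb = A[idx], b[idx]
            step = relax / Ab.sum(axis=0).max()     # keep the "time step" stable
            ratio = bb / (Ab @ x)                   # measured / predicted projections
            x *= np.exp(step * (Ab.T @ np.log(ratio)))   # multiplicative update
    return x

x_rec = bi_mart(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```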

  12. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements can produce a very large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is therefore to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which reduces the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian combined with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of the sparse matrix reduction on the reconstruction results.
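
    The two ingredients described above can be sketched as follows (sizes, threshold and the random Jacobian are illustrative assumptions, and the row blocks are processed serially here rather than in parallel): thresholding the Jacobian into a sparse matrix, then solving the damped normal equations with a block-wise, matrix-free CG.

```python
# (1) Zero out small Jacobian entries and store the result as a sparse matrix.
# (2) Solve (J^T J + lam I) dx = J^T dv with CG, accumulating the
#     normal-equation products block by block over row blocks of J.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(5)
n_meas, n_elem = 2000, 800
J_dense = rng.normal(size=(n_meas, n_elem)) * np.exp(-5.0 * rng.random((n_meas, n_elem)))

# (1) Sparsify: drop entries below a fraction of the largest magnitude
thresh = 0.05 * np.abs(J_dense).max()
J = sp.csr_matrix(np.where(np.abs(J_dense) > thresh, J_dense, 0.0))
print("fraction of nonzeros kept:", J.nnz / J_dense.size)

# (2) Block-wise normal-equation products
row_blocks = [J[i:i + 500] for i in range(0, n_meas, 500)]
lam = 1e-2

def normal_matvec(x):
    y = lam * x
    for Jb in row_blocks:               # each block could run on its own core
        y += Jb.T @ (Jb @ x)
    return y

dv = rng.normal(size=n_meas)            # toy boundary-voltage change data
rhs = sum(Jb.T @ dv[i * 500:(i + 1) * 500] for i, Jb in enumerate(row_blocks))
A_op = LinearOperator((n_elem, n_elem), matvec=normal_matvec)
dx, info = cg(A_op, rhs, maxiter=200)
print("CG converged:", info == 0, " |dx| =", np.linalg.norm(dx))
```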

  13. Single photon emission computed tomography-guided Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie

    2012-07-01

    Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of the tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated with the experimental reconstruction of an adult athymic nude mouse implanted with a Na131I radioactive source and of an adult athymic nude mouse that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve two sources. Compared with the traditional PSR strategy, in which the PSR is determined from the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method is insensitive to the choice of regularization method and can ignore tissue heterogeneity, which avoids the organ segmentation procedure.

  14. Coupled Hydrogeophysical Inversion and Hydrogeological Data Fusion

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.; Schwede, R. L.; Li, W.

    2012-12-01

    Tomographic geophysical monitoring methods give the opportunity to observe hydrogeological tests at higher spatial resolution than is possible with classical hydraulic monitoring tools. This has been demonstrated in a substantial number of studies in which electrical resistivity tomography (ERT) has been used to monitor salt-tracer experiments. It is now accepted that inversion of such data sets requires a fully coupled framework, explicitly accounting for the hydraulic processes (groundwater flow and solute transport), the relationship between solute and geophysical properties (petrophysical relationships such as Archie's law), and the governing equations of the geophysical surveying techniques (e.g., the Poisson equation) as a consistent coupled system. These data sets can be amended with data from other, more direct, hydrogeological tests to infer the distribution of hydraulic aquifer parameters. In the inversion framework, meaningful condensation of data not only contributes to inversion efficiency but also increases the stability of the inversion. In particular, transient concentration data themselves only weakly depend on hydraulic conductivity, and model improvement using gradient-based methods is only possible when a substantial agreement between measurements and model output already exists. The latter also holds when concentrations are monitored by ERT. Tracer arrival times, by contrast, show high sensitivity and a more monotonic dependence on hydraulic conductivity than concentrations themselves. Thus, even without using temporal-moment generating equations, inverting travel times rather than concentrations or related geoelectrical signals themselves is advantageous. We have applied this approach to concentrations measured directly or via ERT, and to heat-tracer data. We present a consistent inversion framework including temporal moments of concentrations, geoelectrical signals obtained during salt-tracer tests, drawdown data from hydraulic tomography and flowmeter measurements to identify mainly the hydraulic-conductivity distribution. By stating the inversion as a geostatistical conditioning problem, we obtain parameter sets together with their correlated uncertainty. While we have applied the quasi-linear geostatistical approach as inverse kernel, other methods, such as ensemble Kalman methods, may suit the same purpose, particularly when many data points are to be included. In order to identify 3-D fields, discretized by about 50 million grid points, we use the high-performance-computing framework DUNE to solve the involved partial differential equations on a midrange computer cluster. We have quantified the worth of different data types in these inference problems. In practical applications, the constitutive relationships between geophysical, thermal, and hydraulic properties can pose a problem, requiring additional inversion. However, poorly constrained transient boundary conditions may put inversion efforts on larger (e.g., regional) scales even more into question. We envision that future hydrogeophysical inversion efforts will target boundary conditions, such as groundwater recharge rates, in conjunction with, or instead of, aquifer parameters. In this way, the distinction between data assimilation and parameter estimation will gradually vanish.

  15. Proxies of oceanic Lithosphere/Asthenosphere Boundary from Global Seismic Anisotropy Tomography

    NASA Astrophysics Data System (ADS)

    Burgos, Gael; Montagner, Jean-Paul; Beucler, Eric; Trampert, Jeannot; Capdeville, Yann

    2013-04-01

    Surface waves provide essential information on the global structure of the upper mantle despite their low lateral resolution. This study, based on surface wave data, presents the development of a new anisotropic tomographic model of the upper mantle, a simplified isotropic model, and the consequences of these results for the Lithosphere/Asthenosphere Boundary (LAB). As a first step, a large number of data are collected, merged and regionalized in order to derive maps of phase and group velocity for the fundamental mode of Rayleigh and Love waves and their azimuthal dependence (maps of phase velocity are also obtained for the first six overtones). As a second step, an a posteriori crustal model is developed from the Monte-Carlo inversion of the shorter periods of the dataset, in order to take into account the effect of the shallow layers on the upper mantle. With the crustal model, a first Monte-Carlo inversion for the upper mantle structure is performed with a simplified isotropic parameterization to highlight the influence of the LAB properties on the surface wave data. Still using the crustal model, a first-order perturbation theory inversion is performed with a fully anisotropic parameterization to build a 3-D tomographic model of the upper mantle (an extended model down to the transition zone is also obtained by using the overtone data). Estimates of the LAB depth are derived from the upper mantle models and compared with the predictions of oceanic lithosphere cooling models. Seismic events are simulated using the Spectral Element Method in order to validate the ability of the anisotropic tomographic model of the upper mantle to reproduce observed seismograms.

  16. Seismic tomographic imaging of P- and S-waves velocity perturbations in the upper mantle beneath Iran

    NASA Astrophysics Data System (ADS)

    Alinaghi, Alireza; Koulakov, Ivan; Thybo, Hans

    2007-06-01

    The inverse tomography method has been used to study the P- and S-wave velocity structure of the crust and upper mantle underneath Iran. The method, based on the principle of source-receiver reciprocity, allows for tomographic studies of regions with a sparse distribution of seismic stations if the region has sufficient seismicity. The arrival times of body waves from earthquakes in the study area, as reported in the ISC catalogue (1964-1996) at all available epicentral distances, are used for calculation of residual arrival times. Prior to inversion we have relocated hypocentres based on a 1-D spherical Earth model taking into account variable crustal thickness and surface topography. During the inversion, seismic sources are further relocated simultaneously with the calculation of velocity perturbations. With a series of synthetic tests we demonstrate the power of the algorithm and the data to reconstruct introduced anomalies using the ray paths of the real data set and taking into account the measurement errors and outliers. The velocity anomalies show that the crust and upper mantle beneath the Iranian Plateau comprise a low velocity domain between the Arabian Plate and the Caspian Block. This is in agreement with global tomographic models, and also tectonic models, in which the active Iranian Plateau is trapped between the stable Turan plate in the north and the Arabian shield in the south. Our results show clear evidence of the mainly aseismic subduction of the oceanic crust of the Oman Sea underneath the Iranian Plateau. However, along the Zagros suture zone, the subduction pattern is more complex than at Makran, where the collision of the two plates is highly seismic.

  17. GPS water vapour tomography: preliminary results from the ESCOMPTE field experiment

    NASA Astrophysics Data System (ADS)

    Champollion, C.; Masson, F.; Bouin, M.-N.; Walpersdorf, A.; Doerflinger, E.; Bock, O.; Van Baelen, J.

    2005-03-01

    Water vapour plays a major role in atmospheric processes but remains difficult to quantify due to its high variability in time and space and the sparse set of available measurements. The GPS has proved its capacity to measure the integrated water vapour at zenith with the same accuracy as other methods. Recent studies show that it is possible to quantify the integrated water vapour in the line of sight of the GPS satellite. These observations can be used to study the 3D heterogeneity of the troposphere using tomographic techniques. We develop three-dimensional tomographic software to model the three-dimensional distribution of the tropospheric water vapour from GPS data. First, the tomographic software is validated by simulations based on the realistic ESCOMPTE GPS network configuration. Without a priori information, the absolute value of water vapour is less well resolved than the relative horizontal variations. During the ESCOMPTE field experiment, a dense network of 17 dual frequency GPS receivers was operated for 2 weeks within a 20×20-km area around Marseille (southern France). The network extends from sea level to the top of the Etoile chain (~700 m high). Optimal results have been obtained with time windows of 30-min intervals and input data evaluation every 15 min. The optimal grid for the ESCOMPTE geometrical configuration has a horizontal step size of 0.05°×0.05° and a 500 m vertical step size. Second, we have compared the results of real data inversions with independent observations. Three inversions have been compared to three successive radiosonde launches and shown to be consistent. A good resolution compared to the a priori information is obtained up to heights of 3000 m. A humidity spike at 4000-m altitude remains unresolved. The reason is probably that the signal is spread homogeneously over the whole network and that such a feature is not resolvable by tomographic techniques. The results of our pure GPS inversion show a correlation with meteorological phenomena. Our measurements could be related to the land-sea breeze. Undoubtedly, tomography has some interesting potential for water vapour cycle studies at small temporal and spatial scales.

  18. GNSS-ISR data fusion: General framework with application to the high-latitude ionosphere

    NASA Astrophysics Data System (ADS)

    Semeter, Joshua; Hirsch, Michael; Lind, Frank; Coster, Anthea; Erickson, Philip; Pankratius, Victor

    2016-03-01

    A mathematical framework is presented for the fusion of electron density measured by incoherent scatter radar (ISR) and total electron content (TEC) measured using global navigation satellite systems (GNSS). Both measurements are treated as projections of an unknown density field (for GNSS-TEC the projection is tomographic; for ISR the projection is a weighted average over a local spatial region) and discrete inverse theory is applied to obtain a higher fidelity representation of the field than could be obtained from either modality individually. The specific implementation explored herein uses the interpolated ISR density field as initial guess to the combined inverse problem, which is subsequently solved using maximum entropy regularization. Simulations involving a dense meridional network of GNSS receivers near the Poker Flat ISR demonstrate the potential of this approach to resolve sub-beam structure in ISR measurements. Several future directions are outlined, including (1) data fusion using lower level (lag product) ISR data, (2) consideration of the different temporal sampling rates, (3) application of physics-based regularization, (4) consideration of nonoptimal observing geometries, and (5) use of an ISR simulation framework for optimal experiment design.

  19. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  20. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for a few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as an indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
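
    A minimal sketch of the random-probing idea is given below, with a small explicit matrix standing in for the matrix-free Hessian-vector products of full-waveform inversion; here the probes estimate the Hessian diagonal as a simple local resolution proxy.

```python
# Apply the Hessian to uncorrelated random test models and correlate input
# with output (a Hutchinson-type estimator) to obtain a cheap proxy for its
# diagonal. A small explicit SPD matrix stands in for the adjoint-based
# Hessian-vector products.
import numpy as np

rng = np.random.default_rng(7)
n = 200
B = rng.normal(size=(n, n)) / np.sqrt(n)
H = B @ B.T + 0.5 * np.eye(n)                   # symmetric positive-definite stand-in

def hessian_apply(v):
    """In FWI this would be one adjoint-based Hessian-vector product."""
    return H @ v

n_probe = 30                                    # a handful of extra "simulations"
diag_est = np.zeros(n)
for _ in range(n_probe):
    v = rng.choice([-1.0, 1.0], size=n)         # Rademacher random test model
    diag_est += v * hessian_apply(v)            # correlate input with output
diag_est /= n_probe

err = np.linalg.norm(diag_est - np.diag(H)) / np.linalg.norm(np.diag(H))
print("relative error of diagonal estimate:", err)
```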

  1. Solving the inverse scattering problem in reflection-mode dynamic speckle-field phase microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie; So, Peter T. C.; Yaqoob, Zahid; Jin, Di; Hosseini, Poorya; Kuang, Cuifang; Singh, Vijay Raj; Kim, Yang-Hyo; Dasari, Ramachandra R.

    2017-02-01

    Most quantitative phase microscopy systems are unable to provide depth-resolved information for measuring complex biological structures. Optical diffraction tomography provides a non-trivial solution by reconstructing the object in 3D from multiple measurements, which can be realized in different ways. Previously, our lab developed a reflection-mode dynamic speckle-field phase microscopy (DSPM) technique, which can be used to perform depth-resolved measurements in a single shot. Thus, this system is suitable for measuring dynamics in a layer of interest in the sample. DSPM can also be used for tomographic imaging, which promises to solve the long-standing "missing cone" problem in 3D imaging. However, the 3D imaging theory for this type of system has not been developed in the literature. Recently, we have developed an inverse scattering model to rigorously describe the imaging physics in DSPM. Our model is based on diffraction tomography theory and speckle statistics. Using our model, we first precisely calculated the defocus response and the depth resolution of our system. Then, we further calculated the 3D coherence transfer function to link the 3D object structural information with the axially scanned imaging data. From this transfer function, we found that in the reflection mode an excellent sectioning effect exists in the low lateral spatial frequency region, thus allowing us to solve the "missing cone" problem. Currently, we are working on using this coherence transfer function to reconstruct layered structures and complex cells.

  2. FOREWORD: Imaging from coupled physics Imaging from coupled physics

    NASA Astrophysics Data System (ADS)

    Arridge, S. R.; Scherzer, O.

    2012-08-01

    Due to the increased demand for tomographic imaging in applied sciences, such as medicine, biology and nondestructive testing, the field has expanded enormously in the past few decades. The common task of tomography is to image the interior of three-dimensional objects from indirect measurement data. In practical realizations, the specimen to be investigated is exposed to probing fields. A variety of these, such as acoustic, electromagnetic or thermal radiation, amongst others, have been advocated in the literature. In all cases, the field is measured after interaction with internal mechanisms of attenuation and/or scattering and images are reconstructed using inverse problems techniques, representing spatial maps of the parameters of these perturbation mechanisms. In the majority of these imaging modalities, either the useful contrast is of low resolution, or high resolution images are obtained with limited contrast or quantitative discriminatory ability. In the last decade, an alternative phenomenon has become of increasing interest, although its origins can be traced much further back; see Widlak and Scherzer [1], Kuchment and Steinhauer [2], and Seo et al [3] in this issue for references to this historical context. Rather than using the same physical field for probing and measurement, with a contrast caused by perturbation, these methods exploit the generation of a secondary physical field which can be measured in addition to, or without, the often dominating effect of the primary probe field. These techniques are variously called 'hybrid imaging' or 'multimodality imaging'. However, in this article and special section we suggest the term 'imaging from coupled physics' (ICP) to more clearly distinguish this methodology from those that simply measure several types of data simultaneously. The key idea is that contrast induced by one type of radiation is read by another kind, so that both high resolution and high contrast are obtained simultaneously. As with all new imaging techniques, the discovery of physical principles which can be exploited to yield information about internal physical parameters has led, hand in hand, to the development of new mathematical methods for solving the corresponding inverse problems. In many cases, the coupled physics imaging problems are expected to be much better posed than conventional tomographic imaging problems. However, still, at the current state of research, there exist a variety of open mathematical questions regarding uniqueness, existence and stability. In this special section we have invited contributions from many of the leading researchers in the mathematics, physics and engineering of these techniques to survey and to elaborate on these novel methodologies, and to present recent research directions. Historically, one of the best studied strongly ill-posed problems in the mathematical literature is the Calderón problem occurring in conductivity imaging, and one of the first examples of ICP is the use of magnetic resonance imaging (MRI) to detect internal current distributions. This topic, known as current density imaging (CDI) or magnetic resonance electrical impedance tomography (MREIT), and its related technique of magnetic resonance electrical property tomography (MREPT), is reviewed by Widlak and Scherzer [1], and also by Seo et al [3], where experimental studies are documented.
Mathematically, several of the ICP problems can be analyzed in terms of the 'p-Laplacian', which raises interesting research questions in non-linear partial differential equations. One approach for analyzing and solving the CDI problem, using characteristics of the 1-Laplacian, is discussed by Tamasan and Veras [4]. Moreover, Moradifam et al [5] present a novel iterative algorithm based on Bregman splitting for solving the CDI problem. Probably the most active research areas in ICP are related to acoustic detection, because most of these techniques rely on the photoacoustic effect wherein absorption of an ultrashort pulse of light, having propagated by multiple scattering some distance into a diffusing medium, generates a source of acoustic waves that are propagated with hyperbolic stability to a surface detector. A complementary problem is that of 'acousto-optics' which uses focussed acoustic waves as the primary field to induce perturbations in optical or electrical properties, which are thus spatially localized. Similar physical principles apply to implement ultrasound modulated electrical impedance tomography (UMEIT). These topics are included in the review of Widlak and Scherzer [1], and Kuchment and Steinhauer [2] offer a general analysis of their structure in terms of pseudo-differential operators. 'Acousto-electrical' imaging is analyzed as a particular case by Ammari et al [6]. In the paper by Tarvainen et al [7], the photo-acoustic problem is studied with respect to different models of the light propagation step. In the paper by Monard and Bal [8], a more general problem for the reconstruction of an anisotropic diffusion parameter from power density measurements is considered; here, issues of uniqueness with respect to the number of measurements are of great importance. A distinctive, and highly important, example of ICP is that of elastography, in which the primary field is low-frequency ultrasound giving rise to mechanical displacement that reveals information on the local elasticity tensor. As in all the methods discussed in this section, this contrast mechanism is measured internally, with a secondary technique, which in this case can be either MRI or ultrasound. McLaughlin et al [9] give a comprehensive analysis of this problem. Our intention for this special section was to provide both an overview and a snapshot of current work in this exciting area. The increasing interest, and the involvement of cross-disciplinary groups of scientists, will continue to lead to rapid expansion and important new results in this novel area of imaging science. References [1] Widlak T and Scherzer O 2012 Inverse Problems 28 084008 [2] Kuchment P and Steinhauer D 2012 Inverse Problems 28 084007 [3] Seo J K, Kim D-H, Lee J, Kwon O I, Sajib S Z K and Woo E J 2012 Inverse Problems 28 084002 [4] Tamasan A and Veras J 2012 Inverse Problems 28 084006 [5] Moradifam A, Nachman A and Timonov A 2012 Inverse Problems 28 084003 [6] Ammari H, Garnier J and Jing W 2012 Inverse Problems 28 084005 [7] Tarvainen T, Cox B T, Kaipio J P and Arridge S R 2012 Inverse Problems 28 084009 [8] Monard F and Bal G 2012 Inverse Problems 28 084001 [9] McLaughlin J, Oberai A and Yoon J R 2012 Inverse Problems 28 084004

  3. Surface Wave Mode Conversion due to Lateral Heterogeneity and its Impact on Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Datta, A.; Priestley, K. F.; Chapman, C. H.; Roecker, S. W.

    2016-12-01

    Surface wave tomography based on great circle ray theory has certain limitations which become increasingly significant with increasing frequency. One such limitation is the assumption of different surface wave modes propagating independently from source to receiver, valid only in the case of smoothly varying media. In the real Earth, strong lateral gradients can cause significant interconversion among modes, thus potentially wreaking havoc with ray theory based tomographic inversions that make use of multimode information. The issue of mode coupling (with either normal modes or surface wave modes) for accurate modelling and inversion of body wave data has received significant attention in the seismological literature, but its impact on inversion of surface waveforms themselves remains much less understood. We present an empirical study with synthetic data to investigate this problem with a two-fold approach. In the first part, 2D forward modelling, using a new finite difference method that allows modelling a single mode at a time, is used to build a general picture of energy transfer among modes as a function of size, strength and sharpness of lateral heterogeneities. In the second part, we use the example of a multimode waveform inversion technique based on the Cara and Leveque (1987) approach of secondary observables, to invert our synthetic data and assess how mode conversion can affect the process of imaging the Earth. We pay special attention to ensuring that any biases or artefacts in the resulting inversions can be unambiguously attributed to mode conversion effects. This study helps pave the way towards the next generation of (non-numerical) surface wave tomography techniques geared to exploit higher frequencies and mode numbers than are typically used today.

  4. Crustal Structure of Indonesia from Seismic Ambient Noise Tomography

    NASA Astrophysics Data System (ADS)

    Saygin, E.; Cummins, P. R.; Suhardjono, S.; Nishida, K.

    2012-12-01

    We image a region spanning from south Vietnam to north Australia using ambient seismic noise cross-correlations from over 300 seismic stations. The backbone of the network is formed by the broadband seismograph network of Indonesia, with over 160 stations serving as mid-tie points in the region. The Green's functions retrieved from the cross-correlation of continuously recorded seismic ambient noise at the stations are used to perform surface wave dispersion analysis. We apply a multiple filter approach to measure the phase and group velocity dispersion of the Rayleigh wave component of the Green's functions. The traveltime information derived from the dispersion is then used in a nonlinear tomographic approach to map the velocity perturbation of the region. The forward problem for the tomographic imaging can accurately track the evolution of a wavefront in highly heterogeneous media. Therefore the highly complex velocity distribution of the region is accurately reflected in the forward calculations used in the inversion. In general, accretionary prisms in the region are marked with quite low group and phase velocities, with perturbations up to 50%. Active volcanoes on the Sumatra and Java islands are also marked with low velocities. The Rajang delta in north-west Kalimantan and thick sediments in the South China Sea are imaged with low velocities.
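
    As a rough illustration of the multiple filter measurement mentioned above, the following numpy/scipy sketch estimates group velocities from the causal part of a noise cross-correlation. The function name, the Gaussian filter width alpha and the 5 km/s velocity cap are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.signal import hilbert

      def group_velocity_dispersion(egf, dt, distance_km, periods, alpha=50.0):
          """Multiple filter analysis of an empirical Green's function (causal part
          of a noise cross-correlation): narrow-band filter around each target
          period, take the envelope, and convert the envelope peak time to a
          group velocity."""
          n = len(egf)
          freqs = np.fft.rfftfreq(n, dt)
          spec = np.fft.rfft(egf)
          t = np.arange(n) * dt
          vels = []
          for T in periods:
              fc = 1.0 / T
              gauss = np.exp(-alpha * ((freqs - fc) / fc) ** 2)   # narrow-band Gaussian filter
              narrow = np.fft.irfft(spec * gauss, n)
              env = np.abs(hilbert(narrow))                       # envelope of the filtered trace
              mask = t > distance_km / 5.0        # assumed cap: no group velocity above 5 km/s
              i_peak = np.argmax(np.where(mask, env, 0.0))
              vels.append(distance_km / t[i_peak])
          return np.array(vels)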

  5. Three-dimensional optical tomographic imaging of supersonic jets through inversion of phase data obtained through the transport-of-intensity equation.

    PubMed

    Hemanth, Thayyullathil; Rajesh, Langoju; Padmaram, Renganathan; Vasu, R Mohan; Rajan, Kanjirodan; Patnaik, Lalit M

    2004-07-20

    We report experimental results of quantitative imaging in supersonic circular jets by using a monochromatic light probe. An expanding cone of light interrogates a three-dimensional volume of a supersonic steady-state flow from a circular jet. The distortion caused to the spherical wave by the presence of the jet is determined by measuring the normal transport of intensity. A cone-beam tomographic algorithm is used to invert wave-front distortion to changes in refractive index introduced by the flow. The refractive index is converted into density, whose cross sections reveal shocks and other characteristics of the flow.
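
    The transport-of-intensity step can be illustrated with a small Fourier-based Poisson solve: assuming a nearly uniform background intensity I0, the TIE reduces to laplacian(phi) = -(k / I0) * dI/dz. The sketch below works only under that assumption and is not the processing chain actually used for the cone-beam data.

      import numpy as np

      def tie_phase(dI_dz, I0, wavelength, dx):
          """Recover the phase from the transport-of-intensity equation for a
          nearly uniform intensity I0, using an FFT Poisson solver."""
          k = 2.0 * np.pi / wavelength
          ny, nx = dI_dz.shape
          fy = 2.0 * np.pi * np.fft.fftfreq(ny, dx)
          fx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
          KX, KY = np.meshgrid(fx, fy)
          k2 = KX ** 2 + KY ** 2
          k2[0, 0] = 1.0                       # avoid 0/0; fixes the arbitrary phase offset
          rhs = -(k / I0) * dI_dz
          phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-k2)))
          return phi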

  6. Ionospheric-thermospheric UV tomography: 2. Comparison with incoherent scatter radar measurements

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Nicholas, A. C.; Budzien, S. A.; Stephan, A. W.; Coker, C.; Hei, M. A.; Groves, K. M.

    2017-03-01

    The Special Sensor Ultraviolet Limb Imager (SSULI) instruments are ultraviolet limb scanning sensors that fly on the Defense Meteorological Satellite Program F16-F19 satellites. The SSULIs cover the 80-170 nm wavelength range which contains emissions at 91 and 136 nm, which are produced by radiative recombination of the ionosphere. We invert the 91.1 nm emission tomographically using a newly developed algorithm that includes optical depth effects due to pure absorption and resonant scattering. We present the details of our approach including how the optimal altitude and along-track sampling were determined and the newly developed approach we are using for regularizing the SSULI tomographic inversions. Finally, we conclude with validations of the SSULI inversions against Advanced Research Project Agency Long-range Tracking and Identification Radar (ALTAIR) incoherent scatter radar measurements and demonstrate excellent agreement between the measurements. As part of this study, we include the effects of pure absorption by O2, N2, and O in the inversions and find that best agreement between the ALTAIR and SSULI measurements is obtained when only O2 and O are included, but the agreement degrades when N2 absorption is included. This suggests that the absorption cross section of N2 needs to be reinvestigated near 91.1 nm wavelengths.

  7. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work is to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for this ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and that it can be routinely used for plasma neutron emissivity reconstruction on JET.
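
    A minimal sketch of the Phillips-Tikhonov step is given below, assuming a linear line-of-sight geometry matrix A and a discrete second-derivative (Phillips) operator as the regularizer; the prior argument stands in for the profile shaped like the normalized electron density, and the names and closed-form solve are illustrative, not the JET implementation.

      import numpy as np

      def second_difference(n):
          """Discrete second-derivative (Phillips) regularization operator."""
          L = np.zeros((n - 2, n))
          for i in range(n - 2):
              L[i, i:i + 3] = [1.0, -2.0, 1.0]
          return L

      def phillips_tikhonov(A, y, L, lam, prior=None):
          """Minimize ||A x - y||^2 + lam * ||L (x - prior)||^2 in closed form.
          A: line-of-sight geometry matrix, y: measured line-integrated signals,
          lam: regularization weight, prior: a priori emissivity shape."""
          if prior is None:
              prior = np.zeros(A.shape[1])
          lhs = A.T @ A + lam * (L.T @ L)
          rhs = A.T @ y + lam * (L.T @ L) @ prior
          return np.linalg.solve(lhs, rhs)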

  8. Solving large tomographic linear systems: size reduction and error estimation

    NASA Astrophysics Data System (ADS)

    Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust

    2014-10-01

    We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
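
    The row clustering, per-cluster SVD projection and SNR thresholding can be sketched as follows; scikit-learn's KMeans applied to the matrix rows is a stand-in for the geographic clustering of ray paths described in the paper, and a single data error sigma is assumed for all measurements.

      import numpy as np
      from sklearn.cluster import KMeans

      def reduce_cluster(A_c, d_c, sigma, snr_min=1.0):
          """Project one cluster of rows onto its leading singular vectors and keep
          only components whose projected data exceed the chosen SNR threshold."""
          U, s, Vt = np.linalg.svd(A_c, full_matrices=False)
          d_proj = U.T @ d_c                 # data in the singular basis (noise std unchanged)
          keep = np.abs(d_proj) / sigma > snr_min
          return np.diag(s[keep]) @ Vt[keep], d_proj[keep]

      def reduce_system(A, d, n_clusters, sigma, snr_min=1.0):
          """Cluster the rows, reduce each cluster, and reassemble a smaller system."""
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(A)
          blocks = [reduce_cluster(A[labels == c], d[labels == c], sigma, snr_min)
                    for c in range(n_clusters)]
          A_red = np.vstack([b[0] for b in blocks])
          d_red = np.concatenate([b[1] for b in blocks])
          return A_red, d_red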

  9. Including Short Period Constraints In the Construction of Full Waveform Tomographic Models

    NASA Astrophysics Data System (ADS)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2015-12-01

    Thanks to the introduction of the Spectral Element Method (SEM) in seismology, which allows accurate computation of the seismic wavefield in complex media, the resolution of regional and global tomographic models has improved in recent years. However, due to computational costs, only long period waveforms are considered, and only long wavelength structure can be constrained. Thus, the resulting 3D models are smooth, and only represent a small volumetric perturbation around a smooth reference model that does not include upper-mantle discontinuities (e.g. MLD, LAB). Extending the computations to shorter periods, necessary for the resolution of smaller scale features, is computationally challenging. In order to overcome these limitations and to account for layered structure in the upper mantle in our full waveform tomography, we include information provided by short period seismic observables (receiver functions and surface wave dispersion), sensitive to sharp boundaries and anisotropic structure respectively. In a first step, receiver functions and dispersion curves are used to generate a number of 1D radially anisotropic shear velocity profiles using a trans-dimensional Markov-chain Monte Carlo (MCMC) algorithm. These 1D profiles include both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth) beneath selected stations and are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built after 1) interpolation between the available 1D profiles, and 2) homogenization of the layered 1D models to obtain an equivalent smooth 3D starting model in the period range of interest for waveform inversion. The waveforms used in the inversion are collected for paths contained in the region of study and filtered at periods longer than 40 s. We use the spectral element code "RegSEM" (Cupillard et al., 2012) for forward computations and a quasi-Newton inversion approach in which kernels are computed using normal mode perturbation theory. We present here the first results of such an approach after successive iterations of a full waveform tomography of the North American continent.

  10. Research in Image Understanding as Applied to 3-D Microwave Tomographic Imaging with Near Optical Resolution.

    DTIC Science & Technology

    1987-03-01

    Record fragments (DTIC index excerpt): D.L. Jaggard, K. Schultz, Y. Kim and P. Frangos, "Inverse Scattering for Dielectric Media", Annual OSA Meeting, Washington, D.C., Oct. 1985; graduate students T.H. Chu, C.Y. Ho, Y. Kim, K.S. Lee and P. Frangos (50% each); dissertations: P. Frangos (Ph.D.), "One-Dimensional Inverse Scattering: Exact Methods and Applications", and C.L. Werner (Ph.D.), "3-D Imaging of Coherent and

  11. Tracking tracer breakthrough in the hyporheic zone using time‐lapse DC resistivity, Crabby Creek, Pennsylvania

    USGS Publications Warehouse

    Nyquist, Jonathan E.; Toran, Laura; Fang, Allison C.; Ryan, Robert J.; Rosenberry, Donald O.

    2010-01-01

    Characterization of the hyporheic zone is of critical importance for understanding stream ecology, contaminant transport, and groundwater‐surface water interaction. A salt water tracer test was used to probe the hyporheic zone of a recently re‐engineered portion of Crabby Creek, a stream located near Philadelphia, PA. The tracer solution was tracked through a 13.5 meter segment of the stream using both a network of 25 wells sampled every 5–15 minutes and time‐lapse electrical resistivity tomographs collected every 11 minutes for six hours, with additional tomographs collected every 100 minutes for an additional 16 hours. The comparison of tracer monitoring methods is of keen interest because tracer tests are one of the few techniques available for characterizing this dynamic zone, and logistically it is far easier to collect resistivity tomographs than to install and monitor a dense network of wells. Our results show that resistivity monitoring captured the essential shape of the breakthrough curve and may indicate portions of the stream where the tracer lingered in the hyporheic zone. Time‐lapse resistivity measurements, however, represent time averages over the period required to collect a tomographic data set, and spatial averages over a volume larger than captured by a well sample. Smoothing by the resistivity data inversion algorithm further blurs the resulting tomograph; consequently resistivity monitoring underestimates the degree of fine‐scale heterogeneity in the hyporheic zone.

  12. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research efforts due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide an unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. The extreme learning machine is adopted here as a demonstrative example simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to also be useful for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.
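
    A compact sketch of an extreme learning machine surrogate of the kind described above: the inputs would be measured projection data and the training targets previously computed reconstructions. The class name, tanh activation and ridge term are illustrative assumptions, not the authors' network.

      import numpy as np

      class ELMSurrogate:
          """Extreme learning machine: a random, fixed hidden layer followed by
          output weights fitted with ridge-regularized least squares."""
          def __init__(self, n_hidden=500, ridge=1e-3, seed=0):
              self.n_hidden, self.ridge = n_hidden, ridge
              self.rng = np.random.default_rng(seed)

          def _hidden(self, X):
              return np.tanh(X @ self.W + self.b)

          def fit(self, X, Y):
              n_in = X.shape[1]
              self.W = self.rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, self.n_hidden))
              self.b = self.rng.normal(size=self.n_hidden)
              H = self._hidden(X)
              # closed-form output weights: (H^T H + ridge I)^{-1} H^T Y
              self.beta = np.linalg.solve(H.T @ H + self.ridge * np.eye(self.n_hidden), H.T @ Y)
              return self

          def predict(self, X):
              return self._hidden(X) @ self.beta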

  13. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics.

    PubMed

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research efforts due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide an unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. The extreme learning machine is adopted here as a demonstrative example simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to also be useful for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.

  14. Estimating crustal heterogeneity from double-difference tomography

    USGS Publications Warehouse

    Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.

    2006-01-01

    Seismic velocity parameters in limited, but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. MONTEILLER et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using MONTEILLER et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf allows improvement of the double-difference tomographic result when using picking difference data. We complete our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
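
    To see why a hyperbolic secant pdf is more forgiving of outliers than a Gaussian, the sketch below writes both misfits and the corresponding iteratively reweighted least-squares weights; the scale handling and the IRLS formulation are assumptions made for illustration, not the algorithm of MONTEILLER et al.

      import numpy as np

      def misfit(residuals, scale, pdf="sech"):
          """Negative log-likelihood (up to constants) of time-delay residuals.
          The sech misfit grows only linearly for large residuals, so outlying
          picking differences are down-weighted."""
          r = residuals / scale
          if pdf == "gaussian":
              return 0.5 * np.sum(r ** 2)
          return np.sum(np.log(np.cosh(0.5 * np.pi * r)))

      def irls_weights(residuals, scale):
          """Weights for an iteratively reweighted least-squares step with the sech misfit."""
          r = residuals / scale
          a = 0.5 * np.pi
          with np.errstate(invalid="ignore", divide="ignore"):
              w = np.where(np.abs(r) < 1e-8, a ** 2, a * np.tanh(a * r) / r)
          return w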

  15. Hyperspectral tomography based on multi-mode absorption spectroscopy (MUMAS)

    NASA Astrophysics Data System (ADS)

    Dai, Jinghang; O'Hagan, Seamus; Liu, Hecong; Cai, Weiwei; Ewart, Paul

    2017-10-01

    This paper demonstrates a hyperspectral tomographic technique that can recover the temperature and concentration fields of gas flows based on multi-mode absorption spectroscopy (MUMAS). This method relies on the recently proposed concept of nonlinear tomography, which can take full advantage of the nonlinear dependency of MUMAS signals on temperature and enables 2D spatial resolution of MUMAS, which is naturally a line-of-sight technique. The principles of MUMAS and nonlinear tomography, as well as the mathematical formulation of the inversion problem, are introduced. Proof-of-concept numerical demonstrations are presented using representative flame phantoms and assuming typical laser parameters. The results show that faithful reconstruction of the temperature distribution is achievable when a signal-to-noise ratio of 20 is assumed. This method can potentially be extended to the simultaneous reconstruction of temperature and the concentrations of multiple flame species.

  16. 3D tomographic reconstruction using geometrical models

    NASA Astrophysics Data System (ADS)

    Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.

    1997-04-01

    We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.

  17. Joint 3-D tomographic imaging of Vp, Vs and Vp/Vs and hypocenter relocation at Sinabung volcano, Indonesia from November to December 2013

    USGS Publications Warehouse

    Nugraha, Andri Dian; Indrastuti, Novianti; Kusnandar, Ridwan; Gunawan, Hendra; McCausland, Wendy A.; Aulia, Atin Nur; Harlianti, Ulvienin

    2018-01-01

    We conducted travel time tomography using P- and S-wave arrival times of volcanic-tectonic (VT) events that occurred between November and December 2013 to determine the three-dimensional (3D) seismic velocity structure (Vp, Vs, and Vp/Vs) beneath Sinabung volcano, Indonesia in order to delineate geological subsurface structure and to enhance our understanding of the volcanism itself. This was a time period when phreatic explosions became phreatomagmatic and then magma migrated to the surface forming a summit lava dome. We used 4846 VT events with 16,138 P- and 16,138 S-wave arrival time phases recorded by 6 stations for the tomographic inversion. The relocated VTs collapse into three clusters at depths from the surface to sea level, from 2 to 4 km below sea level, and from 5 to 8.5 km below sea level. The tomographic inversion results show three prominent regions of high Vp/Vs (~ 1.8) beneath Sinabung volcano at depths consistent with the relocated earthquake clusters. We interpret these anomalies as intrusives associated with previous eruptions and possibly surrounding the magma conduit, which we cannot resolve with this study. One anomalous region might contain partial melt, at sea level and below the eventual eruption site at the summit. Our results are important for the interpretation of a conceptual model of the “plumbing system” of this hazardous volcano.

  18. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code based on the iterative refinement method for the least squares solution of tomographic reconstruction problems is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was a minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
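
    A schematic version of the iterative refinement idea follows, assuming the reconstruction has been cast as an overdetermined linear system A x ≈ b assembled from the interferometric path integrals; this is a generic sketch, not the code described in the report.

      import numpy as np

      def refined_lstsq(A, b, n_refine=5):
          """Iterative refinement of a least-squares solution: each pass solves for
          a correction against the current residual, damping accumulated round-off."""
          x = np.zeros(A.shape[1])
          for _ in range(n_refine):
              r = b - A @ x
              dx, *_ = np.linalg.lstsq(A, r, rcond=None)
              x += dx
              if np.linalg.norm(A @ dx) <= 1e-12 * np.linalg.norm(b):
                  break                 # further refinement no longer reduces the residual
          return x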

  19. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  20. Upper mantle seismic structure beneath southwest Africa from finite-frequency P- and S-wave tomography

    NASA Astrophysics Data System (ADS)

    Youssof, Mohammad; Yuan, Xiaohui; Tilmann, Frederik; Heit, Benjamin; Weber, Michael; Jokat, Wilfried; Geissler, Wolfram; Laske, Gabi; Eken, Tuna; Lushetile, Bufelo

    2015-04-01

    We present a 3D high-resolution seismic model of the southwestern Africa region from teleseismic tomographic inversion of P- and S-wave data recorded by the amphibious WALPASS network. We used 40 temporary stations in southwestern Africa with records for a period of 2 years (the OBS operated for 1 year), between November 2010 and November 2012. The array covers a surface area of approximately 600 by 1200 km, is located at the intersection of the Walvis Ridge and the continental margin of northern Namibia, and extends into the Congo craton. Major questions that need to be understood are related to the impact of asthenosphere-lithosphere interaction (plume-related features) on the continental areas and the evolution of the continent-ocean transition that followed the break-up of Gondwana. This process is expected to leave its imprint as a distinct seismic signature in the upper mantle. Utilizing 3D sensitivity kernels, we invert traveltime residuals to image velocity perturbations in the upper mantle down to 1000 km depth. To test the robustness of our tomographic images we employed various resolution tests which allow us to evaluate the extent of smearing effects and help define the optimum inversion parameters (i.e., damping and smoothness) used during the regularization of the inversion process. The resolution assessment procedure also includes a detailed investigation of the effect of the crustal corrections on the final images, which strongly influence the resolution of the mantle structures. We present detailed tomographic images of the oceanic and continental lithosphere beneath the study area. The fast lithospheric keel of the Congo Craton reaches a depth of ~250 km. Relatively low velocity perturbations have been imaged within the orogenic Damara Belt down to a depth of ~150 km, probably related to surficial suture zones and the presence of fertile material. A shallower depth extent of the lithospheric plate of ~100 km was observed beneath the ocean, consistent with plate-cooling models. In addition to the tomographic images, seismic anisotropy measurements within the upper mantle inferred from teleseismic shear waves indicate a predominant NE-SW orientation for most of the land stations. Current results indicate no evidence for a consistent signature of a fossil plume.

  1. Three-Dimensional P-wave Velocity Structure Beneath Long Valley Caldera, California, Using Local-Regional Double-Difference Tomography

    NASA Astrophysics Data System (ADS)

    Menendez, H. M.; Thurber, C. H.

    2011-12-01

    Eastern California's Long Valley Caldera (LVC) and the Mono-Inyo Crater volcanic systems have been active for the past ~3.6 million years. Long Valley is known to produce very large silicic eruptions, the last of which resulted in the formation of a 17 km by 32 km wide, east-west trending caldera. Relatively recent unrest began between 1978 and 1980 with five ML ≥ 5.7 non-double-couple (NDC) earthquakes and associated aftershock swarms. Similar shallow seismic swarms have continued south of the resurgent dome and beneath Mammoth Mountain, surrounding sites of increased CO2 gas emissions. Nearly two decades of increased volcanic activity led to the 1997 installation of a temporary three-component array of 69 seismometers. This network, deployed by Durham University, the USGS, and Duke University, recorded over 4,000 high-frequency events from May to September. A local tomographic inversion of 283 events surrounding Mammoth Mountain yielded a velocity structure with low Vp and Vp/Vs anomalies at 2-3 km bsl beneath the resurgent dome and Casa Diablo hot springs. These anomalies were interpreted to be CO2 reservoirs (Foulger et al., 2003). Several teleseismic and regional tomography studies have also imaged low Vp anomalies beneath the caldera at ~5-15 km depth, interpreted to be the underlying magma reservoir (Dawson et al., 1990; Weiland et al., 1995; Thurber et al., 2009). This study aims to improve the resolution of the LVC regional velocity model by performing tomographic inversions using the local events from 1997 in conjunction with regional events recorded by the Northern California Seismic Network (NCSN) between 1980 and 2010 and available refraction data. Initial tomographic inversions reveal a low velocity zone at ~2 to 6 km depth beneath the caldera. This structure may simply represent the caldera fill. Further iterations and the incorporation of teleseismic data may better resolve the overall shape and size of the underlying magma reservoir.

  2. Joint body and surface wave tomography applied to the Toba caldera complex (Indonesia)

    NASA Astrophysics Data System (ADS)

    Jaxybulatov, Kairly; Koulakov, Ivan; Shapiro, Nikolai

    2016-04-01

    We developed a new algorithm for joint body and surface wave tomography. The algorithm is a modification of the existing LOTOS code (Koulakov, 2009) developed for local earthquake tomography. The input data for the new method are travel times of P and S waves and dispersion curves of Rayleigh and Love waves. The main idea is that the two data types have complementary sensitivities. The body-wave data have good resolution at depth, where we have enough crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution. The surface wave dispersion curves can be retrieved from correlations of the ambient seismic noise, and in this case the sampled path distribution does not depend on the earthquake sources. The contributions of the two data types to the inversion are controlled by the weighting of the respective equations. One of the clearest cases where such an approach may be useful is volcanic systems in subduction zones, with their complex magmatic feeding systems that have deep roots in the mantle and intermediate magma chambers in the crust. In these areas, the joint inversion of different types of data helps us to build a comprehensive understanding of the entire system. We apply our algorithm to data collected in the region surrounding the Toba caldera complex (north Sumatra, Indonesia) during two temporary seismic experiments (IRIS, PASSCAL, 1995; GFZ, LAKE TOBA, 2008). We invert 6644 P and 5240 S wave arrivals and ~500 group velocity dispersion curves of Rayleigh and Love waves. We present a series of synthetic tests and real data inversions which show that the joint inversion approach gives more reliable results than the separate inversion of the two data types. Koulakov, I., LOTOS code for local earthquake tomographic inversion. Benchmarks for testing tomographic algorithms, Bull. seism. Soc. Am., 99(1), 194-214, 2009, doi:10.1785/0120080013
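
    The weighting of the two equation systems can be sketched as a damped least-squares solve of the stacked, weighted kernels; G_body and G_surf denote linearized body-wave and surface-wave sensitivity matrices, and the names, weights and damping are illustrative assumptions rather than the modified LOTOS implementation.

      import numpy as np

      def joint_inversion(G_body, d_body, G_surf, d_surf, w_body=1.0, w_surf=1.0, damp=0.1):
          """Solve for one model update from weighted body-wave and surface-wave systems."""
          G = np.vstack([w_body * G_body, w_surf * G_surf])
          d = np.concatenate([w_body * d_body, w_surf * d_surf])
          n = G.shape[1]
          # damped normal equations: (G^T G + damp^2 I) m = G^T d
          return np.linalg.solve(G.T @ G + damp ** 2 * np.eye(n), G.T @ d)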

  3. Rayleigh-wave tomography of the Ontong-Java Plateau

    NASA Astrophysics Data System (ADS)

    Richardson, W. Philip; Okal, Emile A.; Van der Lee, Suzan

    2000-02-01

    The deep structure of the Ontong-Java Plateau (OJP) in the westcentral Pacific is investigated through a 2-year deployment of four PASSCAL seismic stations used in a passive tomographic experiment. Single-path inversions of 230 Rayleigh waveforms from 140 earthquakes mainly located in the Solomon Trench confirm the presence of an extremely thick crust, with an average depth to the Mohorovičić discontinuity of 33 km. The thickest crusts (38 km) are found in the southcentral part of the plateau, around 2°S, 157°E. Lesser values remaining much thicker than average oceanic crust (15-26 km) are found on either side of the main structure, suggesting that the OJP spills over into the Lyra Basin to the west. Such thick crustal structures are consistent with formation of the plateau at the Pacific-Phoenix ridge at 121 Ma, while its easternmost part may have formed later (90 Ma) on more mature lithosphere. Single-path inversions also reveal a strongly developed low-velocity zone at asthenospheric depths in the mantle. A three-dimensional tomographic inversion resolves a low-velocity root of the OJP extending as deep as 300 km, with shear velocity deficiencies of ˜5%, suggesting the presence of a keel, dragged along with the plateau as the latter moves as part of the drift of the Pacific plate over the mantle.

  4. Tomographic imaging of subducted lithosphere below northwest Pacific island arcs

    USGS Publications Warehouse

    Van Der Hilst, R.; Engdahl, R.; Spakman, W.; Nolet, G.

    1991-01-01

    The seismic tomography problem does not have a unique solution, and published tomographic images have been equivocal with regard to the deep structure of subducting slabs. An improved tomographic method, using a more realistic background Earth model and surface-reflected as well as direct seismic phases, shows that slabs beneath the Japan and Izu Bonin island arcs are deflected at the boundary between the upper and lower mantle, whereas those beneath the northern Kuril and Mariana arcs sink into the lower mantle.

  5. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain create a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
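
    The two data fidelity terms compared in the paper can be written compactly; the sketch below returns each cost and its gradient with respect to the modelled projections, which are the ingredients a Gauss-Newton scheme needs. The variable names and the per-bin variance are assumptions.

      import numpy as np

      def wls_cost_grad(s_meas, s_model, var):
          """Weighted least-squares fidelity (Gaussian noise) and its gradient."""
          diff = s_model - s_meas
          return 0.5 * np.sum(diff ** 2 / var), diff / var

      def kl_cost_grad(s_meas, s_model):
          """Kullback-Leibler fidelity (Poisson noise) and its gradient;
          s_model must be strictly positive."""
          ratio = s_meas / s_model
          log_term = np.zeros_like(s_model, dtype=float)
          pos = s_meas > 0
          log_term[pos] = s_meas[pos] * np.log(ratio[pos])
          cost = np.sum(s_model - s_meas + log_term)
          grad = 1.0 - ratio
          return cost, grad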

  6. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data.

    PubMed

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; van de Kamp, Thomas; dos Santos Rolo, Tomy; Xiao, Xianghui; Moosmann, Julian; Kashef, Jubin; Stotzka, Rainer

    2015-03-09

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.
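
    The discrete L-curve selection can be illustrated independently of the reconstruction itself: given the residual norms and TV norms obtained over a sweep of the Lagrangian parameter, take the point of maximum curvature of the log-log curve. This is a generic sketch rather than the authors' code.

      import numpy as np

      def lcurve_corner(residual_norms, tv_norms):
          """Index of the L-curve corner, i.e. the sweep point of maximum curvature
          of (log residual norm, log TV norm)."""
          x = np.log(np.asarray(residual_norms, dtype=float))
          y = np.log(np.asarray(tv_norms, dtype=float))
          dx, dy = np.gradient(x), np.gradient(y)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          curvature = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
          return int(np.argmax(curvature))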

  7. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data

    DOE PAGES

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; ...

    2015-01-01

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.

  8. ECAT: A New Computerized Tomographic Imaging System for Positron-Emitting Radiopharmaceuticals

    DOE R&D Accomplishments Database

    Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Kuhl, D. E.

    1977-01-01

    The ECAT was designed and developed as a complete computerized positron radionuclide imaging system capable of providing high-contrast, high-resolution, quantitative images in two-dimensional and tomographic formats. Flexibility in its various image-mode options allows it to be used for a wide variety of imaging problems.

  9. A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy

    PubMed Central

    Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.

    2017-01-01

    One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
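
    As one concrete member of the iterative family surveyed here, a SIRT-style update can be written in a few lines; the sketch assumes a non-negative linear projection matrix and is meant only to illustrate the structure shared by this class of algorithms.

      import numpy as np

      def sirt(A, b, n_iter=50, x0=None):
          """Simultaneous iterative reconstruction technique for A x ~= b,
          with row/column normalization and optional non-negativity."""
          m, n = A.shape
          x = np.zeros(n) if x0 is None else x0.astype(float).copy()
          row_sums = np.maximum(A.sum(axis=1), 1e-12)
          col_sums = np.maximum(A.sum(axis=0), 1e-12)
          for _ in range(n_iter):
              residual = (b - A @ x) / row_sums
              x += (A.T @ residual) / col_sums
              x = np.maximum(x, 0.0)            # keep densities non-negative
          return x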

  10. Characterization of an alluvial aquifer with thermal tracer tomography

    NASA Astrophysics Data System (ADS)

    Somogyvári, Márk; Bayer, Peter

    2017-04-01

    In the summer of 2015, a series of thermal tracer tests was performed at the Widen field site in northeast Switzerland. At this site, numerous hydraulic, tracer, geophysical and hydrogeophysical field tests have been conducted in the past to investigate a shallow alluvial aquifer. The goals of the campaign in 2015 were to design a cost-effective thermal tracer tomography setup and to validate the concept of travel time-based thermal tracer tomography under field conditions. Thermal tracer tomography uses repeated thermal tracer injections at different injection depths and distributed temperature measurements to map the hydraulic conductivity distribution of a heterogeneous aquifer. The tracer application was designed for minimal experimental time and cost. Water was heated in inflatable swimming pools using the direct sunlight of the warm summer days, and it was injected as low-temperature pulses into a well. Because of the small amount of injected heat, no long recovery times were required between the repeated heat tracer injections, and every test started from natural thermal conditions. At Widen, four thermal tracer tests were performed during a period of three days. Temperatures were measured in one downgradient well using a distributed temperature measurement system installed at seven depth points. In total, 12 temperature breakthrough curves were collected. Travel time-based tomographic inversion assumes that thermal transport is dominated by advection and that the travel time of the thermal tracer can be related to the hydraulic conductivities of the aquifer. This assumption is valid in many shallow porous aquifers where the groundwater flow is fast. In our application, the travel time problem was treated by a tomographic solver, analogous to seismic tomography, to derive the hydraulic conductivity distribution. At the test site, a two-dimensional cross-well hydraulic conductivity profile was reconstructed with the travel time-based inversion. The reconstructed profile corresponds well with the findings of the earlier hydraulic and geophysical experiments at the site.

  11. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the "real world", and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to obtain these models. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. "Real" travel times are perturbed with noise and hypocenters are perturbed to replicate a starting location away from the "true" location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of the computers needed for the large arrays in the inversion and a realistic number of stations and events needed to provide the data.

  12. SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring

    NASA Astrophysics Data System (ADS)

    Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.

    2015-12-01

    We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative new features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
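
    A bare-bones Richardson-Lucy (MLEM-type) iteration of the kind referred to above is sketched here; the optional smooth callback stands in for the pseudo-diffusion regularization applied between iterations, and the names are illustrative rather than the SSULI/SSUSI code.

      import numpy as np

      def richardson_lucy(A, counts, n_iter=100, x0=None, smooth=None):
          """Richardson-Lucy iteration for Poisson-distributed counts with a linear
          forward model A (rows: lines of sight, columns: emission voxels)."""
          m, n = A.shape
          x = np.ones(n) if x0 is None else x0.astype(float).copy()
          sens = np.maximum(A.sum(axis=0), 1e-12)          # column sensitivities
          for _ in range(n_iter):
              pred = np.maximum(A @ x, 1e-12)
              x *= (A.T @ (counts / pred)) / sens
              if smooth is not None:
                  x = smooth(x)                            # e.g. a mild pseudo-diffusion step
          return x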

  13. Three-Dimensional Characterization of Buried Metallic Targets via a Tomographic Algorithm Applied to GPR Synthetic Data

    NASA Astrophysics Data System (ADS)

    Comite, Davide; Galli, Alessandro; Catapano, Ilaria; Soldovieri, Francesco; Pettinelli, Elena

    2013-04-01

    This work is focused on the three-dimensional (3-D) imaging of buried metallic targets achievable by processing GPR (ground penetrating radar) simulation data via a tomographic inversion algorithm. The direct scattering problem has been analysed by means of a recently-developed numerical setup based on an electromagnetic time-domain CAD tool (CST Microwave Studio), which enables us to efficiently explore different GPR scenarios of interest [1]. The investigated 3D domain here comprises two media, representing, e.g., an air/soil environment in which variously-shaped metallic (PEC) scatterers can be buried. The GPR system is simulated with Tx/Rx antennas placed in a bistatic configuration at the soil interface. In the implementation, the characteristics of the antennas may suitably be chosen in terms of topology, offset, radiative features, frequency ranges, etc. Arbitrary time-domain waveforms can be used as the input GPR signal (e.g., a Gaussian-like pulse having its frequency spectrum in the microwave range). The gathered signal at the output port includes the backscattered wave from the objects to be reconstructed, and the relevant data may be displayed in canonical radargram forms [1]. The GPR system sweeps along one main rectilinear direction, and the scanning process is here repeated along different close parallel lines to acquire data for a full 3-D analysis. Starting from the processing of the synthetic GPR data, a microwave tomographic approach is used to tackle the imaging, which is based on the Kirchhoff approximation to linearize the inverse scattering problem [2]. The target reconstruction is given in terms of the amplitude of the 'object function' (normalized with respect to its maximum inside the 3-D investigation domain). The data of the scattered field are collected considering a multi-frequency step process inside the fixed range of the signal spectrum, under a multi-bistatic configuration where the Tx and Rx antennas are separated by an offset distance and move at the interface over rectilinear observation domains. Analyses have been performed for some canonical scatterer shapes (e.g., sphere and cylinder, cube and parallelepiped, cone and wedge) in order to specifically highlight the influence of all three dimensions (length, depth, and width) in the reconstruction of the targets. The roles of both size and location of the objects are also addressed in terms of the probing signal wavelengths and of the antenna offset. The results show to what extent it is possible to achieve a correct spatial localization of the targets, in conjunction with a generally satisfactory prediction of their 3-D size and shape. It should anyway be noted that the tomographic reconstructions here manage challenging cases of non-penetrable objects with data gathered under a reflection configuration, hence most of the information achievable is expected to relate to the upper illuminated parts of the reflectors that give rise to the main scattering effects. The limits in the identification of fine geometrical details are discussed further in connection with the critical aspects of GPR operation, which include the adopted detection configuration and the frequency spectrum of the employed signals. [1] G. Valerio, A. Galli, P. M. Barone, S. E. Lauro, E. Mattei, and E. Pettinelli, "GPR detectability of rocks in a Martian-like shallow subsoil: a numerical approach," Planet. Space Sci., Vol. 62, pp. 31-40, 2012. [2] R. Solimene, A. Buonanno, F. Soldovieri, and R.
Pierri, "Physical optics imaging of 3D PEC objects: vector and multipolarized approaches," IEEE Trans. Geosci. Remote Sens., Vol. 48, pp. 1799-1808, Apr. 2010.

  14. New Insights into Tectonics of the Saint Elias, Alaska, Region Based on Local Seismicity and Tomography

    NASA Astrophysics Data System (ADS)

    Ruppert, N. A.; Zabelina, I.; Freymueller, J. T.

    2013-12-01

    The Saint Elias Mountains in southern Alaska are a manifestation of ongoing tectonic processes that include the collision of the Yakutat block with, and the subduction of the Yakutat block and Pacific plate under, the North American plate. Interaction of these tectonic blocks and plates is complex and not well understood. In 2005 and 2006, a network of 22 broadband seismic sites was installed in the region as part of the SainT Elias TEctonics and Erosion Project (STEEP), a five-year multi-disciplinary study that addressed the evolution of the highest coastal mountain range on Earth. The high quality seismic data provide unique insights into earthquake occurrence and the velocity structure of the region. Local earthquake data recorded between 2005 and 2010 became the foundation for a detailed study of seismotectonic features and crustal velocities. The highest concentration of seismicity follows the Chugach-St. Elias fault, a major on-land tectonic structure in the region. This fault is also delineated in tomographic images as a distinct contrast between lower velocities to the south and higher velocities to the north. The low-velocity region corresponds to the rapidly uplifted and exhumed sediments on the south side of the range. Earthquake source parameters indicate a high degree of compression and underthrusting processes along the coastal area, consistent with multiple thrust structures mapped by geological studies in the region. Tomographic inversion reveals velocity anomalies that correlate with sedimentary basins, volcanic features and the subducting Yakutat block. We will present precise earthquake locations and source parameters recorded with the STEEP and regional seismic networks along with the results of P- and S-wave tomographic inversion.

  15. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem, thereby accelerating convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
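
    The abstract describes the algorithm only in words; the toy sketch below (not the authors' implementation) shows the nesting idea under simplifying assumptions: a random tomographic system matrix P, a Gaussian image-space blurring operator H standing in for the resolution model, and noiseless data. Each outer loop performs one tomographic EM update on the blurred image and then several cheap image-space EM deconvolution sub-iterations.

```python
# Toy sketch of an image-based resolution model nested inside MLEM: one
# tomographic EM update of the blurred image per outer iteration, followed by
# several image-space EM (Richardson-Lucy-type) deconvolution sub-iterations.
# P, H, the PSF width and the noiseless data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 64, 96
P = rng.random((n_bins, n_pix))           # toy tomographic system matrix
sigma = 2.0                               # assumed PSF width (pixels)
idx = np.arange(n_pix)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
H /= H.sum(axis=1, keepdims=True)         # row-normalized blurring operator

x_true = np.zeros(n_pix); x_true[20:25] = 5.0; x_true[40] = 10.0
y = P @ (H @ x_true)                      # measured projections (noiseless here)

x = np.ones(n_pix)                        # deconvolved image estimate
sens_tomo = P.T @ np.ones(n_bins)         # tomographic sensitivity image
n_outer, n_nested = 20, 10
for _ in range(n_outer):
    # 1) tomographic EM update acting on the resolution-blurred image
    z = H @ x
    z *= (P.T @ (y / (P @ z + 1e-12))) / (sens_tomo + 1e-12)
    # 2) nested image-space EM deconvolution iterations (the cheap step)
    for _ in range(n_nested):
        x *= (H.T @ (z / (H @ x + 1e-12))) / (H.T @ np.ones(n_pix) + 1e-12)
```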

  16. 1r2dinv: A finite-difference model for inverse analysis of two dimensional linear or radial groundwater flow

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, J.J.

    2001-01-01

    We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
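
    As a minimal sketch of the estimation step (not 1r2dinv itself, whose forward model is a finite-difference flow solver), the following uses SciPy's Levenberg-Marquardt driver to fit three log-transformed zonal parameters to observed heads; simulate_heads is a hypothetical stand-in for the real forward model.

```python
# Minimal sketch (not 1r2dinv): Levenberg-Marquardt estimation of log-transformed
# zonal parameters from observed heads. `simulate_heads` is a placeholder for the
# program's finite-difference forward model of linear or radial groundwater flow.
import numpy as np
from scipy.optimize import least_squares

def simulate_heads(log_params, geometry):
    """Placeholder forward model: returns simulated heads at the observation
    points for the given zonal parameters (an arbitrary smooth stand-in)."""
    kx, aniso, ss = np.exp(log_params)
    r = geometry["obs_radii"]
    t = geometry["obs_times"]
    return -np.log(r) / kx + ss * t / (kx * aniso)    # stand-in response

rng = np.random.default_rng(0)
geometry = {"obs_radii": np.linspace(1.0, 20.0, 30),
            "obs_times": np.linspace(0.1, 3.0, 30)}
true_log = np.log([5.0, 0.1, 1e-4])
heads_obs = simulate_heads(true_log, geometry) + 1e-3 * rng.standard_normal(30)

def residuals(log_params):
    return simulate_heads(log_params, geometry) - heads_obs

fit = least_squares(residuals, x0=np.log([1.0, 1.0, 1e-3]), method="lm")
print("estimated K, anisotropy, Ss:", np.exp(fit.x))
```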

  17. GPS tomographic experiment on water vapour dynamics in the troposphere over Lisbon

    NASA Astrophysics Data System (ADS)

    Benevides, Pedro; Catalao, Joao; Miranda, Pedro

    2015-04-01

    Quantification of water vapour variability in the atmosphere remains a difficult task, affecting weather prediction. The coarse spatial and temporal resolution of water vapour measurements degrades numerical weather prediction models, causing artifacts in the prediction of severe weather phenomena. GNSS atmospheric processing has been developed in recent years to provide integrated water vapour estimates comparable with meteorological sensor measurements, with studies reporting biases of 1 to 2 kg/m2, but it lacks a vertical resolution of the atmospheric processes. GNSS tomography of the troposphere is one of the most promising techniques for sensing the three-dimensional water vapour state of the atmosphere. The determination of the integrated water vapour by means of the widely accepted GNSS meteorology techniques allows the reconstruction of several slant path delays along the satellite lines of sight, providing an opportunity to sense the troposphere in three dimensions plus time. The tomographic system can estimate an image solution of the water vapour, but constraints have to be imposed on the inversion of the system of equations because of the non-optimal GNSS observation geometry. Applications of this technique to atmospheric processes such as large convective precipitation or mesoscale water vapour circulation have been able to describe their local vertical dynamics. A 3D tomographic experiment was carried out over an area of 60x60 km2 around Lisbon (Portugal). The available GNSS network of 9 receivers was densified with 8 temporarily installed GPS receivers (totalling 17 stations). The experiment ran for several weeks in July 2013, during which a radiosonde campaign was also held in order to validate the tomographic inversion solution. 2D integrated water vapour maps obtained directly from the GNSS processing were also evaluated, and local coastal breeze circulation patterns were identified. Preliminary results show good agreement between radiosonde vertical profiles of water vapour and the corresponding columnar profiles of the tomographic solution. This study aims at a preliminary characterization of the 3D water vapour field over this region, investigating its potential for monitoring small-scale air circulation in coastal areas, such as the sea breeze phenomenon. This study was funded by the Portuguese Science Foundation FCT, under project SMOG PTDC/CTE-ATM/119922/2010 and PhD grant SFRH/BD/80288/2011.

  18. Tomographic diagnostics of nonthermal plasmas

    NASA Astrophysics Data System (ADS)

    Denisova, Natalia

    2009-10-01

    In previous work [1], we discussed the ``technology'' of the tomographic method and the relations between tomographic diagnostics in thermal (equilibrium) and nonthermal (nonequilibrium) plasma sources. The conclusion was that tomographic reconstruction in thermal plasma sources is at present a standard procedure, which can provide much useful information on the plasma structure and its evolution in time, while tomographic reconstruction of nonthermal plasma has great potential to contribute to understanding the fundamental problem of the behavior of matter under strongly nonequilibrium conditions. Using medical terminology, one could say that tomographic diagnostics of equilibrium plasma sources studies their ``anatomic'' structure, while reconstruction of nonequilibrium plasma is similar to a ``physiological'' examination: it is directed at studying physical mechanisms and processes. The present work is focused on nonthermal plasma research. The tomographic diagnostics are directed at studying the spatial structures formed in gas discharge plasmas under the influence of electric and gravitational fields. The ways of plasma ``self-organization'' under changing and extreme conditions are analyzed. The analysis is based on examples from our practical tomographic diagnostics of nonthermal plasma sources, such as low-pressure capacitive and inductive discharges. [1] N. Denisova, "Plasma diagnostics using computed tomography method," IEEE Trans. Plasma Sci., vol. 37, no. 4, p. 502, 2009.

  19. Eikonal-Based Inversion of GPR Data from the Vaucluse Karst Aquifer

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; van Vorst, D.; Guglielmi, Y.; Cappa, F.; Gaffet, S.

    2009-12-01

    In this paper, we present an easy-to-implement eikonal-based travel-time inversion algorithm and apply it to borehole GPR data obtained from a karst aquifer located in the Vaucluse in Provence. The boreholes are situated within a fault zone deep inside the aquifer, in the Laboratoire Souterrain à Bas Bruit (LSBB). The measurements were made using 250 MHz MALA RAMAC borehole GPR antennas. The inversion formulation is unique in its application of a fast-sweeping eikonal solver (Zhao [1]) to the minimization of an objective functional composed of a travel-time misfit and a model-based regularization [2]. The solver is robust in the presence of large velocity contrasts, efficient, easy to implement, and does not require the use of a sorting algorithm. The computation of sensitivities, which are required for the inversion process, is achieved by tracing rays backward from receiver to source following the gradient of the travel-time field [2]. A user wishing to implement this algorithm can opt to avoid the ray-tracing step and simply perturb the model to obtain the required sensitivities. Despite the obvious computational inefficiency of such an approach, it is acceptable for 2D problems. The relationship between travel time and the velocity profile is non-linear, requiring an iterative approach. At each iteration, a set of matrix equations is solved to determine the model update. As the inversion continues, the weighting of the regularization parameter is adjusted until an appropriate data misfit is obtained. The inversion results, shown in the attached image of recovered permittivity profiles, are consistent with previously obtained geological structure. Future work will look at improving inversion resolution and incorporating other measurement methodologies, with the goal of providing useful data for groundwater analysis. References: [1] H. Zhao, "A fast sweeping method for Eikonal equations," Mathematics of Computation, vol. 74, no. 250, pp. 603-627, 2004. [2] D. Aldridge and D. Oldenburg, "Two-dimensional tomographic inversion with finite-difference traveltimes," Journal of Seismic Exploration, vol. 2, pp. 257-274, 1993.
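
    For readers wanting to experiment with the forward step, the following is a minimal 2-D fast-sweeping eikonal solver in the spirit of Zhao's method cited as [1]; the grid spacing, slowness model and source position are illustrative assumptions. Sensitivities could then be obtained either by backward ray tracing along the travel-time gradient or, as noted above, simply by perturbing this model.

```python
# Minimal 2-D fast-sweeping eikonal solver (Gauss-Seidel sweeps in four orderings
# with a Godunov upwind update). Grid, slowness model and source are illustrative.
import numpy as np

def fast_sweep(slowness, h, src, n_sweeps=8):
    ny, nx = slowness.shape
    T = np.full((ny, nx), 1e10)
    T[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for iy_order, ix_order in orders:
            for i in iy_order:
                for j in ix_order:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else 1e10,
                            T[i + 1, j] if i < ny - 1 else 1e10)
                    b = min(T[i, j - 1] if j > 0 else 1e10,
                            T[i, j + 1] if j < nx - 1 else 1e10)
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:
                        t_new = min(a, b) + f
                    else:
                        t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

# Example: travel times from a borehole source through a two-velocity medium
vel = np.full((60, 40), 0.1)        # m/ns (illustrative radar velocity)
vel[30:, :] = 0.07                  # slower zone, e.g. higher water content
T = fast_sweep(1.0 / vel, h=0.1, src=(5, 0))
```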

  20. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
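
    The following is a schematic sketch, not the SEM-based workflow of the paper, of how the pieces described above fit together numerically: event kernels are summed into a misfit kernel, projected onto basis functions to form the gradient, and the model is updated with a Polak-Ribiere nonlinear conjugate-gradient step. The kernels, basis functions and step length are random placeholders.

```python
# Schematic sketch of the gradient assembly and the nonlinear conjugate-gradient
# update described above. `event_kernels`, `basis` and `step` are placeholders.
import numpy as np

def misfit_gradient(event_kernels, basis):
    """Sum event kernels into the misfit kernel and project onto basis functions."""
    misfit_kernel = np.sum(event_kernels, axis=0)          # (ny, nx)
    return np.array([np.sum(misfit_kernel * b) for b in basis])

def nlcg_update(m, g, g_prev, d_prev, step):
    """One Polak-Ribiere conjugate-gradient step on the model coefficients."""
    if g_prev is None:
        d = -g
    else:
        beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev))
        d = -g + beta * d_prev
    return m + step * d, d

# Toy usage: 3 events, a 20x20 kernel grid, 16 basis functions
rng = np.random.default_rng(1)
event_kernels = rng.standard_normal((3, 20, 20))
basis = rng.standard_normal((16, 20, 20))
m, g_prev, d_prev = np.zeros(16), None, None
for _ in range(5):
    g = misfit_gradient(event_kernels, basis)   # in practice recomputed per model
    m, d_prev = nlcg_update(m, g, g_prev, d_prev, step=0.05)
    g_prev = g
```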

  1. The auroral 6300 A emission - Observations and modeling

    NASA Technical Reports Server (NTRS)

    Solomon, Stanley C.; Hays, Paul B.; Abreu, Vincent J.

    1988-01-01

    A tomographic inversion is used to analyze measurements of the auroral atomic oxygen emission line at 6300 A made by the atmosphere explorer visible airglow experiment. A comparison is made between emission altitude profiles and the results from an electron transport and chemical reaction model. Measurements of the energetic electron flux, neutral composition, ion composition, and electron density are incorporated in the model.

  2. Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.

    PubMed

    Frey, Othmar; Morsdorf, Felix; Meier, Erich

    2008-09-24

    In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse response of simulated point targets and an in-scene corner reflector. In particular, several tomographic slices of a volume representing a forested area are given. The respective P-band tomographic data set, consisting of eleven flight tracks, was acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).
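
    A highly simplified sketch of time-domain back-projection focusing in the elevation plane is given below: each voxel of the tomographic slice accumulates the range-compressed signal of every flight track, interpolated at the exact voxel-to-track distance and phase-corrected. The track geometry, carrier frequency, idealized range response and single synthetic scatterer are assumptions for illustration only, not the E-SAR processing chain.

```python
# Highly simplified sketch of time-domain back-projection for multi-baseline SAR
# tomography. Track positions, wavelength and the synthetic echo are assumptions.
import numpy as np

c = 3e8
fc = 435e6                          # P-band-like carrier (assumption)
lam = c / fc
tracks_z = np.array([3000.0 + 40.0 * k for k in range(11)])  # irregular in reality
track_y = -2000.0                   # ground-range standoff of the tracks [m]

# Synthetic range-compressed data: one scatterer at (y=0, z=15 m)
rng_axis = np.linspace(3500.0, 4000.0, 2048)
dr = rng_axis[1] - rng_axis[0]
def echoes(scat_y, scat_z):
    d = np.sqrt((scat_y - track_y) ** 2 + (scat_z - tracks_z[:, None]) ** 2)
    env = np.sinc((rng_axis[None, :] - d) / dr)          # idealized range response
    return env * np.exp(-1j * 4 * np.pi * d / lam)
data = echoes(0.0, 15.0)

# Back-projection onto a (ground-range, height) tomographic slice
y_grid = np.linspace(-20.0, 20.0, 81)
z_grid = np.linspace(-10.0, 40.0, 101)
image = np.zeros((z_grid.size, y_grid.size), dtype=complex)
for it, zt in enumerate(tracks_z):
    d_vox = np.sqrt((y_grid[None, :] - track_y) ** 2 + (z_grid[:, None] - zt) ** 2)
    sig_re = np.interp(d_vox, rng_axis, data[it].real)
    sig_im = np.interp(d_vox, rng_axis, data[it].imag)
    image += (sig_re + 1j * sig_im) * np.exp(1j * 4 * np.pi * d_vox / lam)
tomogram = np.abs(image)            # peak should appear near y=0, z=15 m
```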

  3. Time-dependent seismic tomography

    USGS Publications Warehouse

    Julian, B.R.; Foulger, G.R.

    2010-01-01

    Of methods for measuring temporal changes in seismic-wave speeds in the Earth, seismic tomography is among those that offer the highest spatial resolution. 3-D tomographic methods are commonly applied in this context by inverting seismic wave arrival time data sets from different epochs independently and assuming that differences in the derived structures represent real temporal variations. This assumption is dangerous because the results of independent inversions would differ even if the structure in the Earth did not change, due to observational errors and differences in the seismic ray distributions. The latter effect may be especially severe when data sets include earthquake swarms or aftershock sequences, and may produce the appearance of correlation between structural changes and seismicity when the wave speeds are actually temporally invariant. A better approach, which makes it possible to assess what changes are truly required by the data, is to invert multiple data sets simultaneously, minimizing the difference between models for different epochs as well as the rms arrival-time residuals. This problem leads, in the case of two epochs, to a system of normal equations whose order is twice as great as for a single epoch. The direct solution of this system would require twice as much memory and four times as much computational effort as would independent inversions. We present an algorithm, tomo4d, that takes advantage of the structure and sparseness of the system to obtain the solution with essentially no more effort than independent inversions require. No claim to original US government works. Journal compilation © 2010 RAS.
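
    The structure of the joint two-epoch system can be sketched as follows (a schematic toy, not tomo4d): the two epoch operators are stacked block-diagonally and a third block penalizes the difference between the epoch models, so that only temporal changes actually demanded by the data survive the regularized least-squares solution. The kernels and data below are random placeholders.

```python
# Schematic sketch (not tomo4d) of a joint two-epoch inversion: stack the two
# tomographic operators and add a block penalizing the model difference.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

n_model, n1, n2 = 500, 800, 760          # model size, data per epoch (toy numbers)
rng = np.random.default_rng(2)
G1 = sp.random(n1, n_model, density=0.02, random_state=3)
G2 = sp.random(n2, n_model, density=0.02, random_state=4)
d1 = rng.standard_normal(n1)
d2 = rng.standard_normal(n2)

mu = 5.0                                  # weight on the inter-epoch difference
I = sp.identity(n_model)
A = sp.vstack([sp.hstack([G1, sp.csr_matrix((n1, n_model))]),
               sp.hstack([sp.csr_matrix((n2, n_model)), G2]),
               sp.hstack([mu * I, -mu * I])]).tocsr()
b = np.concatenate([d1, d2, np.zeros(n_model)])

m = lsqr(A, b, damp=0.1)[0]               # solution = [m_epoch1, m_epoch2]
m1, m2 = m[:n_model], m[n_model:]
dm = m2 - m1                              # temporal change actually required by the data
```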

  4. Whole-mantle P-wave velocity structure and azimuthal anisotropy

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Zhao, D.

    2009-12-01

    There are several hotspot volcanoes on Earth, such as Hawaii and Iceland. The mantle plume hypothesis was proposed forty years ago to explain hotspot volcanoes (e.g., Wilson, 1963; Morgan, 1971). Seismic tomography is a powerful technique to detect mantle plumes and determine their detailed structures. We determined a new whole-mantle 3-D P-wave velocity model (Tohoku model) using a global tomography method (Zhao, 2004, 2009). A flexible-grid approach with a grid interval of ~200 km is adopted to conduct the tomographic inversion. Our model shows that low-velocity (low-V) anomalies with diameters of several hundred kilometers are visible from the core-mantle boundary (CMB) to the surface under the major hotspot regions. Under the South Pacific, where several hotspots including Tahiti exist, there is a huge low-V anomaly extending from the CMB to the surface. This feature is consistent with previous models. We conducted extensive resolution tests in order to determine whether this low-V anomaly represents a single superplume or a plume cluster. Unfortunately this question is still not resolved because the ray path coverage in the mantle under the South Pacific is not good enough. A network of ocean bottom seismometers is necessary to solve this problem. To better understand the whole-mantle structure and dynamics, we also conducted P-wave tomographic inversions for the 3-D velocity structure and azimuthal anisotropy. At each grid node there are three unknown parameters: one represents the isotropic velocity, and the other two represent the azimuthal anisotropy. Our results show that in the shallow part of the mantle (< ~200 km depth) the fast velocity direction (FVD) is almost the same as the plate motion direction. For example, the FVD in the western Pacific is NWW-SEE, which is normal to the Japan trench axis. In the Tonga subduction zone, the FVD is also perpendicular to the trench axis. Under the Tibetan region the FVD is NE-SW, which is parallel to the direction of the India-Asia collision. In the deeper part of the upper mantle and in the lower mantle, the amplitude of anisotropy is reduced. One interesting feature is that the FVD at the bottom of the mantle aligns in a radial pattern centered on the South-Central Pacific, which may reflect the mantle upwelling of the Pacific superplume as well as the Hawaiian plume.
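
    As a hedged illustration of the three-parameters-per-node scheme mentioned above (the exact formulation of the Tohoku model is not reproduced here), azimuthal anisotropy in P-wave tomography is commonly parameterized as an isotropic perturbation plus two coefficients of 2-phi terms, from which a fast velocity direction and an anisotropy amplitude follow:

```latex
% Common parameterization (an assumption standing in for the exact formulas of
% the study above): P-wave velocity perturbation at a node for ray azimuth \phi,
% with A and B the two azimuthal-anisotropy unknowns.
\frac{\delta V}{V}(\phi) = \frac{\delta V_{\mathrm{iso}}}{V} + A\cos 2\phi + B\sin 2\phi ,
\qquad
\psi_{\mathrm{FVD}} = \tfrac{1}{2}\arctan\!\left(\frac{B}{A}\right),
\qquad
\text{anisotropy amplitude} \propto \sqrt{A^{2} + B^{2}} .
```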

  5. FIRST HIGH RESOLUTION 3D VELOCITY STRUCTURE OF THE VOLCANIC TENERIFE ISLAND (CANARY ISLANDS, SPAIN)

    NASA Astrophysics Data System (ADS)

    García-Yeguas, A.; Ibáñez, J.; Koulakov, I.; Sallares, V.

    2009-12-01

    A detailed 3D velocity model of Tenerife Island has been obtained for the first time using high-resolution traveltime seismic tomography. Tenerife is a volcanic island (Canary Islands, Spain) located in the Atlantic Ocean. The island hosts the Teide stratovolcano (3718 m high), part of the Cañadas-Teide-Pico Viejo volcanic complex. Las Cañadas is a caldera system more than 20 kilometers wide in which at least four distinct caldera processes have been identified. In January 2007, an active seismic experiment was carried out as part of the TOM-TEIDEVS project. 6850 air gun shots were fired at sea and recorded on a dense local seismic land network consisting of 150 independent three-component seismic stations. The good quality of the recorded data allowed P-wave arrivals to be identified up to offsets of 30-40 km, yielding more than 63000 traveltimes used in the tomographic inversion. Two different codes, FAST and ATOM_3D, were used in the tomographic inversion to validate the final 3D velocity models. The main difference between them lies in the ray tracing methods used in the forward modeling, finite differences and ray bending algorithms, respectively. The velocity models show a very heterogeneous upper crust, as is usual in similar volcanic environments. The tomographic images indicate the absence of a magma chamber near the surface. The ancient Las Cañadas caldera borders are clearly imaged, featuring relatively high seismic velocities. Several resolution and accuracy tests were carried out to quantify the reliability of the final velocity models. Checkerboard tests show that the well-resolved regions extend down to 6-8 km depth. We also carried out synthetic tests in which we successfully reproduced individual anomalies observed in the velocity models. The uncertainties associated with the inverse problem were studied by means of a Monte Carlo-type analysis, inverting N random velocity models with random errors in velocities and traveltimes, assuming all of them to be equiprobable. These tests support the uniqueness of this first 3D velocity model that characterizes the internal structure of Tenerife Island. The main conclusions of our work are: a) this is the first 3-D velocity image of the area; b) we have observed low-velocity anomalies near the surface that could be associated with the presence of magma, water reservoirs and volcanic landslides; c) high-velocity anomalies could be related to ancient volcanic episodes or basement structures; d) our results could help to resolve many questions related to the evolution of the volcanic system, such as the presence or absence of large landslides, caldera-forming explosions or other events; e) this image is an important tool for improving knowledge of the volcanic hazard, and therefore of the volcanic risk.

  6. Feasibility of track-based multiple scattering tomography

    NASA Astrophysics Data System (ADS)

    Jansen, H.; Schütze, P.

    2018-04-01

    We present a tomographic technique making use of a gigaelectronvolt electron beam for the determination of the material budget distribution of centimeter-sized objects by means of simulations and measurements. In both cases, the trajectory of electrons traversing a sample under test is reconstructed using a pixel beam-telescope. The width of the deflection angle distribution of electrons undergoing multiple Coulomb scattering at the sample is estimated. Basing the sinogram on position-resolved estimators enables the reconstruction of the original sample using an inverse radon transform. We exemplify the feasibility of this tomographic technique via simulations of two structured cubes—made of aluminium and lead—and via an in-beam measured coaxial adapter. The simulations yield images with FWHM edge resolutions of (177 ± 13) μm and a contrast-to-noise ratio of 5.6 ± 0.2 (7.8 ± 0.3) for aluminium (lead) compared to air. The tomographic reconstruction of a coaxial adapter serves as experimental evidence of the technique and yields a contrast-to-noise ratio of 15.3 ± 1.0 and a FWHM edge resolution of (117 ± 4) μm.
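
    Conceptually, the position-resolved scattering-angle width plays the role of a projection of the material budget, so the reconstruction reduces to an inverse Radon transform. The sketch below illustrates this with scikit-image on a two-block phantom; the phantom and the assumption that the squared scattering angle integrates the material budget along the track are simplifications, not the authors' processing chain.

```python
# Minimal sketch: treat position-resolved scattering-angle widths as line
# integrals of material budget, build a sinogram over rotation angles, and
# invert it with a filtered back-projection. Phantom and scaling are assumptions.
import numpy as np
from skimage.transform import radon, iradon

# Phantom of scattering power: a lighter and a denser block in air
phantom = np.zeros((128, 128))
phantom[40:88, 30:60] = 1.0        # "aluminium"-like block
phantom[40:88, 70:100] = 6.0       # "lead"-like block

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
# Each sinogram column ~ projected material budget seen by tracks at that angle;
# the measured width estimator would be proportional to this line integral.
sinogram = radon(phantom, theta=theta)
sinogram += 0.05 * sinogram.max() * np.random.default_rng(5).standard_normal(sinogram.shape)

reconstruction = iradon(sinogram, theta=theta)   # ramp-filtered back-projection
```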

  7. Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.

    2008-12-01

    To develop 3D velocity models to improve nuclear explosion monitoring capability, we have built a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately the amount of damping is not known a priori and can significantly extend the number of calls of the computationally expensive ray tracer and the least-squares matrix solver. If the damping term is too small, the solution step size either produces an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multi-variate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
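
    The damping logic described above can be sketched in a few lines (a toy stand-in, not the pseudo-bending ray tracer or the actual solver): a trial damped step is accepted and the damping relaxed if it reduces the misfit, otherwise the step is rejected and the damping increased.

```python
# Schematic sketch of Levenberg-Marquardt damping adaptation on a toy nonlinear
# least-squares problem. `forward` is a stand-in, not a ray tracer.
import numpy as np

def forward(m):                       # toy nonlinear "travel-time" functional
    return np.array([m[0] ** 2 + m[1], np.sin(m[0]) + m[1] ** 2, m[0] * m[1]])

def jacobian(m, eps=1e-6):            # finite-difference Jacobian
    J = np.zeros((3, m.size))
    f0 = forward(m)
    for k in range(m.size):
        dm = m.copy(); dm[k] += eps
        J[:, k] = (forward(dm) - f0) / eps
    return J

d_obs = forward(np.array([1.3, -0.4]))
m = np.array([0.0, 0.0])
lam = 1.0                             # initial damping
for _ in range(50):
    r = d_obs - forward(m)
    J = jacobian(m)
    step = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
    if np.sum((d_obs - forward(m + step)) ** 2) < np.sum(r ** 2):
        m, lam = m + step, lam * 0.5  # accept: relax damping (toward Gauss-Newton)
    else:
        lam *= 2.0                    # reject: increase damping (toward steepest descent)
```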

  8. Improvements of Travel-time Tomography Models from Joint Inversion of Multi-channel and Wide-angle Seismic Data

    NASA Astrophysics Data System (ADS)

    Begović, Slaven; Ranero, César; Sallarès, Valentí; Meléndez, Adrià; Grevemeyer, Ingo

    2016-04-01

    Multichannel seismic reflection (MCS) and wide-angle seismic (WAS) data are commonly modeled and interpreted with different approaches. Conventional travel-time tomography models using solely WAS data lack the resolution to define the model properties and, particularly, the geometry of geologic boundaries (reflectors) with the required accuracy, especially in the shallow, complex upper geological layers. We plan to mitigate this issue by combining these two data sets, specifically taking advantage of the high redundancy of MCS data, integrated with WAS data into a common inversion scheme to obtain higher-resolution velocity models (Vp), decrease Vp uncertainty and improve the geometry of reflectors. To do so, we have adapted the tomo2d and tomo3d joint refraction and reflection travel-time tomography codes (Korenaga et al, 2000; Meléndez et al, 2015) to deal with streamer data and MCS acquisition geometries. The scheme results in a joint travel-time tomographic inversion based on integrated travel-time information from refracted and reflected phases of the WAS data and reflected phases identified in the MCS common depth point (CDP) or shot gathers. To illustrate the advantages of a common inversion approach we have compared the modeling results for synthetic data sets using two different travel-time inversion strategies. First, we produced seismic velocity models and reflector geometries following a typical refraction and reflection travel-time tomographic strategy, modeling only WAS data with a typical acquisition geometry (one OBS every 10 km). Second, we performed a joint inversion of the two types of seismic data, integrating two coincident data sets consisting of MCS data collected with an 8 km-long streamer and the WAS data into a common inversion scheme. Our synthetic results of the joint inversion indicate a 5-10 times smaller ray travel-time misfit in the deeper parts of the model, compared to models obtained using only wide-angle seismic data. As expected, there is an important improvement in the definition of the reflector geometry, which, in turn, improves the accuracy of the velocity retrieval just above and below the reflector. To test the joint inversion approach with real data, we combined wide-angle seismic (WAS) and coincident multichannel seismic reflection (MCS) data acquired in the northern Chile subduction zone into a common inversion scheme to obtain higher-resolution information on the upper plate and the inter-plate boundary.

  9. The shifting zoom: new possibilities for inverse scattering on electrically large domains

    NASA Astrophysics Data System (ADS)

    Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien

    2017-04-01

    Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applications, such as the investigation of cultural heritage, the characterization of foundations or buried services, the identification of unexploded ordnance, and so on [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and do not require the inversion of a matrix, nor the calculation of the elements of a matrix. In fact, they are essentially based on the adjoint of the linearised scattering operator, which in the end allows the inversion formula to be written as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This makes it possible to apply a microwave tomography algorithm even to large investigation domains. However, the joining side by side of sequential investigation domains introduces a problem of limited (and asymmetric) maximum view angle with regard to targets occurring close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than with adjacent investigation domains, but the extra time actually required is minimal because the matrix to be inverted is calculated once and for all, as is its singular value decomposition: what is repeated is only a fast matrix-vector multiplication. References [1] M. Pieraccini, L. Noferini, D. Mecatti, C. Atzeni, R. Persico, F. Soldovieri, "Advanced Processing Techniques for Step-frequency Continuous-Wave Penetrating Radar: the Case Study of 'Palazzo Vecchio' Walls (Firenze, Italy)," Research on Nondestructive Evaluation, vol. 17, pp. 71-83, 2006. [2] N. Masini, R. Persico, E. Rizzo, A. Calia, M. T. Giannotta, G. Quarta, A. Pagliuca, "Integrated Techniques for Analysis and Monitoring of Historical Monuments: the case of S. Giovanni al Sepolcro in Brindisi (Southern Italy)," Near Surface Geophysics, vol. 8 (5), pp. 423-432, 2010. [3] E. Pettinelli, A. Di Matteo, E. Mattei, L. Crocco, F. Soldovieri, J. D. Redman, and A. P. Annan, "GPR response from buried pipes: Measurement on field site and tomographic reconstructions," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, n. 8, pp. 2639-2645, Aug. 2009. [4] O. Lopera, E. C. Slob, N. Milisavljevic and S. Lambot, "Filtering soil surface and antenna effects from GPR data to enhance landmine detection," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, n. 3, pp. 707-717, 2007. [5] R. Persico, "Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing," Wiley, 2014. [6] R. Persico, J. Sala, "The problem of the investigation domain subdivision in 2D linear inversions for large scale GPR data," IEEE Geoscience and Remote Sensing Letters, vol. 11, n. 7, pp. 1215-1219, doi 10.1109/LGRS.2013.2290008, July 2014. [7] R. Persico, F. Soldovieri, S. Lambot, "Shifting zoom in 2D linear inversions performed on GPR data gathered along an electrically large investigation domain," Proc. 16th International Conference on Ground Penetrating Radar GPR2016, Hong Kong, June 13-16, 2016.
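
    The computational point about the reused factorization can be sketched as follows (toy matrices and window layout, not the actual 2-D inversion kernels of [6-7]): the truncated-SVD pseudo-inverse of the window operator is formed once, and every shifted, overlapped window then costs only one matrix-vector product.

```python
# Minimal sketch of the "shifting zoom" bookkeeping: factorize the linearized
# scattering matrix of one window once, then invert every overlapped, shifted
# window with a single matrix-vector multiplication. All arrays are toy data.
import numpy as np

rng = np.random.default_rng(7)
n_data_win, n_pix_win = 120, 200          # data samples / pixels per window
A = rng.standard_normal((n_data_win, n_pix_win))   # same operator for every window

# One-off factorization (the expensive part)
U, s, Vh = np.linalg.svd(A, full_matrices=False)
keep = s > 0.05 * s[0]
A_pinv = Vh[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T

# Long profile covered by windows shifted by half a window length
full_data = rng.standard_normal(1200)
window_starts = range(0, full_data.size - n_data_win + 1, n_data_win // 2)
recon_windows = []
for start in window_starts:
    d_win = full_data[start:start + n_data_win]
    recon_windows.append(A_pinv @ d_win)   # only a fast matrix-vector product
# The overlapped reconstructions would then be merged, keeping the central,
# well-illuminated part of each window.
```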

  10. SU-E-J-174: Adaptive PET-Based Dose Painting with Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darwish, N; Mackie, T; Thomadsen, B

    2014-06-01

    Purpose: PET imaging can be converted into a dose prescription directly. Due to the variability of the intensity of the PET image, a PET-based prescription may be superior to a uniform dose prescription. Furthermore, unlike image reconstruction, where the image solution is not known in advance, the prescribed dose is known a priori from the PET image. Therefore, optimum beam orientations are derivable. Methods: We can assume the PET image to be the prescribed dose and invert it to determine the energy fluence. The same method used to reconstruct tissue images from projections could be used to solve the inverse problem of determining beam orientations and modulation patterns from a dose prescription [10]. Unlike standard tomographic reconstruction of images from measured projection profiles, the inversion of the prescribed dose results in a photon fluence which may be negative and therefore unphysical. Two-dimensional modulated beams can be modelled in terms of the attenuated or exponential Radon transform of the prescribed dose function (assumed to be the PET image in this case), the application of a Ram-Lak filter, and inversion by backprojection. Unlike the case in PET processing, however, the filtered beam obtained from the inversion represents a physical photon fluence. Therefore, a positivity constraint on the fluence (setting negative fluence to zero) must be applied (Brahme et al 1982, Bortfeld et al 1990). Results: Truncating the negative profiles from the PET data results in an approximation of the derivable energy fluence. Backprojection of the deliverable fluence is an approximation of the dose delivered. The deliverable dose is comparable to the original PET image. Conclusion: It is possible to use the PET data or image as a direct indicator of deliverable fluence for cylindrical radiotherapy systems such as TomoTherapy.
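
    Ignoring attenuation, the inversion chain described in the Methods can be sketched as follows: project the prescription image, ramp-filter (Ram-Lak) each projection to obtain fluence profiles, zero the negative fluence, and back-project the truncated profiles to approximate the deliverable dose. The phantom and the 180-projection parallel-beam geometry are illustrative assumptions, not the TomoTherapy geometry.

```python
# Minimal sketch of the inversion described above, ignoring attenuation:
# Radon transform of a "prescription" image, Ram-Lak filtering of each
# projection, positivity truncation of the fluence, unfiltered back-projection.
import numpy as np
from skimage.transform import radon, iradon

prescription = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
prescription[(yy - 64) ** 2 + (xx - 54) ** 2 < 20 ** 2] = 1.0    # toy "PET uptake"
prescription[(yy - 60) ** 2 + (xx - 80) ** 2 < 8 ** 2] = 2.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
proj = radon(prescription, theta=theta)            # projections of the prescription

# Ram-Lak (ramp) filtering of each projection in the frequency domain
n = proj.shape[0]
ram_lak = np.abs(np.fft.fftfreq(n))[:, None]
fluence = np.real(np.fft.ifft(np.fft.fft(proj, axis=0) * ram_lak, axis=0))

fluence[fluence < 0] = 0.0                         # positivity constraint on fluence

# Unfiltered back-projection of the deliverable (truncated) fluence profiles
deliverable_dose = iradon(fluence, theta=theta, filter_name=None)
```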

  11. Experimentally enhanced model-based deconvolution of propagation-based phase-contrast data

    NASA Astrophysics Data System (ADS)

    Pichotka, M.; Palma, K.; Hasn, S.; Jakubek, J.; Vavrik, D.

    2016-12-01

    In recent years phase-contrast has become a much-investigated modality in radiographic imaging. The radiographic setups employed in phase-contrast imaging are typically rather costly and complex, e.g. high-performance Talbot-Lau interferometers operated at synchrotron light sources. In-line phase-contrast imaging represents the simplest approach to phase-contrast enhancement. Utilizing small-angle deflection within the imaged sample and the resulting interference of the deflected and un-deflected beams during spatial propagation, in-line phase-contrast imaging only requires a well-collimated X-ray source with a high-contrast, high-resolution detector. Employing high magnification, the above conditions are intrinsically fulfilled in cone-beam micro-tomography. As opposed to 2D imaging, where contrast enhancement is generally considered beneficial, in tomographic modalities the in-line phase-contrast effect can be quite a nuisance, since it renders the inverse problem posed by tomographic reconstruction inconsistent, thus causing reconstruction artifacts. We present an experimentally enhanced model-based approach to disentangle absorption and in-line phase-contrast. The approach employs comparison of transmission data to a system model computed iteratively on-line. By comparison of the forward model to absorption data acquired in continuous rotation, strong local deviations of the data residual are successively identified as likely candidates for in-line phase-contrast. By inducing minimal vibrations (a few mrad) of the sample around the peaks of such deviations, the transmission signal can be decomposed into a constant absorptive fraction and an oscillating signal caused by phase-contrast, which in turn allows separate maps for absorption and phase-contrast to be generated. The contributions of phase-contrast and the corresponding artifacts are subsequently removed from the tomographic dataset. In principle, if 3D handling of the sample is available, this method also allows discontinuities to be tracked throughout the volume and therefore constitutes a powerful tool in 3D defectoscopy.

  12. Optical tomograph optimized for tumor detection inside highly absorbent organs

    NASA Astrophysics Data System (ADS)

    Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Dinten, Jean-Marc; Josserand, Véronique; Coll, Jean-Luc

    2011-05-01

    This paper presents a tomograph for small animal fluorescence imaging. The compact and cost-effective system described in this article was designed to address the problem of tumor detection inside highly absorbent heterogeneous organs, such as lungs. To validate the tomograph's ability to detect cancerous nodules inside lungs, in vivo tumor growth was studied on seven cancerous mice bearing murine mammary tumors marked with Alexa Fluor 700. They were successively imaged 10, 12, and 14 days after the primary tumor implantation. The fluorescence maps were compared over this time period. As expected, the reconstructed fluorescence increases with the tumor growth stage.

  13. The Collaborative Seismic Earth Model Project

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.

    2017-12-01

    We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the man power of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge on 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as initial model, these refinements are contributed by external collaborators, and used to advance the CSEM to the next state. This mode of operation allows the CSEM to [1] harness the distributed man and computing power of the community, [2] to make consistent use of prior knowledge, and [3] to combine different tomographic techniques, needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements, computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.

  14. GPS Tomography: Water Vapour Monitoring for Germany

    NASA Astrophysics Data System (ADS)

    Bender, Michael; Dick, Galina; Wickert, Jens; Raabe, Armin

    2010-05-01

    Ground-based GPS atmosphere sounding provides numerous atmospheric quantities with a high temporal resolution under all weather conditions. The spatial resolution of the GPS observations is mainly determined by the number of GNSS satellites and GPS ground stations. The latter has been increased considerably in the last few years, leading to more reliable and better-resolved GPS products. New techniques such as GPS water vapour tomography gain increased significance as data from large and dense GPS networks become available. GPS tomography has the potential to provide spatially resolved fields of different quantities operationally, i.e., the humidity or wet refractivity required for meteorological applications, or the refractive index, which is important for several space-based observations and for precise positioning. The number of German GPS stations operationally processed by the GFZ in Potsdam was recently enlarged to more than 300. About 28000 IWV observations and more than 1.4 million slant total delay observations are now available per day, with temporal resolutions of 15 min and 2.5 min, respectively. The extended network leads not only to a higher spatial resolution of the tomographically reconstructed 3D fields but also to a much higher stability of the inversion process and thereby to an increased quality of the results. Under these improved conditions the GPS tomography can operate continuously over several days or weeks without applying overly tight constraints. Time series of tomographically reconstructed humidity fields will be shown and different initialisation strategies will be discussed: initialisation with a simple exponential profile, with a 3D humidity field extrapolated from synoptic observations, and with the result of the preceding reconstruction. The results are compared to tomographic reconstructions initialised with COSMO-DE analyses and to the corresponding model fields. The inversion can be further stabilised by making use of independent, adequately weighted observations, such as synoptic observations or IWV data. The impact of such observations on the quality of the tomographic reconstruction will be discussed, together with different alternatives for weighting different types of observations.

  15. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin

    2015-01-01

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain the X-ray radiation dose to optimal levels, either to increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or to conserve structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrange multiplier fashion, with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation. © 2015 Optical Society of America
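
    A small sketch of the ingredients, under stated simplifications (scikit-image's radon/iradon used as an approximate forward/adjoint pair, a smoothed isotropic TV, and plain gradient descent in place of the paper's conjugate-gradient scheme), is given below; the (residual norm, TV norm) pairs collected over the scan of the regularization weight are what would be plotted as the discrete L-curve.

```python
# Sketch of few-view TV-regularized reconstruction with a regularization-weight
# scan for a discrete L-curve. Assumptions: skimage radon/iradon as approximate
# forward/adjoint, smoothed TV, simple gradient descent (not the paper's method).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

x_true = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 20, endpoint=False)      # few-view acquisition
b = radon(x_true, theta=theta)

yy, xx = np.mgrid[:128, :128]
mask = (yy - 63.5) ** 2 + (xx - 63.5) ** 2 <= 62 ** 2    # reconstruction circle

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed isotropic TV and its value."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    div = (np.diff(gx / norm, axis=1, prepend=0) +
           np.diff(gy / norm, axis=0, prepend=0))
    return -div, norm.sum()

l_curve = []
for lam in [0.01, 0.1, 1.0, 10.0]:
    x = np.zeros_like(x_true)
    for _ in range(100):
        resid = radon(x, theta=theta) - b
        grad_data = iradon(resid, theta=theta, filter_name=None)   # ~ adjoint
        g_tv, tv_val = tv_grad(x)
        x -= 1e-3 * (grad_data + lam * g_tv)
        x *= mask
    l_curve.append((np.linalg.norm(radon(x, theta=theta) - b), tv_val))
# Plotting log(residual) against log(TV) for these weights gives the discrete
# L-curve used to pick the regularization parameter.
```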

  16. Teleseismic tomography for imaging Earth's upper mantle

    NASA Astrophysics Data System (ADS)

    Aktas, Kadircan

    Teleseismic tomography is an important imaging tool in earthquake seismology, used to characterize lithospheric structure beneath a region of interest. In this study I investigate three different tomographic techniques applied to real and synthetic teleseismic data, with the aim of imaging the velocity structure of the upper mantle. First, by applying well established traveltime tomographic techniques to teleseismic data from southern Ontario, I obtained high-resolution images of the upper mantle beneath the lower Great Lakes. Two salient features of the 3D models are: (1) a patchy, NNW-trending low-velocity region, and (2) a linear, NE-striking high-velocity anomaly. I interpret the high-velocity anomaly as a possible relict slab associated with ca. 1.25 Ga subduction, whereas the low-velocity anomaly is interpreted as a zone of alteration and metasomatism associated with the ascent of magmas that produced the Late Cretaceous Monteregian plutons. The next part of the thesis is concerned with adaptation of existing full-waveform tomographic techniques for application to teleseismic body-wave observations. The method used here is intended to be complementary to traveltime tomography, and to take advantage of efficient frequency-domain methodologies that have been developed for inverting large controlled-source datasets. Existing full-waveform acoustic modelling and inversion codes have been modified to handle plane waves impinging from the base of the lithospheric model at a known incidence angle. A processing protocol has been developed to prepare teleseismic observations for the inversion algorithm. To assess the validity of the acoustic approximation, the processing procedure and modelling-inversion algorithm were tested using synthetic seismograms computed using an elastic Kirchhoff integral method. These tests were performed to evaluate the ability of the frequency-domain full-waveform inversion algorithm to recover topographic variations of the Moho under a variety of realistic scenarios. Results show that frequency-domain full-waveform tomography is generally successful in recovering both sharp and discontinuous features. Thirdly, I developed a new method for creating an initial background velocity model for the inversion algorithm, which is sufficiently close to the true model so that convergence is likely to be achieved. I adapted a method named Deformable Layer Tomography (DLT), which adjusts interfaces between layers rather than velocities within cells. I applied this method to a simple model comprising a single uniform crustal layer and a constant-velocity mantle, separated by an irregular Moho interface. A series of tests was performed to evaluate the sensitivity of the DLT algorithm; the results show that my algorithm produces useful results within a realistic range of incident-wave obliquity, incidence angle and signal-to-noise level. Keywords. Teleseismic tomography, full waveform tomography, deformable layer tomography, lower Great Lakes, crust and upper mantle.

  17. Nonlinear 1D and 2D waveform inversions of SS precursors and their applications in mantle seismic imaging

    NASA Astrophysics Data System (ADS)

    Dokht, R.; Gu, Y. J.; Sacchi, M. D.

    2016-12-01

    Seismic velocities and the topography of mantle discontinuities are crucial for the understanding of mantle structure, dynamics and mineralogy. While these two observables are closely linked, the vast majority of high-resolution seismic images are retrieved under the assumption of horizontally stratified mantle interfaces. This conventional correction-based process could lead to considerable errors due to the inherent trade-off between velocity and discontinuity depth. In this study, we introduce a nonlinear joint waveform inversion method that simultaneously recovers discontinuity depths and seismic velocities using the waveforms of SS precursors. Our target region is the upper mantle and transition zone beneath Northeast Asia. In this region, the inversion outcomes clearly delineate a westward-dipping high-velocity structure in association with the subducting Pacific plate. Above the flat part of the slab west of the Japan Sea, our results show a shear wave velocity reduction of 1.5% in the upper mantle and a 10-15 km depression of the 410 km discontinuity beneath the Changbaishan volcanic field. We also identify the maximum correlation between shear velocity and transition zone thickness at an approximate slab dip of 30 degrees, which is consistent with previously reported values in this region. To validate the results of the 1D waveform inversion of SS precursors, we discretize the mantle beneath the study region and conduct a 2D waveform tomographic survey using the same nonlinear approach. The problem is simplified by adopting the discontinuity depths from the 1D inversion and solving only for perturbations in shear velocities. The resulting models obtained from the 1D and 2D approaches are self-consistent. Low velocities beneath the Changbai intraplate volcano likely persist to a depth of 500 km. Collectively, our seismic observations suggest that the active volcanoes in eastern China may be fueled by a hot thermal anomaly originating from the mantle transition zone.

  18. Box Tomography: An efficient tomographic method for imaging localized structures in the deep Earth

    NASA Astrophysics Data System (ADS)

    Masson, Yder; Romanowicz, Barbara

    2017-04-01

    The accurate imaging of localized geological structures inside the deep Earth is key to understand our planet and its history. Since the introduction of the Preliminary Reference Earth Model, many generations of global tomographic models have been developed and give us access to the 3D structure of the Earth's interior. The latest generation of global tomographic models has emerged with the development of accurate numerical wavefield computations in a 3D earth combined with access to enhanced HPC capabilities. These models have sharpened up mantle images and unveiled relatively small scale structures that were blurred out in previous generation models. Fingerlike structures have been found at the base of the oceanic asthenosphere, and vertically oriented broad low velocity plume conduits [1] extend throughout the lower mantle beneath those major hotspots that are located within the perimeter of the deep mantle large low shear velocity provinces (LLSVPs). While providing new insights into our understanding of mantle dynamics, the detailed morphology of these features requires further efforts to obtain higher resolution images. In recent years, we developed a theoretical framework [2][3] for the tomographic imaging of localised geological structures buried inside the Earth, where no seismic sources nor receivers are necessarily present. We call this "box tomography" [4]. The essential difference between box-tomography and standard tomographic methods is that the numerical modeling (i.e. the raytracing in travel time tomography and the wave propagation in waveform tomography or full waveform inversion) is completely confined within the small box-region imaged. Thus, box tomography is a lot more efficient than global tomography (i.e. where we invert for the velocity in the larger volume that encompasses all the sources and receivers), for imaging localised objects. We present 2D and 3D examples showing that box tomography can be employed for imaging structures present within the D'' region at the base of the mantle. Further, we show that box-tomography performs well even in the difficult situation where the velocity distribution in the mantle above the target structure is not known a-priori. REFERENCES [1] French, S. W. and B. Romanowicz (2015) Broad Plumes at the base of the mantle beneath major hotspots, Nature, 525, 95-99 [2] Masson, Y., Cupillard, P., Capdeville, Y., & Romanowicz, B. (2013). On the numerical implementation of time-reversal mirrors for tomographic imaging. Geophysical Journal International, ggt459. [3] Masson, Y., & Romanowicz, B. (2017). Fast computation of synthetic seismograms within a medium containing remote localized perturbations: a numerical solution to the scattering problem. Geophysical Journal International, 208(2), 674-692. [4] Masson, Y., & Romanowicz, B. (2017). Box Tomography: Localised imaging of remote targets buried in an unknown medium, a step forward for understanding key structures in the deep Earth. Geophysical Journal International, (under review).

  19. Variable pixel size ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei

    2017-06-01

    A novel ionospheric tomography technique based on variable pixel sizes was developed for the tomographic reconstruction of the ionospheric electron density (IED) distribution. In the variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by a decomposition of the lower and upper ionosphere with different pixel sizes. Thus, the lower and upper IED distributions may be determined very differently by the available data. Variable pixel size ionospheric tomography and constant pixel size tomography are similar in most other respects. There are two main differences between models with constant and variable pixel sizes: one is that the segments of the GPS signal path must be assigned to the different kinds of pixels in the inversion; the other is that the smoothness constraint factor needs to be modified appropriately where the pixels change in size. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method, provided that effort is spent on identifying the regions of the model with the best data coverage. The variable pixel size method can not only greatly improve the efficiency of the inversion, but also produce IED images with a fidelity matching that of a uniform pixel size method. In addition, variable pixel size tomography can reduce the underdetermination of the ill-posed inverse problem when the data coverage is irregular or sparse, by adjusting the proportions of pixels with different sizes. In comparison with constant pixel size tomography models, the variable pixel size ionospheric tomography technique achieved relatively good results in a numerical simulation. A careful validation of the reliability and superiority of variable pixel size ionospheric tomography was performed. Finally, according to the results of the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% in the forward modeling compared with conventional constant pixel size tomography models.

  20. Relative arrival-time upper-mantle tomography and the elusive background mean

    NASA Astrophysics Data System (ADS)

    Bastow, Ian D.

    2012-08-01

    The interpretation of seismic tomographic images of upper-mantle seismic wave speed structure is often a matter of considerable debate because the observations can usually be explained by a range of hypotheses, including variable temperature, composition, anisotropy, and the presence of partial melt. An additional problem, often overlooked in tomographic studies using relative as opposed to absolute arrival-times, is the issue of the resulting velocity model's zero mean. In shield areas, for example, relative arrival-time analysis strips off a background mean velocity structure that is markedly fast compared to the global average. Conversely, in active areas, the background mean is often markedly slow compared to the global average. Appreciation of this issue is vital when interpreting seismic tomographic images: 'high' and 'low' velocity anomalies should not necessarily be interpreted, respectively, as 'fast' and 'slow' compared to 'normal mantle'. This issue has been discussed in the seismological literature in detail over the years, yet subsequent tomography studies have still fallen into the trap of mis-interpreting their velocity models. I highlight here some recent examples of this and provide a simple strategy to address the problem using constraints from a recent global tomographic model, and insights from catalogues of absolute traveltime anomalies. Consultation of such absolute measures of seismic wave speed should be routine during regional tomographic studies, if only for the benefit of the broader Earth Science community, who readily follow the red = hot and slow, blue = cold and fast rule of thumb when interpreting the images for themselves.
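
    The point about the stripped-off background mean can be illustrated with a few lines of arithmetic: for a hypothetical shield network where every absolute anomaly is fast, demeaning per event produces a mix of positive and negative relative residuals, and the sign of the resulting model anomalies says nothing about the deviation from a global reference.

```python
# Tiny numerical illustration of the "elusive background mean": absolute
# travel-time anomalies at a hypothetical shield-like network are all early
# (fast), but relative arrival-time analysis demeans them per event, so the
# recovered model is forced toward zero mean and 'high'/'low' anomalies no
# longer indicate fast/slow relative to normal mantle.
import numpy as np

# Absolute anomalies (s) at 6 stations for one event: all early (fast shield)
absolute = np.array([-1.8, -1.6, -2.1, -1.9, -1.5, -2.0])

relative = absolute - absolute.mean()       # what relative arrival times retain
print("background mean stripped off:", absolute.mean())   # ~ -1.8 s, lost to the model
print("relative residuals:", relative)      # mix of positive and negative values
# A station with relative residual +0.3 s would map into a "low velocity"
# anomaly even though it is ~1.5 s fast in an absolute sense.
```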

  1. Tomographic inversion of P-wave velocity and Q structures beneath the Kirishima volcanic complex, Southern Japan, based on finite difference calculations of complex traveltimes

    USGS Publications Warehouse

    Tomatsu, T.; Kumagai, H.; Dawson, P.B.

    2001-01-01

    We estimate the P-wave velocity and attenuation structures beneath the Kirishima volcanic complex, southern Japan, by inverting the complex traveltimes (arrival times and pulse widths) of waveform data obtained during an active seismic experiment conducted in 1994. In this experiment, six 200-250 kg shots were recorded at 163 temporary seismic stations deployed on the volcanic complex. We use first-arrival times for the shots, which were hand-measured interactively. The waveform data are Fourier transformed into the frequency domain and analysed using a new method based on autoregressive modelling of complex decaying oscillations in the frequency domain to determine pulse widths for the first-arrival phases. A non-linear inversion method is used to invert 893 first-arrival times and 325 pulse widths to estimate the velocity and attenuation structures of the volcanic complex. Wavefronts for the inversion are calculated with a finite difference method based on the eikonal equation, which is well suited to estimating the complex traveltimes for the structures of the Kirishima volcanic complex, where large structural heterogeneities are expected. The attenuation structure is then computed using ray paths derived from the velocity structure. We obtain 3-D velocity and attenuation structures down to 1.5 and 0.5 km below sea level, respectively. High-velocity pipe-like structures with correspondingly low attenuation are found under the summit craters. These pipe-like structures are interpreted as remnant conduits of solidified magma. No evidence of a shallow magma chamber is visible in the tomographic images.
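
    The record above computes first-arrival wavefronts with a finite-difference eikonal solver. The sketch below illustrates the same idea using the scikit-fmm fast-marching package as a stand-in, on a hypothetical velocity model with a fast layer at depth and a high-velocity pipe; it is not the solver used in the study.

```python
import numpy as np
import skfmm  # scikit-fmm fast-marching eikonal solver (illustrative stand-in
              # for the finite-difference scheme described in the record above)

# Minimal sketch: first-arrival travel times T(x, z) from a surface shot through
# a heterogeneous P-velocity model, by solving |grad T| = 1/v (eikonal equation).
nx, nz, dx = 200, 100, 0.05                      # grid size and spacing in km
v = np.full((nz, nx), 3.0)                       # background 3 km/s
v[60:, :] = 5.0                                  # faster material at depth
v[20:50, 90:110] = 4.5                           # hypothetical high-velocity pipe

phi = np.ones((nz, nx))
phi[0, nx // 2] = -1.0                           # zero level set marks the shot point
t = skfmm.travel_time(phi, v, dx=dx)             # first-arrival times in seconds

print("travel time at far receiver:", float(t[0, -1]), "s")
```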

  2. Travel-time Tomography of the Upper Mantle using Amphibious Array Seismic Data from the Cascadia Initiative and EarthScope

    NASA Astrophysics Data System (ADS)

    Cafferky, S.; Schmandt, B.

    2013-12-01

    Offshore and onshore broadband seismic data from the Cascadia Initiative and EarthScope provide a unique opportunity to image 3-D mantle structure continuously from a spreading ridge across a subduction zone and into continental back-arc provinces. Year one data from the Cascadia Initiative primarily cover the northern half of the Juan de Fuca plate and the Cascadia forearc and arc provinces. These new data are used in concert with previously collected onshore data for a travel-time tomography investigation of mantle structure. Measurement of relative teleseismic P travel times for land-based and ocean-bottom stations operating during year one was completed for 16 events using waveform cross-correlation, after bandpass filtering the data from 0.05-0.1 Hz with a second-order Butterworth filter. Maps of travel-time delays show changing patterns with event azimuth, suggesting that structural variations exist beneath the oceanic plate. The data from year one and prior onshore travel-time measurements were used in a tomographic inversion for 3-D mantle P-velocity structure. Inversions conducted to date use ray paths determined by a 1-D velocity model. By the time of the meeting we plan to present models using ray paths that are iteratively updated to account for 3-D structure. Additionally, we are testing the importance of corrections for sediment and crust thickness on imaging of mantle structure near the subduction zone. Low velocities beneath the Juan de Fuca slab that were previously suggested by onshore data are further supported by our preliminary tomographic inversions using the amphibious array data.
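
    A minimal sketch of the relative delay measurement described above, on synthetic traces: two noisy copies of the same wavelet are bandpass filtered with a second-order Butterworth filter (0.05-0.1 Hz) and cross-correlated to pick the lag. The sampling rate, wavelet and noise level are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

# Minimal sketch (synthetic data): measure a relative P delay between two stations
# by cross-correlating bandpass-filtered waveforms in the 0.05-0.1 Hz band.
fs = 20.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 300, 1.0 / fs)
rng = np.random.default_rng(0)
wavelet = lambda t0: np.exp(-((t - t0) / 5.0) ** 2) * np.sin(2 * np.pi * 0.08 * (t - t0))
tr1 = wavelet(100.0) + 0.05 * rng.standard_normal(t.size)
tr2 = wavelet(102.3) + 0.05 * rng.standard_normal(t.size)   # arrival 2.3 s later

b, a = butter(2, [0.05, 0.1], btype="bandpass", fs=fs)       # second-order Butterworth
f1, f2 = filtfilt(b, a, tr1), filtfilt(b, a, tr2)

cc = correlate(f2, f1, mode="full")
lag = (np.argmax(cc) - (f1.size - 1)) / fs                   # lag of tr2 relative to tr1
print(f"measured relative delay: {lag:.2f} s")
```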

  3. Saline tracer visualized with three-dimensional electrical resistivity tomography: Field-scale spatial moment analysis

    USGS Publications Warehouse

    Singha, Kamini; Gorelick, Steven M.

    2005-01-01

    Cross-well electrical resistivity tomography (ERT) was used to monitor the migration of a saline tracer in a two-well pumping-injection experiment conducted at the Massachusetts Military Reservation in Cape Cod, Massachusetts. After injecting 2200 mg/L of sodium chloride for 9 hours, ERT data sets were collected from four wells every 6 hours for 20 days. More than 180,000 resistance measurements were collected during the tracer test. Each ERT data set was inverted to produce a sequence of 3-D snapshot maps that track the plume. In addition to the ERT experiment a pumping test and an infiltration test were conducted to estimate horizontal and vertical hydraulic conductivity values. Using modified moment analysis of the electrical conductivity tomograms, the mass, center of mass, and spatial variance of the imaged tracer plume were estimated. Although the tomograms provide valuable insights into field-scale tracer migration behavior and aquifer heterogeneity, standard tomographic inversion and application of Archie's law to convert electrical conductivities to solute concentration results in underestimation of tracer mass. Such underestimation is attributed to (1) reduced measurement sensitivity to electrical conductivity values with distance from the electrodes and (2) spatial smoothing (regularization) from tomographic inversion. The center of mass estimated from the ERT inversions coincided with that given by migration of the tracer plume using 3-D advective-dispersion simulation. The 3-D plumes seen using ERT exhibit greater apparent dispersion than the simulated plumes and greater temporal spreading than observed in field data of concentration breakthrough at the pumping well.
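
    The moment analysis mentioned above reduces the imaged plume to a mass, a center of mass and a spatial covariance; the sketch below computes these moments for a hypothetical concentration field on a regular grid. Grid size, spacing and the synthetic plume are illustrative only.

```python
import numpy as np

# Minimal sketch: zeroth, first, and second spatial moments of an imaged 3-D
# tracer plume, i.e., total mass, center of mass, and spatial covariance.
# c is a hypothetical concentration (or ERT-derived) field on a regular grid.
nx, ny, nz, d = 40, 40, 20, 0.5                       # cells and spacing in m
x, y, z = np.meshgrid(np.arange(nx) * d, np.arange(ny) * d,
                      np.arange(nz) * d, indexing="ij")
c = np.exp(-(((x - 10) ** 2 + (y - 8) ** 2 + (z - 5) ** 2) / 6.0))  # synthetic plume

dV = d ** 3
m0 = c.sum() * dV                                      # zeroth moment ~ tracer mass
com = np.array([(c * q).sum() * dV for q in (x, y, z)]) / m0        # center of mass
cov = np.empty((3, 3))                                 # spatial variance tensor
for i, qi in enumerate((x, y, z)):
    for j, qj in enumerate((x, y, z)):
        cov[i, j] = (c * (qi - com[i]) * (qj - com[j])).sum() * dV / m0

print("mass:", m0, "\ncenter of mass:", com, "\nvariances:", np.diag(cov))
```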

  4. A Detailed Study of Sonar Tomographic Imaging

    DTIC Science & Technology

    2013-08-01

    BPA ) to form an object image. As the data is collected radially about the axis of rotation, one computation method computes an inverse Fourier...images are not quite as sharp. It is concluded that polar BPA processing requires an appropriate choice of...attenuation factor to reduce the effect of the specular reflections, while for the 2DIFT BPA approach the degrading effect from these reflections is

  5. Detection of anomalies in ocean acoustic velocity structure and their effect in sea-bottom crustal deformation measurement: synthetic test and future suggestion

    NASA Astrophysics Data System (ADS)

    Nagai, S.; Eto, S.; Tadokoro, K.; Watanabe, T.

    2011-12-01

    On-land geodetic observations are not sufficient to monitor crustal activity in and around subduction zones, so seafloor geodetic observations are required. However, the present accuracy of seafloor geodetic observation is of the order of 1 cm or larger, which makes it difficult to detect departures from plate motion over short time intervals, i.e., the plate coupling rate and its spatio-temporal variation. Our group has developed an observation system and methodology for seafloor geodesy that combines kinematic GPS and ocean acoustic ranging. One of the influencing factors is acoustic velocity change in the ocean, due to changes in temperature, ocean currents at different scales, and so on. A typical perturbation of acoustic velocity produces a travel-time difference of the order of 1 ms, which corresponds to a 1 m difference in ray length. We have investigated this effect in seafloor geodesy using both observed and synthetic data in order to reduce the estimation error of benchmark (transponder) positions and to develop our strategy for observation and analysis. In this paper, we focus on forward modeling of travel times of acoustic ranging data and on recovery tests using synthetic data, in comparison with observed results [Eto et al., 2011; in this meeting]. The estimation procedure for benchmark positions is similar to those used in earthquake location and seismic tomography, so we have applied methods from seismology, especially tomographic inversion. First, we use the method of one-dimensional velocity inversion with station corrections proposed by Kissling et al. [1994] to detect spatio-temporal changes in ocean acoustic velocity from data observed in the Suruga-Nankai Trough, Japan. From these analyses, some important features have been identified in the travel-time data [Eto et al., 2011]. Most of them can be explained by a small velocity anomaly at a depth of 300 m or shallower, through forward modeling of travel-time data using a simple velocity structure that includes a velocity anomaly. However, owing to the simple data acquisition procedure, we cannot precisely resolve velocity anomalies in space and time, that is, the size of an anomaly and its movement. As a next step, we demonstrate recovery of benchmark positions in a tomographic inversion using synthetic data that include anomalous travel times, in order to develop an approach for calculating benchmark positions with high accuracy. In the tomographic inversion, we introduce constraints corresponding to realistic conditions. This step provides a newly developed system for detecting crustal deformation in seafloor geodesy and new findings for understanding deformation in and around plate boundaries.
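
    A minimal sketch of the forward-modelling step discussed above, assuming a straight ray from a sea-surface transducer to a seafloor benchmark and a simple layered sound-speed profile: it compares travel times with and without a small shallow velocity anomaly. The profile, geometry and anomaly size are hypothetical.

```python
import numpy as np

# Minimal sketch (synthetic): travel time of a straight acoustic ray from a sea
# surface transducer to a seafloor benchmark through a layered sound-speed
# profile, with and without a shallow velocity anomaly. Values are illustrative.
def travel_time(depths, speeds, horizontal_offset, benchmark_depth):
    """Integrate dt = ds / c(z) along a straight ray to the benchmark."""
    path_per_depth = np.hypot(1.0, horizontal_offset / benchmark_depth)  # ds/dz
    dz = np.diff(depths)
    c_mid = 0.5 * (speeds[1:] + speeds[:-1])
    return np.sum(path_per_depth * dz / c_mid)

z = np.arange(0.0, 2000.0 + 1, 10.0)                  # depth in m
c0 = 1500.0 + 0.017 * z                                # simple background profile (m/s)
c1 = c0.copy()
c1[z <= 300.0] -= 2.0                                  # shallow anomaly of -2 m/s

t0 = travel_time(z, c0, horizontal_offset=1000.0, benchmark_depth=2000.0)
t1 = travel_time(z, c1, horizontal_offset=1000.0, benchmark_depth=2000.0)
print(f"travel-time change from the anomaly: {(t1 - t0) * 1e3:.3f} ms")
```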

  6. An improved interface to process GPR data by means of microwave tomography

    NASA Astrophysics Data System (ADS)

    Catapano, Ilaria; Affinito, Antonio; Soldovieri, Francesco

    2015-04-01

    Ground Penetrating Radar (GPR) systems are well-assessed non-invasive diagnostic tools that are worth considering in civil engineering surveys, since they allow information to be gathered on the construction materials and techniques of man-made structures as well as on the aging and risk factors affecting their health. However, the practical use of GPR depends strictly on the availability of data processing tools that are, on the one hand, capable of providing reliable and easily interpretable images of the probed scenarios and, on the other hand, easy for non-expert users to operate. In this frame, 2D and full 3D microwave tomographic approaches based on the Born approximation have been developed and proved to be effective in several practical conditions [1, 2]. Generally speaking, a GPR data processing chain exploiting microwave tomography consists of two main steps: pre-processing and data inversion. The pre-processing groups standard procedures such as start-time correction, muting and background removal, which are performed in the time domain to remove the direct antenna coupling, to reduce noise and to improve the target footprints. The data inversion treats the imaging as the solution of a linear inverse scattering problem in the frequency domain. Hence, a linear integral equation relating the scattered field (i.e. the data) to the unknown electric contrast function is solved by using the truncated Singular Value Decomposition (SVD) as a regularized inversion scheme. Pre-processing and data inversion are linked by a Discrete Fourier Transform (DFT), which allows the passage from the time domain to the frequency domain. In this respect, a frequency analysis of the GPR signals (traces) is also performed to identify the actual frequency range of the data. Unfortunately, the adoption of microwave tomography is strongly conditioned on the involvement of expert people capable of properly managing the processing chain. To overcome this drawback, a couple of years ago an end-user friendly software interface was developed to make possible a simple management of 2D microwave tomographic approaches [3]. The aim of this communication is to present a novel interface, which is a significantly improved version of the previous one. In particular, the new interface allows both 2D and full 3D imaging by taking as input GPR data gathered by means of different measurement configurations, i.e. by using down-looking systems, with the antenna located close to the air-medium interface or at a non-negligible distance from it (in terms of the probing wavelength), as well as by means of airborne and forward-looking systems. In this frame, the users can select the data format among those of the most common commercial GPR systems or process data gathered by means of GPR prototypes, provided that they are saved in ASCII format. Moreover, the users can perform all the steps needed to obtain tomographic images and select the Born approximation based approach most suitable for the adopted measurement configuration. Raw radargrams, intermediate and final results can be displayed for the users' convenience. REFERENCES [1] I. Catapano, R. Di Napoli, F. Soldovieri, M. Bavusi, A. Loperte, J. Dumoulin, "Structural monitoring via microwave tomography-enhanced GPR: the Montagnole test site", J. Geophys. Eng., 9, S100-S107, 2012. [2] I. Catapano, A. Affinito, G. Gennarelli, F. di Maio, A. Loperte, F. Soldovieri, "Full three-dimensional imaging via ground penetrating radar: assessment in controlled conditions and on field for archaeological prospecting", Appl. Phys. A, 2013, DOI 10.1007/s00339-013-8053-0. [3] I. Catapano, A. Affinito, F. Soldovieri, "A user friendly interface for microwave tomography enhanced GPR surveys", EGU General Assembly 2013, vol. 15.

  7. Optical tomographic imaging for breast cancer detection

    NASA Astrophysics Data System (ADS)

    Cong, Wenxiang; Intes, Xavier; Wang, Ge

    2017-09-01

    Diffuse optical breast imaging utilizes near-infrared (NIR) light propagation through tissue to assess the optical properties of tissue for the identification of abnormalities. This optical imaging approach is sensitive, cost-effective, and does not involve any ionizing radiation. However, the image reconstruction of diffuse optical tomography (DOT) is a nonlinear inverse problem and suffers from severe ill-posedness due to data noise, NIR light scattering, and measurement incompleteness. An image reconstruction method is proposed for the detection of breast cancer. This method splits the image reconstruction problem into the localization of abnormal tissues and the quantification of absorption variations. The localization of abnormal tissues is performed based on a well-posed optimization model, which can be solved via a differential evolution optimization method to achieve a stable reconstruction. The quantification of abnormal absorption is then determined in localized regions of relatively small extent, in which a potential tumor might be. Consequently, the number of unknown absorption variables can be greatly reduced to overcome the underdetermined nature of DOT. Numerical simulation experiments are performed to verify the merits of the proposed method, and the results show that the image reconstruction method is stable and accurate for the identification of abnormal tissues, and robust against measurement noise.
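
    A minimal sketch of the localization step only, with a toy one-parameter forward model (a Gaussian detector footprint) fitted by differential evolution; the real method uses a diffusion-based light propagation model, so everything below is a hypothetical stand-in used purely to illustrate the optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Minimal sketch of the localization step: fit the centre of a single absorbing
# inclusion to boundary measurements with a toy forward model (a Gaussian
# footprint on the detectors), solved by differential evolution.
detectors = np.linspace(0.0, 10.0, 32)                   # detector positions (cm)

def forward(center, width=2.0):
    return np.exp(-((detectors - center) ** 2) / (2 * width ** 2))

measured = forward(6.3) + 0.01 * np.random.default_rng(1).standard_normal(32)

result = differential_evolution(
    lambda p: np.sum((forward(p[0]) - measured) ** 2),   # data misfit
    bounds=[(0.0, 10.0)],                                # search range for the centre
    seed=1,
)
print("estimated inclusion centre:", result.x[0])
```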

  8. Identifying Aquifer Heterogeneities using the Level Set Method

    NASA Astrophysics Data System (ADS)

    Lu, Z.; Vesselinov, V. V.; Lei, H.

    2016-12-01

    Material interfaces between hydrostratigraphic units (HSUs) with contrasting aquifer parameters (e.g., strata and facies with different hydraulic conductivity) have a great impact on flow and contaminant transport in the subsurface. However, identifying the shape of HSUs in the subsurface is challenging and typically relies on tomographic approaches in which a series of steady-state/transient head measurements at spatially distributed observation locations are analyzed using inverse models. In this study, we developed a mathematically rigorous approach for identifying material interfaces among an arbitrary number of HSUs using the level set method. The approach was first tested with several synthetic cases, where the true spatial distribution of HSUs was assumed to be known and the head measurements were taken from a flow simulation with the true parameter fields. These synthetic inversion examples demonstrate that the level set method is capable of characterizing the spatial distribution of the heterogeneities. We then applied the methodology to a large-scale problem in which the spatial distribution of pumping wells and observation well screens is consistent with the actual aquifer contamination (chromium) site at the Los Alamos National Laboratory (LANL). In this way, we test the applicability of the methodology at an actual site, and we also present preliminary results using the actual LANL site data. In addition, we investigated the impact of the number of pumping/observation wells and the drawdown observation frequencies/intervals on the quality of the inversion results, examined the uncertainties associated with the estimated HSU shapes, and assessed the accuracy of the results under different hydraulic-conductivity contrasts between the HSUs.
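
    As a reminder of how a level-set interface is evolved, the sketch below performs one explicit update of a signed-distance function whose zero contour marks an HSU boundary, using a hypothetical constant interface speed; in the actual method the speed field would be driven by the sensitivity of the head misfit to the interface position.

```python
import numpy as np

# Minimal sketch: one explicit Euler step of the level-set equation
# phi_t + V |grad phi| = 0 on a 2-D grid. The speed V is an illustrative
# constant; central differences are used for simplicity (a production code
# would use an upwind scheme).
n, h, dt = 100, 1.0, 0.2
x, y = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
phi = np.hypot(x - 50, y - 50) - 15.0            # signed distance to a circular HSU

V = np.full_like(phi, 0.5)                       # outward interface speed (illustrative)
gx, gy = np.gradient(phi, h)                     # finite-difference gradient
phi_new = phi - dt * V * np.hypot(gx, gy)        # explicit level-set update

inside_before = int((phi < 0).sum())
inside_after = int((phi_new < 0).sum())
print("cells inside the unit:", inside_before, "->", inside_after)
```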

  9. Noniterative MAP reconstruction using sparse matrix representations.

    PubMed

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
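
    A minimal sketch of the underlying idea only: precompute the MAP reconstruction matrix offline, store a sparse approximation of it, and apply it noniteratively to new data. Naive thresholding stands in here for the paper's matrix source coding and sparse-matrix transform, and the forward model is a random stand-in.

```python
import numpy as np
from scipy import sparse

# Minimal sketch: precompute the (dense) MAP reconstruction matrix
# H = (A^T A + lam*I)^(-1) A^T offline, keep a thresholded sparse approximation,
# and apply it noniteratively to new data. Thresholding replaces the paper's
# matrix source coding / SMT machinery; A is a hypothetical forward model.
rng = np.random.default_rng(0)
A = rng.standard_normal((120, 400)) * 0.1
lam = 1.0
H = np.linalg.solve(A.T @ A + lam * np.eye(400), A.T)    # offline computation

H_sparse = sparse.csr_matrix(np.where(np.abs(H) > 0.02, H, 0.0))  # lossy storage
print("stored fraction of entries:", H_sparse.nnz / H.size)

y = A @ rng.standard_normal(400)                         # new measurement vector
x_map = H_sparse @ y                                     # noniterative reconstruction
```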

  10. Ambient-noise Tomography of the Southern California Lithosphere

    NASA Astrophysics Data System (ADS)

    Basini, P.; Liu, Q.; Tape, C.

    2012-12-01

    We exploit stacked ambient-noise cross-correlation functions (NCFs) to improve the 3-D velocity structure of the southern California crust and upper mantle. NCFs are extracted between pairs of seismic stations as approximations to 3-D Green's functions based on the assumption of diffuse wavefields. Thanks to the dense instrumental coverage in southern California, a large number (around 13,000) of NCFs are available, allowing us to reach an unprecedented imaging resolution. The 3-D crustal model m16 of Tape et al. (2009), which describes the detailed crustal variation of the southern California region, is incorporated into the starting model of our adjoint tomographic inversions. The use of a 3-D initial model helps reduce the nonlinearity of the inverse problem and the number of required iterations. We iteratively improve the velocity model by combining spectral-element (SEM) simulations of seismic wave propagation with Frechet derivatives computed by adjoint methods. The multi-taper traveltime misfit function, which quantifies the difference between NCFs (measured over windows of predominantly surface waves in the period range of 10-20 seconds) and 3-D Green's functions for the current model, also defines the adjoint sources that produce the necessary Frechet derivatives (sensitivity kernels) through an adjoint simulation. Interesting mantle heterogeneities are revealed owing to the improved depth resolution of surface waves. The quality of the inversion results may be assessed through the misfit between NCFs and Green's functions for the final model in terms of traveltime, amplitude, and full waveform. An independent set of earthquake data and synthetics may also be introduced to verify the final model.

  11. Optical tomographic memories: algorithms for the efficient information readout

    NASA Astrophysics Data System (ADS)

    Pantelic, Dejan V.

    1990-07-01

    Tomographic algorithms are modified in order to reconstruct information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the position of the bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES Tomographic principles can be used to store and reconstruct information artificially stored in the bulk of a photosensitive medium. The information is stored by changing some characteristics of the memory material (e.g. refractive index). Radiation from two independent light sources (e.g. lasers) is focused inside the memory material. In this way the intensity of the light is above the threshold only at the localized point where the light rays intersect. By scanning the material, the information can be stored in binary or n-ary format. Once the information is stored, it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem. Here a lot of a priori information is present regarding the positions of the bits of information, the profile representing a single bit and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF THE TOMOGRAPHIC MEMORIES A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for the information readout, together with simulation results, will be presented. Special attention will be given to noise considerations. Two different

  12. Monte Carlo based method for fluorescence tomographic imaging with lifetime multiplexing using time gates

    PubMed Central

    Chen, Jin; Venugopal, Vivek; Intes, Xavier

    2011-01-01

    Time-resolved fluorescence optical tomography allows three-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiply scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™800CW, respectively, embedded in a diffuse medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610

  13. Computerized tomographic quantification of chronic obstructive pulmonary disease as the principal determinant of frontal P vector.

    PubMed

    Chhabra, Lovely; Sareen, Pooja; Gandagule, Amit; Spodick, David

    2012-04-01

    Verticalization of the P-wave axis is characteristic of chronic obstructive pulmonary disease (COPD). We studied the correlation of P-wave axis and computerized tomographically quantified emphysema in patients with COPD/emphysema. Individual correlation of P-wave axis with different structural types of emphysema was also studied. High-resolution computerized tomographic scans of 23 patients >45 years old with known COPD were reviewed to assess the type and extent of emphysema using computerized tomographic densitometric parameters. Electrocardiograms were then independently reviewed and the P-wave axis was calculated in customary fashion. Degree of the P vector (DOPV) and radiographic percent emphysematous area (RPEA) were compared for statistical correlation. The P vector and RPEA were also directly compared to the forced expiratory volume at 1 second. RPEA and the P vector had a significant positive correlation in all patients (r = +0.77, p <0.0001) but correlation was very strong in patients with predominant lower lobe emphysema (r = +0.89, p <0.001). Forced expiratory volume at 1 second and the P vector had almost a linear inverse correlation in predominantly lower lobe emphysema (r = -0.92, p <0.001). DOPV positively correlated with radiographically quantified emphysema. DOPV and RPEA were strong predictors of qualitative lung function in patients with predominantly lower lobe emphysema. In conclusion, a combination of high DOPV and predominantly lower lobe emphysema indicates severe obstructive lung dysfunction in patients with COPD. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. A user friendly interface for microwave tomography enhanced GPR surveys

    NASA Astrophysics Data System (ADS)

    Catapano, Ilaria; Affinito, Antonio; Soldovieri, Francesco

    2013-04-01

    Ground Penetrating Radar (GPR) systems are nowadays widely used in civil applications, among which structural monitoring is one of the most critical issues due to its importance in terms of risk prevention and cost-effective management of the structure itself. Although GPR systems are well-assessed devices, there is continuous interest in their optimization, involving both hardware and software aspects, with the common goal of achieving accurate and highly informative images while keeping the difficulty and time involved in field surveys as low as possible. As far as data processing is concerned, one of the key aims is the development of imaging approaches capable of providing images easily interpretable by non-expert users while keeping the requirements in terms of computational resources feasible. To satisfy this request, or at least to improve the reconstruction capabilities of the data processing tools currently available in commercial GPR systems, microwave tomographic approaches based on the Born approximation have been developed and tested in several practical conditions, such as civil and archaeological investigations, sub-service monitoring, security surveys and so on [1-3]. However, the adoption of these approaches requires the involvement of expert operators, who have to be capable of properly managing the gathered data and their processing, which involves the solution of a linear inverse scattering problem. In order to overcome this drawback, the aim of this contribution is to present an end-user friendly software interface that makes possible a simple management of the microwave tomographic approaches. In particular, the proposed interface allows us to upload both synthetic and experimental data sets saved in .txt, .dt and .dt1 formats, to perform all the steps needed to obtain tomographic images, and to display raw radargrams, intermediate and final results. By means of the interface, the users can apply time gating, background removal or both to extract the meaningful signal from the gathered data; they can process the full set of gathered A-scans or select a portion of them, and they can choose to account for an arbitrary time window within that adopted during the measurement stage. Finally, the interface allows the imaging to be performed according to two different tomographic approaches, both modeling the scattering phenomenon according to the Born approximation and looking for cylindrical objects of arbitrary cross-section (2D geometry) probed by an incident field polarized along the invariance axis (scalar case). One approach is based on the assumption that the scattering phenomenon arises in a homogeneous medium, while the second accounts for the presence of a flat air-medium interface. REFERENCES [1] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surf. Geophys., vol. 5, pp. 29-42, 2007. [2] R. Persico, F. Soldovieri, E. Utsi, "Microwave tomography for processing of GPR data at Ballachulish", J. Geophys. and Eng., vol. 7, pp. 164-173, 2010. [3] I. Catapano, L. Crocco, R. Di Napoli, F. Soldovieri, A. Brancaccio, F. Pesando, A. Aiello, "Microwave tomography enhanced GPR surveys in Centaur's Domus, Regio VI of Pompeii, Italy", J. Geophys. Eng., vol. 9, S92-S99, 2012.

  15. Full-Wave Tomographic and Moment Tensor Inversion Based on 3D Multigrid Strain Green’s Tensor Databases

    DTIC Science & Technology

    2014-04-30

    grade metamorphic rocks on the southern slope of the Himalaya is imaged as a band of high velocity anomaly...velocity structures closely follow the geological features. As an indication of resolution, the ductile extrusion of high-grade metamorphic rocks on...

  16. On the use of variable coherence in inverse scattering problems

    NASA Astrophysics Data System (ADS)

    Baleine, Erwan

    Even though most of the properties of optical fields, such as wavelength, polarization, wavefront curvature or angular spectrum, have been commonly manipulated in a variety of remote sensing procedures, controlling the degree of coherence of light did not find wide applications until recently. Since the emergence of optical coherence tomography, a growing number of scattering techniques have relied on temporal coherence gating which provides efficient target selectivity in a way achieved only by bulky short pulse measurements. The spatial counterpart of temporal coherence, however, has barely been exploited in sensing applications. This dissertation examines, in different scattering regimes, a variety of inverse scattering problems based on variable spatial coherence gating. Within the framework of the radiative transfer theory, this dissertation demonstrates that the short range correlation properties of a medium under test can be recovered by varying the size of the coherence volume of an illuminating beam. Nonetheless, the radiative transfer formalism does not account for long range correlations and current methods for retrieving the correlation function of the complex susceptibility require cumbersome cross-spectral density measurements. Instead, a variable coherence tomographic procedure is proposed where spatial coherence gating is used to probe the structural properties of single scattering media over an extended volume and with a very simple detection system. Enhanced backscattering is a coherent phenomenon that survives strong multiple scattering. The variable coherence tomography approach is extended in this context to diffusive media and it is demonstrated that specific photon trajectories can be selected in order to achieve depth-resolved sensing. Probing the scattering properties of shallow and deeper layers is of considerable interest in biological applications such as diagnosis of skin related diseases. The spatial coherence properties of an illuminating field can be manipulated over dimensions much larger than the wavelength thus providing a large effective sensing area. This is a practical advantage over many near-field microscopic techniques, which offer a spatial resolution beyond the classical diffraction limit but, at the expense of scanning a probe over a large area of a sample which is time consuming, and, sometimes, practically impossible. Taking advantage of the large field of view accessible when using the spatial coherence gating, this dissertation introduces the principle of variable coherence scattering microscopy. In this approach, a subwavelength resolution is achieved from simple far-zone intensity measurements by shaping the degree of spatial coherence of an evanescent field. Furthermore, tomographic techniques based on spatial coherence gating are especially attractive because they rely on simple detection schemes which, in principle, do not require any optical elements such as lenses. To demonstrate this capability, a correlated lensless imaging method is proposed and implemented, where both amplitude and phase information of an object are obtained by varying the degree of spatial coherence of the incident beam. Finally, it should be noted that the idea of using the spatial coherence properties of fields in a tomographic procedure is applicable to any type of electromagnetic radiation. 
Operating on principles of statistical optics, these sensing procedures can become alternatives for various target detection schemes, cutting-edge microscopies or x-ray imaging methods.

  17. 3D Cosmic Ray Muon Tomography from an Underground Tunnel

    DOE PAGES

    Guardincerri, Elena; Rowe, Charlotte Anne; Schultz-Fellenz, Emily S.; ...

    2017-03-31

    Here, we present an underground cosmic ray muon tomographic experiment imaging 3D density of overburden, part of a joint study with differential gravity. Muon data were acquired at four locations within a tunnel beneath Los Alamos, New Mexico, and used in a 3D tomographic inversion to recover the spatial variation in the overlying rock–air interface, and compared with a priori knowledge of the topography. Densities obtained exhibit good agreement with preliminary results of the gravity modeling, which will be presented elsewhere, and are compatible with values reported in the literature. The modeled rock–air interface matches that obtained from LIDAR within 4 m, our resolution, over much of the model volume. This experiment demonstrates the power of cosmic ray muons to image shallow geological targets using underground detectors, whose development as borehole devices will be an important new direction of passive geophysical imaging.

  18. 3D Cosmic Ray Muon Tomography from an Underground Tunnel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guardincerri, Elena; Rowe, Charlotte Anne; Schultz-Fellenz, Emily S.

    Here, we present an underground cosmic ray muon tomographic experiment imaging 3D density of overburden, part of a joint study with differential gravity. Muon data were acquired at four locations within a tunnel beneath Los Alamos, New Mexico, and used in a 3D tomographic inversion to recover the spatial variation in the overlying rock–air interface, and compared with a priori knowledge of the topography. Densities obtained exhibit good agreement with preliminary results of the gravity modeling, which will be presented elsewhere, and are compatible with values reported in the literature. The modeled rock–air interface matches that obtained from LIDAR within 4 m, our resolution, over much of the model volume. This experiment demonstrates the power of cosmic ray muons to image shallow geological targets using underground detectors, whose development as borehole devices will be an important new direction of passive geophysical imaging.

  19. 3D Cosmic Ray Muon Tomography from an Underground Tunnel

    NASA Astrophysics Data System (ADS)

    Guardincerri, Elena; Rowe, Charlotte; Schultz-Fellenz, Emily; Roy, Mousumi; George, Nicolas; Morris, Christopher; Bacon, Jeffrey; Durham, Matthew; Morley, Deborah; Plaud-Ramos, Kenie; Poulson, Daniel; Baker, Diane; Bonneville, Alain; Kouzes, Richard

    2017-05-01

    We present an underground cosmic ray muon tomographic experiment imaging 3D density of overburden, part of a joint study with differential gravity. Muon data were acquired at four locations within a tunnel beneath Los Alamos, New Mexico, and used in a 3D tomographic inversion to recover the spatial variation in the overlying rock-air interface, and compared with a priori knowledge of the topography. Densities obtained exhibit good agreement with preliminary results of the gravity modeling, which will be presented elsewhere, and are compatible with values reported in the literature. The modeled rock-air interface matches that obtained from LIDAR within 4 m, our resolution, over much of the model volume. This experiment demonstrates the power of cosmic ray muons to image shallow geological targets using underground detectors, whose development as borehole devices will be an important new direction of passive geophysical imaging.

  20. Applications of Collisional Radiative Modeling of Helium and Deuterium for Image Tomography Diagnostic of Te, Ne, and ND in the DIII-D Tokamak

    NASA Astrophysics Data System (ADS)

    Munoz Burgos, J. M.; Brooks, N. H.; Fenstermacher, M. E.; Meyer, W. H.; Unterberg, E. A.; Schmitz, O.; Loch, S. D.; Balance, C. P.

    2011-10-01

    We apply new atomic modeling techniques to helium and deuterium for diagnostics in the divertor and scrape-off layer regions. Analysis of tomographically inverted images is useful for validating detachment prediction models and power balances in the divertor. We apply tomographic image inversion from fast tangential cameras of helium and Dα emission at the divertor in order to obtain 2D profiles of Te, Ne, and ND (neutral deuterium density). The accuracy of the atomic models for He I will be cross-checked against Thomson scattering measurements of Te and Ne. This work summarizes several current developments and applications of atomic modeling for diagnostics at the DIII-D tokamak. Supported in part by the US DOE under DE-AC05-06OR23100, DE-FC02-04ER54698, DE-AC52-07NA27344, and DE-AC05-00OR22725.

  1. Development of a GNSS water vapour tomography system using algebraic reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Bender, Michael; Dick, Galina; Ge, Maorong; Deng, Zhiguo; Wickert, Jens; Kahle, Hans-Gert; Raabe, Armin; Tetzlaff, Gerd

    2011-05-01

    A GNSS water vapour tomography system developed to reconstruct spatially resolved humidity fields in the troposphere is described. The tomography system was designed to process the slant path delays of about 270 German GNSS stations in near real-time with a temporal resolution of 30 min, a horizontal resolution of 40 km and a vertical resolution of 500 m or better. After a short introduction to the GPS slant delay processing, the framework of the GNSS tomography is described in detail. Different implementations of the iterative algebraic reconstruction techniques (ART) used to solve the linear inverse problem are discussed. It was found that the multiplicative technique (MART) provides the best results with the least processing time, i.e., a tomographic reconstruction of about 26,000 slant delays on an 8280-cell grid can be obtained in less than 10 min. Different iterative reconstruction techniques are compared with respect to their convergence behaviour and some numerical parameters. The inversion can be considerably stabilized by using additional non-GNSS observations and implementing various constraints. Different strategies for initialising the tomography and utilizing extra information are discussed. Finally, an example of a reconstructed field of the wet refractivity is presented and compared to the corresponding distribution of the integrated water vapour, an analysis of a numerical weather model (COSMO-DE) and some radiosonde profiles.
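
    A minimal sketch of a multiplicative ART (MART) iteration for a small synthetic system, with the path-length matrix, relaxation parameter and grid size chosen arbitrarily for illustration; it is not the operational implementation described above.

```python
import numpy as np

# Minimal sketch of MART (multiplicative ART) for a small linear tomography
# problem A @ x = y, where rows of A hold the lengths of each slant path in
# every grid cell and x is the wet refractivity / electron density field.
def mart(A, y, n_iter=20, relax=0.2, x0=None):
    x = np.ones(A.shape[1]) if x0 is None else x0.copy()   # positive initial field
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj <= 0:
                continue
            # multiplicative correction keeps the solution positive
            x *= (y[i] / proj) ** (relax * A[i] / A[i].max())
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 64)) * (rng.random((200, 64)) > 0.7)   # sparse path-length matrix
x_true = rng.random(64) + 0.5
y = A @ x_true
x_est = mart(A, y)
print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```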

  2. Ultrasonic multi-skip tomography for pipe inspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volker, Arno; Zon, Tim van

    The inspection of wall loss corrosion is difficult at pipe supports due to limited accessibility. The recently developed ultrasonic Multi-Skip screening technique is suitable for this problem. The method employs ultrasonic transducers in a pitch-catch geometry positioned on opposite sides of the pipe support. Shear waves are transmitted in the axial direction within the pipe wall, reflecting multiple times between the inner and outer surfaces before reaching the receivers. Along this path, the signals accumulate information on the integral wall thickness (e.g., via variations in travel time). The method is very sensitive in detecting the presence of wall loss, but it is difficult to quantify both the extent and depth of the loss. Multi-skip tomography has been developed to reconstruct the wall thickness profile along the axial direction of the pipe. The method uses model-based full wave field inversion; this consists of a forward model for predicting the measured wave field and an iterative process that compares the predicted and measured wave fields and minimizes the differences with respect to the model parameters (i.e., the wall thickness profile). Experimental results are very encouraging. Various defects (slot and flat bottom hole) are reconstructed using the tomographic inversion. The general shape and width are well recovered. The current sizing accuracy is in the order of 1 mm.

  3. Seismic structure of the upper crust in the Albertine Rift from travel-time and ambient-noise tomography - a comparison

    NASA Astrophysics Data System (ADS)

    Jakovlev, Andrey; Kaviani, Ayoub; Ruempker, Georg

    2017-04-01

    Here we present results of an investigation of the upper crust in the Albertine rift around the Rwenzori Mountains. We use a data set collected from a temporary network of 33 broadband stations operated by the RiftLink research group between September 2009 and August 2011. During this period, 82639 P-wave and 73408 S-wave travel times from 12419 local and regional earthquakes were recorded. This presents a very rare opportunity to apply both local travel-time and ambient-noise tomography to data from the same network. For the local travel-time tomographic inversion the LOTOS algorithm (Koulakov, 2009) was used. The algorithm performs iterative simultaneous inversions for 3D models of P- and S-velocity anomalies in combination with earthquake locations and origin times. 28955 P- and S-wave picks from 2769 local earthquakes were used. To estimate the resolution and stability of the results, a number of synthetic and real data tests were performed. For the ambient-noise tomography we use the following procedure. First, we follow the standard procedure described by Bensen et al. (2007), as modified by Boué et al. (2014), to compute the vertical-component cross-correlation functions between all pairs of stations. We also adapted the algorithm introduced by Boué et al. (2014) and use the WHISPER software package (Briand et al., 2013) to preprocess individual daily vertical-component waveforms. In the next step, for each period, we use the method of Barmin et al. (2001) to invert the dispersion measurements along each path for group velocity tomographic maps. Finally, we adapt a modified version of the algorithm suggested by Macquet et al. (2014) to invert the group velocity maps for shear velocity structure. We apply several tests, which show that the best resolution is obtained at a period of 8 seconds, corresponding to a depth of approximately 6 km. Models of the seismic structure obtained by the two methods agree well at shallow depths of about 5 km. Low velocities surround the mountain range on the western and southern sides and coincide with the location of the rift valley. The Rwenzori Mountains themselves and the eastern rift shoulder are represented by increased velocities. At greater depths of 10-15 km some differences between the models are observed. Beneath the Rwenzoris the travel-time tomography shows low S-velocities, whereas the ambient-noise tomography exhibits high S-velocities. This can possibly be explained by the fact that the ambient-noise tomography is characterized by higher vertical resolution; also, the number of rays used for the tomographic inversion in the ambient-noise tomography is significantly smaller. This study was partly supported by the Russian Science Foundation grant #14-17-00430. References: Barmin, M.P., Ritzwoller, M.H. & Levshin, A.L., 2001. A fast and reliable method for surface wave tomography, Pure Appl. Geophys., 158, 1351-1375. Bensen, G.D., Ritzwoller, M.H., Barmin, M.P., Levshin, A.L., Lin, F., Moschetti, M.P., Shapiro, N.M., Yang, Y., 2007. Processing seismic ambient noise data to obtain reliable broad-band surface wave dispersion measurements, Geophys. J. Int., 169, 1239-1260, doi:10.1111/j.1365-246X.2007.03374.x. Boué, P., Poli, P., Campillo, M., Roux, P., 2014. Reverberations, coda waves and ambient noise: correlations at the global scale and retrieval of the deep phases, Earth Planet. Sci. Lett., 391, 137-145. Briand, X., Campillo, M., Brenguier, F., Boué, P., Poli, P., Roux, P., Takeda, T., 2013. Processing of terabytes of data for seismic noise analysis with the Python codes of the Whisper Suite, AGU Fall Meeting, San Francisco, CA, 9-13 December, Abstract IN51B-1544. Koulakov, I., 2009. LOTOS code for local earthquake tomographic inversion. Benchmarks for testing tomographic algorithms, Bull. Seismol. Soc. Am., 99, 194-214, doi:10.1785/0120080013.

  4. First results from a full-waveform inversion of the African continent using Salvus

    NASA Astrophysics Data System (ADS)

    van Herwaarden, D. P.; Afanasiev, M.; Krischer, L.; Trampert, J.; Fichtner, A.

    2017-12-01

    We present the initial results from an elastic full-waveform inversion (FWI) of the African continent which is melded together within the framework of the Collaborative Seismic Earth Model (CSEM) project. The continent of Africa is one of the most geophysically interesting regions on the planet. More specifically, Africa contains the Afar Depression, which is the only place on Earth where incipient seafloor spreading is sub-aerially exposed, along with other anomalous features such as the topography in the south, and several smaller surface expressions such as the Cameroon Volcanic Line and Congo Basin. Despite its significance, relatively few tomographic images exist of Africa, and, as a result, the debate on the geophysical origins of Africa's anomalies is rich and ongoing. Tomographic images of Africa present unique challenges due to uneven station coverage: while tectonically active areas such as the Afar rift are well sampled, much of the continent exhibits a severe lack of seismic stations. And, while Africa is mostly surrounded by tectonically active spreading plate boundaries, the interior of the continent is seismically quiet. To mitigate such issues, our simulation domain is extended to include earthquakes occurring in the South Atlantic and along the western edge of South America. Waveform modelling and inversion is performed using Salvus, a flexible and high-performance software suite based on the spectral-element method. Recently acquired recordings from the AfricaArray and NARS seismic networks are used to complement data obtained from global networks. We hope that this new model presents a fresh high-resolution image of African geodynamic structure, and helps advance the debate regarding the causative mechanisms of its surface anomalies.

  5. Tomographic Imaging of the Sun's Interior

    NASA Technical Reports Server (NTRS)

    Kosovichev, A. G.

    1996-01-01

    A new method is presented of determining the three-dimensional sound-speed structure and flow velocities in the solar convection zone by inversion of the acoustic travel-time data recently obtained by Duvall and coworkers. The initial inversion results reveal large-scale subsurface structures and flows related to the active regions, and are important for understanding the physics of solar activity and large-scale convection. The results provide evidence of a zonal structure below the surface in the low-latitude area of the magnetic activity. Strong converging downflows, up to 1.2 km/s, and a substantial excess of the sound speed are found beneath growing active regions. In a decaying active region, there is evidence for the lower than average sound speed and for upwelling of plasma.

  6. Comparison of three methods of solution to the inverse problem of groundwater hydrology for multiple pumping stimulation

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro

    2015-04-01

    The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T ) from measurements of hydraulic head (h), estimates or measures of source terms and with the least possible knowledge on hydraulic transmissivity. The theory of inverse problems shows that this is an example of ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One of the methods to reduce the problems with non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple sets of data: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h all over the domain and thus the first step for their application is the interpolation of measurements of h at sparse points. Moreover, they also need the knowledge of the source terms (aquifer recharge, well pumping rates) all over the aquifer. DSM is intrinsically based on the use of multiple data sets, which permit to write a first-order partial differential equation for T , whereas CMM and DCM were originally proposed to invert a single data set and have been extended to work with multiple data sets in this work. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with ratio of the observed hydraulic gradient and that obtained with a comparison model which shares the same boundary conditions and source terms as the model to be calibrated, but a tentative T field. On the other hand the DCM algorithm applies the ratio of the hydraulic gradients obtained for two different forward models, one with the same boundary conditions and source terms as the model to be calibrated and the other one with prescribed head at the positions where in- or out-flow is known and h is measured. For DCM and CMM, multiple stimulation is used by updating the T field separately for each data set and then combining the resulting updated fields with different possible statistics (arithmetic, geometric or harmonic mean, median, least change, etc.). The three algorithms are tested and their characteristics and results are compared with a field data set, which was provided by prof. Fritz Stauffer (ETH) and corresponding to a pumping test in a thin alluvial aquifer in northern Switzerland. Three data sets are available and correspond to the undisturbed state, to the flow field created by a single pumping well and to the situation created by an 'hydraulic dipole', i.e., an extraction and an injection wells. These data sets permit to test the three inverse methods and the different options which can be chosen for their use.

  7. Limited data tomographic image reconstruction via dual formulation of total variation minimization

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong

    2011-03-01

    X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows the visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descent step, the data fidelity term is updated in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include the TV regularization in the statistical reconstruction method, which results in a fast and robust estimation for low-dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our finding.
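
    The record above builds on the dual formulation of TV minimization; the sketch below implements a Chambolle-type dual iteration for plain TV denoising, which is the building block referred to, applied to a synthetic noisy image. A full tomosynthesis reconstruction would replace the quadratic fidelity term with the statistical data term; everything here is illustrative.

```python
import numpy as np

# Minimal sketch of the dual (Chambolle-type) iteration for TV denoising,
# min_u ||u - f||^2 / (2*lam) + TV(u), using forward-difference gradients and
# the matching (negative adjoint) divergence.
def grad(u):
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def div(px, py):
    dx = np.diff(px, axis=0, prepend=np.zeros((1, px.shape[1])))
    dy = np.diff(py, axis=1, prepend=np.zeros((py.shape[0], 1)))
    return dx + dy

def tv_denoise_dual(f, lam=0.1, tau=0.125, n_iter=100):
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.hypot(gx, gy)      # keeps the dual variable in the unit ball
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

noisy = np.pad(np.ones((32, 32)), 16) + 0.2 * np.random.default_rng(0).standard_normal((64, 64))
clean = tv_denoise_dual(noisy, lam=0.2)
print("residual std:", float((clean - noisy).std()))
```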

  8. Seismic Calibration of Group 1 IMS Stations in Eastern Asia for Improved IDC Event Location

    DTIC Science & Technology

    2006-04-01

    database has been assembled and delivered to the SMR (formerly CMR) Research and Development Support Services (RDSS) data archive. This database ...Data used in these tomographic inversions have been collected into a uniform database and delivered to the RDSS at the SMR. Extensive testing of these...complex 3-D velocity models is based on a finite difference approximation to the eikonal equation developed by Podvin and Lecomte (1991) and

  9. An observational study on the Strength and Movement of EIA in the Indian zone - Results from the Indian Tomography Experiment (CRABEX)

    NASA Astrophysics Data System (ADS)

    Thampi, S. V.; Devasia, C. V.; Ravindran, S.; Pant, T. K.; Sridharan, R.

    To investigate equatorial ionospheric processes such as the Equatorial Ionization Anomaly (EIA) and Equatorial Spread F, and their interrelationships, a network of five stations receiving the 150 and 400 MHz transmissions from Low Earth Orbiting Satellites (LEOs), covering the region from Trivandrum (8.5°N, Dip ˜0.3°N) to New Delhi (28°N, Dip ˜20°N), has been set up along the 77-78°E longitude. The receivers measure the relative phase of 150 MHz with respect to 400 MHz, which is proportional to the slant relative Total Electron Content (TEC) along the line of sight. These simultaneous TEC measurements are inverted to obtain the tomographic image of the latitudinal distribution of electron densities in the meridional plane. The inversion is done using the Algebraic Reconstruction Technique (ART). In this paper, tomographic images of the equatorial ionosphere along the 77-78°E meridian are presented. The images indicate the movement of the anomaly crest, as well as the strength of the EIA at various local times, which in turn control the overall electrodynamics of the evening-time ionosphere, favoring the occurrence of Equatorial Spread F (ESF) irregularities. These features are discussed in detail under varying geophysical conditions. The results of a sensitivity analysis of the inversion algorithm using model ionospheres are also presented.

  10. Preliminary result of P-wave speed tomography beneath North Sumatera region

    NASA Astrophysics Data System (ADS)

    Jatnika, Jajat; Nugraha, Andri Dian; Wandono

    2015-04-01

    The structure of P-wave speed beneath the North Sumatra region was determined using P-wave arrival times compiled by MCGA for the period January 2009 to December 2012, combined with PASSCAL data for February to May 1995. In total, there are 2,246 local earthquake events with 10,666 P-wave phases from 63 seismic stations around the study area. Ray tracing to estimate travel times from source to receiver was carried out by applying the pseudo-bending method, while the damped LSQR method was used for the tomographic inversion. Based on an assessment of ray coverage and the distribution of earthquakes and stations, horizontal grid nodes were set up with a spacing of 30×30 km2 inside the study area and 80×80 km2 outside the study area. The tomographic inversion results show low Vp anomalies beneath the Toba caldera complex region and around the Sumatra Fault Zone (SFZ). These features are consistent with previous studies. The low Vp anomalies beneath the Toba caldera complex are observed around Mt. Pusuk Bukit at depths of 5 km down to 100 km. These anomalies may be associated with ascending hot materials from subduction processes at depths of 80 km down to 100 km. The obtained Vp structure from local tomography will give valuable information to enhance understanding of tectonics and volcanism in this study area.

  11. Preliminary result of P-wave speed tomography beneath North Sumatera region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jatnika, Jajat; Indonesian Meteorological, Climatological and Geophysical Agency; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    2015-04-24

    The structure of P-wave speed beneath the North Sumatra region was determined using P-wave arrival times compiled by MCGA for the period January 2009 to December 2012, combined with PASSCAL data from February to May 1995. In total, there are 2,246 local earthquake events with 10,666 P-wave phases from 63 seismic stations around the study area. Ray tracing to estimate travel times from source to receiver was carried out with the pseudo-bending method, while the damped LSQR method was used for the tomographic inversion. Based on an assessment of ray coverage and the distribution of earthquakes and stations, the horizontal grid node spacing was set to 30×30 km² inside the study area and 80×80 km² outside it. The tomographic inversion results show low Vp anomalies beneath the Toba caldera complex region and around the Sumatra Fault Zone (SFZ). These features are consistent with previous studies. The low Vp anomalies beneath the Toba caldera complex are observed around Mt. Pusuk Bukit at depths of 5 km down to 100 km. These anomalies may be associated with hot material ascending from subduction processes at depths of 80 km down to 100 km. The Vp structure obtained from local tomography provides valuable information for enhancing the understanding of tectonic and volcanic processes in the study area.

  12. Evidence for the contemporary magmatic system beneath Long Valley Caldera from local earthquake tomography and receiver function analysis

    USGS Publications Warehouse

    Seccia, D.; Chiarabba, C.; De Gori, P.; Bianchi, I.; Hill, D.P.

    2011-01-01

    We present a new P wave and S wave velocity model for the upper crust beneath Long Valley Caldera obtained using local earthquake tomography and receiver function analysis. We computed the tomographic model using both a graded inversion scheme and a traditional approach. We complement the tomographic Vp model with a teleseismic receiver function model based on data from broadband seismic stations (MLAC and MKV) located on the SE and SW margins of the resurgent dome inside the caldera. The inversions resolve (1) a shallow, high-velocity P wave anomaly associated with the structural uplift of the resurgent dome; (2) an elongated, WNW-striking low-velocity anomaly (8-10% reduction in Vp) at a depth of 6 km (4 km below mean sea level) beneath the southern section of the resurgent dome; and (3) a broad, low-velocity volume (~5% reduction in Vp and as much as 40% reduction in Vs) in the depth interval 8-14 km (6-12 km below mean sea level) beneath the central section of the caldera. The two low-velocity volumes partially overlap the geodetically inferred inflation sources that drove uplift of the resurgent dome associated with caldera unrest between 1980 and 2000, and they likely reflect the ascent path for magma or magmatic fluids into the upper crust beneath the caldera.

  13. Bayesian ionospheric multi-instrument 3D tomography

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Vierinen, Juha; Roininen, Lassi

    2017-04-01

    The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong regularising additional information. The vertical electron density profile, in particular, is determined predominantly by the regularisation. Often-utilised regularisations in ionospheric tomography include smoothness constraints and iterative methods with initial ionospheric models. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical understanding. The Bayesian methodology provides an interpretative approach to the problem, as the regularisation can be given as a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics and on other available ionospheric measurements and their statistics. Updating the prior with measurements yields the posterior distribution, which carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved together with the corresponding probability intervals. Altogether, the Bayesian methodology provides an understanding of how strong the given regularisation is, what information is gained from the measurements and how reliable the final result is. In addition, the combination of different measurements and the temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires inversion of large covariance matrices, resulting in computational infeasibility. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations of the covariances. This makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe. Multi-instrument measurements are utilised from the TomoScand receiver network for low Earth orbit beacon satellite signals, from GNSS receiver networks, and from EISCAT ionosondes and incoherent scatter radars. The performance is demonstrated in the three-dimensional spatial domain with the temporal development also taken into account.
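
    The computational point can be illustrated with a small sketch: for a linear Gaussian model the MAP estimate only requires solving a sparse system when the prior is expressed through a GMRF precision matrix. The 1-D second-difference prior, noise level and scaling below are illustrative assumptions, not the TomoScand implementation.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def gmrf_map(A, y, n, noise_std=1.0, prior_scale=10.0):
            """MAP estimate for y = A x + noise with a Gaussian Markov random field
            prior given by a sparse precision matrix (1-D second differences here;
            a 3-D tomography would use a 3-D neighbourhood structure)."""
            D = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
            Q = prior_scale * (D.T @ D) + 1e-6 * sp.eye(n)      # sparse prior precision
            A = sp.csr_matrix(A)
            post_precision = (A.T @ A) / noise_std**2 + Q       # posterior precision stays sparse
            rhs = (A.T @ y) / noise_std**2
            return spla.spsolve(post_precision.tocsc(), rhs)

    Because the posterior precision, rather than the covariance, is assembled, the cost of the solve scales with the sparsity of the neighbourhood structure instead of with a dense covariance matrix.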

  14. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

    This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be resolved uniquely only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce the desired results, or may even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.

  15. Seismic imaging: From classical to adjoint tomography

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Gu, Y. J.

    2012-09-01

    Seismic tomography has been a vital tool in probing the Earth's internal structure and enhancing our knowledge of dynamical processes in the Earth's crust and mantle. While various tomographic techniques differ in data types utilized (e.g., body vs. surface waves), data sensitivity (ray vs. finite-frequency approximations), and choices of model parameterization and regularization, most global mantle tomographic models agree well at long wavelengths, owing to the presence and typical dimensions of cold subducted oceanic lithospheres and hot, ascending mantle plumes (e.g., in central Pacific and Africa). Structures at relatively small length scales remain controversial, though, as will be discussed in this paper, they are becoming increasingly resolvable with the fast expanding global and regional seismic networks and improved forward modeling and inversion techniques. This review paper aims to provide an overview of classical tomography methods, key debates pertaining to the resolution of mantle tomographic models, as well as to highlight recent theoretical and computational advances in forward-modeling methods that spearheaded the developments in accurate computation of sensitivity kernels and adjoint tomography. The first part of the paper is devoted to traditional traveltime and waveform tomography. While these approaches established a firm foundation for global and regional seismic tomography, data coverage and the use of approximate sensitivity kernels remained as key limiting factors in the resolution of the targeted structures. In comparison to classical tomography, adjoint tomography takes advantage of full 3D numerical simulations in forward modeling and, in many ways, revolutionizes the seismic imaging of heterogeneous structures with strong velocity contrasts. For this reason, this review provides details of the implementation, resolution and potential challenges of adjoint tomography. Further discussions of techniques that are presently popular in seismic array analysis, such as noise correlation functions, receiver functions, inverse scattering imaging, and the adaptation of adjoint tomography to these different datasets highlight the promising future of seismic tomography.

  16. Wide angle reflection effects on the uncertainty in layered models travel times tomography

    NASA Astrophysics Data System (ADS)

    Majdanski, Mariusz; Bialas, Sebastian; Trzeciak, Maciej; Gaczyński, Edward; Maksym, Andrzej

    2015-04-01

    Multi-phase layered-model traveltime tomography inversions can be realised in several ways depending on the inversion path. Inverting the shape of the boundaries based on reflection data and the velocity field based on refractions can be done jointly or sequentially. We analyse an optimal inversion path based on the uncertainty analysis of the final models. Additionally, we propose to use post-critical wide-angle reflections in tomographic inversions for more reliable results, especially in the deeper parts of each layer. We focus on the effects of using hard-to-pick post-critical reflections on the final model uncertainty. Our study is performed using data collected during a standard vibroseis and explosive-source seismic reflection experiment focused on shale gas reservoir characterisation, carried out by the Polish Oil and Gas Company. Our data were gathered by standalone single-component stations deployed along the whole length of the 20 km long profile, resulting in significantly longer offsets. These piggyback recordings yielded good-quality wide-angle refraction and reflection arrivals clearly observable up to offsets of 12 km.

  17. The Formation of Laurentia: Evidence from Shear Wave Splitting and Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Liddell, M. V.; Bastow, I. D.; Rawlinson, N.; Darbyshire, F. A.; Gilligan, A.

    2017-12-01

    The northern Hudson Bay region of Canada comprises several Archean cratonic nuclei, assembled by Paleoproterozoic orogenies including the 1.8 Ga Trans-Hudson Orogen (THO) and Rinkian-Nagssugtoqidian Orogen (NO). Questions remain about how similar in scale and nature these orogens were compared to modern orogens like the Himalayas. Also in question is whether the thick Laurentian cratonic root below Hudson Bay is stratified, with a seismically-fast Archean core underlain by a lower, younger, thermal layer. We investigate these problems via shear-wave splitting and teleseismic tomography using up to 25 years of data from 65 broadband seismic stations across northern Hudson Bay. The results of the complementary studies comprise the most comprehensive study to date of mantle seismic velocity and anisotropy in northern Laurentia. Splitting parameter patterns are used to interpret multiple layers, lithospheric boundaries, dipping anisotropy, and deformation zone limits for the THO and NO. Source-side waveguide effects from Japan and the Aleutian trench are observed despite the tomographic data being exclusively relative arrival time. Mitigating steps to ensure data quality are explained and enforced. In the Hudson Strait, anisotropic fast directions (φ) generally parallel the THO, which appears in tomographic images as a strong low velocity feature relative to the neighbouring Archean cratons. Several islands in northern Hudson Bay show short length-scale changes in φ coincident with strong velocity contrasts. These are interpreted as distinct lithospheric blocks with unique deformational histories, and point to a complex, rather than simple 2-plate, collisional history for the THO. Strong evidence is presented for multiple anisotropic layers beneath Archean zones, consistent with the episodic development model of cratonic keels (e.g., Yuan & Romanowicz 2010). We show via both tomographic inversion models and SKS splitting patterns that southern Baffin Island was underthrust by the Superior plate; slow wavespeed material underlies this region, and modelling of SKS splitting patterns indicates a dipping anisotropic layer. This aligns our most up-to-date geophysical results with recent geological evidence (Weller et al., 2017) that the THO developed with modern plate-tectonic style interactions.

  18. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source

    PubMed Central

    Atwood, Robert C.; Bodey, Andrew J.; Price, Stephen W. T.; Basham, Mark; Drakopoulos, Michael

    2015-01-01

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an ‘orthogonal’ fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and ‘facility-independent’: it can run on standard cluster infrastructure at any institution. PMID:25939626

  19. Derivation of site-specific relationships between hydraulic parameters and p-wave velocities based on hydraulic and seismic tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brauchler, R.; Doetsch, J.; Dietrich, P.

    2012-01-10

    In this study, hydraulic and seismic tomographic measurements were used to derive a site-specific relationship between the geophysical parameter p-wave velocity and the hydraulic parameters diffusivity and specific storage. Our field study includes diffusivity tomograms derived from hydraulic travel time tomography, specific storage tomograms derived from hydraulic attenuation tomography, and p-wave velocity tomograms derived from seismic tomography. The tomographic inversion was performed in all three cases with the SIRT (Simultaneous Iterative Reconstruction Technique) algorithm, using a ray tracing technique with curved trajectories. The experimental set-up was designed such that the p-wave velocity tomogram overlaps the hydraulic tomograms by half. The experiments were performed at a well-characterized sand and gravel aquifer located in the Leine River valley near Göttingen, Germany. Access to the shallow subsurface was provided by direct-push technology. The high spatial resolution of hydraulic and seismic tomography was exploited to derive representative site-specific relationships between the hydraulic and geophysical parameters, based on the area where geophysical and hydraulic tests were performed. The transformation of the p-wave velocities into hydraulic properties was undertaken using a k-means cluster analysis. Results demonstrate that the combination of hydraulic and geophysical tomographic data is a promising approach to improve hydrogeophysical site characterization.
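
    For reference, the SIRT update used for all three tomograms can be sketched as follows; straight rays and a dense matrix are assumed here purely for brevity, whereas the study traced curved trajectories.

        import numpy as np

        def sirt(A, b, n_iters=50, relax=1.0):
            """Simultaneous Iterative Reconstruction Technique for b = A x, with A
            holding the ray path length in each cell and x the slowness (or its
            hydraulic analogue) to be reconstructed."""
            row_sum = A.sum(axis=1)
            row_sum[row_sum == 0] = 1.0
            col_sum = A.sum(axis=0)
            col_sum[col_sum == 0] = 1.0
            x = np.zeros(A.shape[1])
            for _ in range(n_iters):
                residual = (b - A @ x) / row_sum          # row-normalised data residuals
                x += relax * (A.T @ residual) / col_sum   # back-project, column-normalise
            return x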

  20. TomoPhantom, a software package to generate 2D-4D analytical phantoms for CT image reconstruction algorithm benchmarks

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.

    2018-01-01

    In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles and volumetric extensions of them. Newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing that is free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
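
    The idea of analytical projections can be illustrated without the TomoPhantom API: the parallel-beam projection of an isotropic Gaussian is itself a Gaussian, so a sinogram of an additive Gaussian phantom can be written in closed form rather than computed from a discretised image. The sketch below is a standalone illustration of that principle and does not reproduce TomoPhantom's interface.

        import numpy as np

        def gaussian_sinogram(gaussians, angles, offsets):
            """Analytical parallel-beam sinogram of a phantom built from isotropic
            Gaussians given as (amplitude, x0, y0, sigma) tuples. The Radon transform
            of an isotropic Gaussian is again a Gaussian, so no image is projected."""
            sino = np.zeros((len(angles), len(offsets)))
            for amp, x0, y0, sigma in gaussians:
                for i, theta in enumerate(angles):
                    centre = x0 * np.cos(theta) + y0 * np.sin(theta)   # projected centre
                    sino[i] += amp * sigma * np.sqrt(2 * np.pi) * \
                               np.exp(-(offsets - centre) ** 2 / (2 * sigma ** 2))
            return sino

        angles = np.linspace(0, np.pi, 180, endpoint=False)
        offsets = np.linspace(-1, 1, 256)
        sino = gaussian_sinogram([(1.0, 0.2, -0.1, 0.15), (0.5, -0.3, 0.3, 0.1)], angles, offsets)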

  1. Towards full waveform ambient noise inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure are quantified using Hessian-vector products.

  2. Full-waveform inversion for the Iranian plateau

    NASA Astrophysics Data System (ADS)

    Masouminia, N.; Fichtner, A.; Rahimi, H.

    2017-12-01

    We aim to obtain a detailed tomographic model for the Iranian plateau using full-waveform inversion. With this method, we intend to better constrain the 3-D structure of the crust and the upper mantle in the region. The Iranian plateau is a complex tectonic area resulting from the collision of the Arabian and Eurasian tectonic plates. The region is subject to complex tectonic processes such as the Makran subduction zone, which runs along the southeastern coast of Iran, and the convergence of the Arabian and Eurasian plates, which itself led to another subduction under Central Iran. This continent-continent collision has also caused shortening and crustal thickening, which can be seen today as the Zagros mountain range in the south and the Kopeh Dagh mountain range in the northeast. As a result of such tectonic activity, the crust and the mantle beneath the region are expected to be highly heterogeneous. To further our understanding of the region and its tectonic history, a detailed 3-D velocity model is required. To construct a 3-D model, we propose to use full-waveform inversion, which allows us to incorporate all types of waves recorded in the seismogram, including body waves as well as fundamental- and higher-mode surface waves. Exploiting more information from the observed data with this approach is likely to constrain features which have not been found by classical tomography studies so far. We address the forward problem using Salvus, a numerical wave propagation solver based on the spectral-element method and run on high-performance computers. The solver allows us to simulate wavefields propagating in highly heterogeneous, attenuating and anisotropic media, respecting the surface topography. To improve the model, we solve an optimization problem using an iterative approach which employs adjoint methods to calculate the gradient and uses steepest-descent and conjugate-gradient methods to minimize the objective function. Each iteration of this approach is expected to bring the model closer to the true model. Our model domain extends between 25°N and 40°N in latitude and 42°E and 63°E in longitude. To constrain the 3-D structure of the area we use 83 broadband seismic stations and 146 earthquakes with magnitude Mw > 4.5 that occurred in the region between 2012 and 2017.

  3. Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes

    USGS Publications Warehouse

    Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.

    2004-01-01

    We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are ≤ 50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.

  4. Applications of hybrid genetic algorithms in seismic tomography

    NASA Astrophysics Data System (ADS)

    Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos

    2011-11-01

    Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution-entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data-fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned approximation through model representation and manipulations, and has attracted the attention of the earth sciences community during the last decade, with several applications already presented for various geophysical problems. In this paper, we examine the efficiency of the combination of the typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used for testing the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
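
    The flavour of such a hybrid scheme can be conveyed with a toy sketch in which a genetic algorithm explores the model space globally while the best individual of each generation is polished by a damped least-squares step; the linear forward operator, population size and mutation level are illustrative assumptions, not the algorithm of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def misfit(model, G, d_obs):
            return np.sum((G @ model - d_obs) ** 2)

        def hybrid_ga(G, d_obs, n_pop=40, n_gen=30, bounds=(1.0, 6.0)):
            """Toy hybrid global/local scheme: GA exploration plus a damped
            least-squares refinement of the current best slowness model."""
            n_par = G.shape[1]
            pop = rng.uniform(*bounds, size=(n_pop, n_par))
            for _ in range(n_gen):
                fit = np.array([misfit(m, G, d_obs) for m in pop])
                parents = pop[np.argsort(fit)][: n_pop // 2]
                # crossover: average random parent pairs, then mutate
                idx = rng.integers(0, len(parents), size=(n_pop, 2))
                pop = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])
                pop += rng.normal(0.0, 0.05, size=pop.shape)
                np.clip(pop, *bounds, out=pop)
                # local refinement of the best model so far (damped least squares)
                best = parents[0]
                step, *_ = np.linalg.lstsq(G.T @ G + 0.1 * np.eye(n_par),
                                           G.T @ (d_obs - G @ best), rcond=None)
                pop[0] = np.clip(best + step, *bounds)
            fit = np.array([misfit(m, G, d_obs) for m in pop])
            return pop[np.argmin(fit)]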

  5. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
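
    A compact sketch of the string-averaging idea follows, using an absolute-deviation data term as an illustrative stand-in for a tomographic objective; the splitting, step rule and objective are assumptions, not the authors' code.

        import numpy as np

        def string_averaging_ism(A, b, n_strings=4, n_iters=100, step0=1.0):
            """String-averaging incremental subgradient method for the non-smooth
            objective sum_i |a_i . x - b_i|. Each string is processed independently
            from the current iterate and the string end-points are averaged."""
            m, n = A.shape
            strings = np.array_split(np.arange(m), n_strings)
            x = np.zeros(n)
            for k in range(1, n_iters + 1):
                step = step0 / k                      # diminishing step size
                endpoints = []
                for s in strings:                     # could run in parallel
                    y = x.copy()
                    for i in s:                       # incremental subgradient pass
                        g = np.sign(A[i] @ y - b[i]) * A[i]
                        y -= step * g
                    endpoints.append(y)
                x = np.mean(endpoints, axis=0)        # string averaging
            return x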

  6. Numerical methods for the inverse problem of density functional theory

    DOE PAGES

    Jensen, Daniel S.; Wasserman, Adam

    2017-07-17

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations, but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  7. Numerical methods for the inverse problem of density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Daniel S.; Wasserman, Adam

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations, but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  8. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call the event kernel. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies that a much greater number of phases per seismogram can be used compared to classical tomography, in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.
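
    Once the event kernels are available, the bookkeeping described above reduces to a few lines; the sketch below only illustrates summing hypothetical kernels into a misfit gradient and forming a Fletcher-Reeves conjugate-gradient search direction, not the SEM machinery itself.

        import numpy as np

        def cg_direction(grad, grad_prev=None, dir_prev=None):
            """Fletcher-Reeves conjugate-gradient search direction from the misfit
            gradient (the misfit kernel projected onto the model basis functions)."""
            if grad_prev is None:
                return -grad
            beta = (grad @ grad) / (grad_prev @ grad_prev)
            return -grad + beta * dir_prev

        # hypothetical event kernels, already projected onto basis functions
        event_kernels = [np.random.rand(100) for _ in range(5)]
        misfit_gradient = np.sum(event_kernels, axis=0)   # misfit kernel = sum of event kernels
        direction = cg_direction(misfit_gradient)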

  9. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, because it enforces elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
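
    As a rough illustration of regularising in a graph-spectral domain, the sketch below solves a linearised difference-EIT step with an l1 penalty on coefficients in the orthonormal eigenbasis of the mesh graph Laplacian, used here as a simple stand-in for the multiscale graph wavelet frame of the paper; the operators, penalty weight and iteration count are assumptions.

        import numpy as np

        def graph_sparse_step(J, dv, L, lam=1e-3, n_iters=200):
            """ISTA for min ||J s - dv||^2 + lam * ||c||_1 with s = W c, where the
            columns of W are the eigenvectors of the mesh graph Laplacian L."""
            _, W = np.linalg.eigh(L)                 # orthonormal graph-spectral basis
            B = J @ W                                # sensitivity matrix in that basis
            t = 1.0 / np.linalg.norm(B, 2) ** 2      # conservative step size
            c = np.zeros(B.shape[1])
            for _ in range(n_iters):
                c = c - t * (B.T @ (B @ c - dv))                       # gradient step
                c = np.sign(c) * np.maximum(np.abs(c) - t * lam, 0.0)  # soft threshold
            return W @ c                             # conductivity change on the mesh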

  10. Intensity-enhanced MART for tomographic PIV

    NASA Astrophysics Data System (ADS)

    Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun

    2016-05-01

    A novel technique to shrink elongated particles and suppress ghost particles in the particle reconstruction of tomographic particle image velocimetry is presented. This method, named intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity suppressing factor to improve the quality of particle reconstruction and consequently the precision of the velocimetry. A numerical assessment of vortex ring motion with and without image noise is performed to evaluate the new algorithm in terms of reconstruction quality, particle elongation and velocimetry. The simulation is performed at seven different seeding densities. The comparison of spatial-filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate the ghosts from actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the application of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.

  11. Probing the Detailed Seismic Velocity Structure of Subduction Zones Using Advanced Seismic Tomography Methods

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Thurber, C. H.

    2005-12-01

    Subduction zones are one of the most important components of the Earth's plate tectonic system. Knowing the detailed seismic velocity structure within and around subducting slabs is vital to understand the constitution of the slab, the cause of intermediate depth earthquakes inside the slab, the fluid distribution and recycling, and tremor occurrence [Hacker et al., 2001; Obara, 2002]. Thanks to the ability of double-difference tomography [Zhang and Thurber, 2003] to resolve the fine-scale structure near the source region and the favorable seismicity distribution inside many subducting slabs, it is now possible to characterize the fine details of the velocity structure and earthquake locations inside the slab, as shown in the study of the Japan subduction zone [Zhang et al., 2004]. We further develop the double-difference tomography method in two aspects: the first improvement is to use an adaptive inversion mesh rather than a regular inversion grid, and the second improvement is to determine a reliable Vp/Vs structure using various strategies rather than directly from Vp and Vs [see our abstract "Strategies to solve for a better Vp/Vs model using P and S arrival time" at Session T29]. The adaptive mesh seismic tomography method is based on tetrahedral diagrams and can automatically adjust the inversion mesh according to the ray distribution so that the inversion mesh nodes are denser where there are more rays and vice versa [Zhang and Thurber, 2005]. As a result, the number of inversion mesh nodes is greatly reduced compared to a regular inversion grid with comparable spatial resolution, and the tomographic system is more stable and better conditioned. This improvement is quite valuable for characterizing the fine structure of the subduction zone considering the highly uneven distribution of earthquakes within and around the subducting slab. The second improvement, to determine a reliable Vp/Vs model, lies in jointly inverting Vp, Vs, and Vp/Vs using P, S, and S-P times in a manner similar to double-difference tomography. Obtaining a reliable Vp/Vs model of the subduction zone is helpful for understanding its mechanical and petrologic properties. Our applications of the original version of double-difference tomography to several subduction zones beneath northern Honshu, Japan, the Wellington region, New Zealand, and Alaska, United States, have shown evident velocity variations within and around the subducting slab, which is likely evidence of dehydration reactions of various hydrous minerals that are hypothesized to be responsible for intermediate depth earthquakes. We will show the new velocity models for these subduction zones by applying our advanced tomographic methods.

  12. Canopy Height and Vertical Structure from Multibaseline Polarimetric InSAR: First Results of the 2016 NASA/ESA AfriSAR Campaign

    NASA Astrophysics Data System (ADS)

    Lavalle, M.; Hensley, S.; Lou, Y.; Saatchi, S. S.; Pinto, N.; Simard, M.; Fatoyinbo, T. E.; Duncanson, L.; Dubayah, R.; Hofton, M. A.; Blair, J. B.; Armston, J.

    2016-12-01

    In this paper we explore the derivation of canopy height and vertical structure from polarimetric-interferometric SAR (PolInSAR) data collected during the 2016 AfriSAR campaign in Gabon. AfriSAR is a joint effort between NASA and ESA to acquire multi-baseline L- and P-band radar data, lidar data and field data over tropical forest and savannah sites to support calibration, validation and algorithm development in preparation for the NISAR, GEDI and BIOMASS missions. Here we focus on the L-band UAVSAR dataset acquired over the Lope National Park in Central Gabon to demonstrate mapping of canopy height and vertical structure using PolInSAR and tomographic techniques. The Lope site features a natural gradient of forest biomass from the forest-savanna boundary (< 100 Mg/ha) to dense undisturbed humid tropical forests (> 400 Mg/ha). Our dataset includes 9 long-baseline, full-polarimetric UAVSAR acquisitions along with field and lidar data from the Laser Vegetation Ice Sensor (LVIS). We first present a brief theoretical background of the PolInSAR and tomographic techniques. We then show the results of our PolInSAR algorithms to create maps of canopy height generated via inversion of the random-volume-over-ground (RVoG) and random-motion-over-ground (RMoG) models. In our approach multiple interferometric baselines are merged incoherently to maximize the interferometric sensitivity over a broad range of tree heights. Finally we show how traditional tomographic algorithms are used for the retrieval of the full vertical canopy profile. We compare our results from the different PolInSAR/tomographic algorithms to validation data derived from lidar and field data.

  13. The upper mantle structure of the central Rio Grande rift region from teleseismic P and S wave travel time delays and attenuation

    USGS Publications Warehouse

    Slack, P.D.; Davis, P.M.; Baldridge, W.S.; Olsen, K.H.; Glahn, A.; Achauer, U.; Spence, W.

    1996-01-01

    The lithosphere beneath a continental rift should be significantly modified due to extension. To image the lithosphere beneath the Rio Grande rift (RGR), we analyzed teleseismic travel time delays of both P and S wave arrivals and solved for the attenuation of P and S waves for four seismic experiments spanning the Rio Grande rift. Two tomographic inversions of the P wave travel time data are given: an Aki-Christofferson-Husebye (ACH) block model inversion and a downward projection inversion. The tomographic inversions reveal a NE-SW to NNE-SSW trending feature at depths of 35 to 145 km with a velocity reduction of 7 to 8% relative to mantle velocities beneath the Great Plains. This region correlates with the transition zone between the Colorado Plateau and the Rio Grande rift and is bounded on the NW by the Jemez lineament, a N52°E trending zone of late Miocene to Holocene volcanism. S wave delays plotted against P wave delays are fit with a straight line giving a slope of 3.0±0.4. This correlation and the absolute velocity reduction imply that temperatures in the lithosphere are close to the solidus, consistent with, but not requiring, the presence of partial melt in the mantle beneath the Rio Grande rift. The attenuation data could imply the presence of partial melt. We compare our results with other geophysical and geologic data. We propose that any north-south trending thermal (velocity) anomaly that may have existed in the upper mantle during earlier (Oligocene to late Miocene) phases of rifting and that may have correlated with the axis of the rift has diminished with time and has been overprinted with more recent structure. The anomalously low-velocity body presently underlying the transition zone between the core of the Colorado Plateau and the rift may reflect processes resulting from the modern (Pliocene to present) regional stress field (oriented WNW-ESE), possibly heralding future extension across the Jemez lineament and transition zone.

  14. Close-up to the stimulation phase of a EGS geothermal site: mapping the time-evolution of the subsurface elastic parameters using a trans-dimensional Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, Nicola; Calo', Marco

    2014-05-01

    Stimulation of geothermal wells through hydraulic injection is the most common way to increase secondary porosity in hot dry rock geothermal reservoirs. As documented worldwide, injection of over-pressurized fluids into the subsurface creates a diffuse pattern of microseismicity confined to the portion of the crustal volume around the injection well. Such "pseudo"-natural seismicity can be a valuable source of information about the elastic properties of the rock in the volume directly below the geothermal site and about their time evolution during fluid injection. Classical methods (e.g. Local Earthquake Tomography, LET) have been applied to image how the rocks interact with the flow of over-pressurized fluids. Repeating the LET computation using consecutive sets of events produces a time series of P-wave velocity models which can be analyzed to capture the time variation of the elastic properties. Such approaches, based on a linearized solution of the tomographic inverse problem, can give a qualitative idea of the behavior of the rocks, but they cannot be used to quantify this interaction, due to the well-known issues which affect LET results, such as the strong link between the "final" and the "starting" model (i.e. the "final" model must be a small perturbation of the "starting" model), model parameterization, damping of the covariance matrix, etc. Also, the robustness of the retrieved models cannot be easily assessed due to the difficulty of determining the absolute errors on the Vp parameters themselves. Thus, it can be challenging to understand whether the fluctuations in the elastic properties remain within the estimated errors. In this study we present the results of a full 4D local earthquake tomography obtained with the P- and S-wave arrival times of 600 seismic events recorded in 2000 during the stimulation of the GPK2 well of the Enhanced Geothermal System located at Soultz-sous-Forêts (France). We focus on the initial stage, when the injection rate was increased abruptly from 30 l/s to 40 l/s. This operation lasted less than 13 hours and generated a large number of events, almost evenly distributed in time. This stage has been analyzed in detail using a linearized tomographic inversion code improved with a post-processing step (WAM) which highlighted the fluctuations in the Vp velocity near the well-head over a few-hour time scale and a few-hundred-metre spatial scale (Calo' et al, GJI, 2011). The approach adopted (LET+WAM) provided only a rough estimate of the error distribution in the models, which proved unsatisfactory for assessing the reliability of some important velocity variations observed over time. Solving the LET inverse problem using a trans-dimensional Monte Carlo method now gives us the possibility to fully quantify the errors associated with the retrieved Vp and Vp/Vs models and enables us to evaluate the robustness of the fluctuations in the elastic properties during the injection phase.

  15. Full seismic waveform tomography for upper-mantle structure in the Australasian region using adjoint methods

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; Kennett, Brian L. N.; Igel, Heiner; Bunge, Hans-Peter

    2009-12-01

    We present a full seismic waveform tomography for upper-mantle structure in the Australasian region. Our method is based on spectral-element simulations of seismic wave propagation in 3-D heterogeneous earth models. The accurate solution of the forward problem ensures that waveform misfits are solely due to as yet undiscovered Earth structure and imprecise source descriptions, thus leading to more realistic tomographic images and source parameter estimates. To reduce the computational costs, we implement a long-wavelength equivalent crustal model. We quantify differences between the observed and the synthetic waveforms using time-frequency (TF) misfits. Their principal advantages are the separation of phase and amplitude misfits, the exploitation of complete waveform information and a quasi-linear relation to 3-D Earth structure. Fréchet kernels for the TF misfits are computed via the adjoint method. We propose a simple data compression scheme and an accuracy-adaptive time integration of the wavefields that allows us to reduce the storage requirements of the adjoint method by almost two orders of magnitude. To minimize the waveform phase misfit, we implement a pre-conditioned conjugate gradient algorithm. Amplitude information is incorporated indirectly by a restricted line search. This ensures that the cumulative envelope misfit does not increase during the inversion. An efficient pre-conditioner is found empirically through numerical experiments. It prevents the concentration of structural heterogeneity near the sources and receivers. We apply our waveform tomographic method to ~1000 high-quality vertical-component seismograms, recorded in the Australasian region between 1993 and 2008. The waveforms comprise fundamental- and higher-mode surface and long-period S body waves in the period range from 50 to 200 s. To improve the convergence of the algorithm, we implement a 3-D initial model that contains the long-wavelength features of the Australasian region. Resolution tests indicate that our algorithm converges after around 10 iterations and that both long- and short-wavelength features in the uppermost mantle are well resolved. There is evidence for effects related to the non-linearity in the inversion procedure. After 11 iterations we fit the data waveforms acceptably well; with no significant further improvements to be expected. During the inversion the total fitted seismogram length increases by 46 per cent, providing a clear indication of the efficiency and consistency of the iterative optimization algorithm. The resulting SV-wave velocity model reveals structural features of the Australasian upper mantle with great detail. We confirm the existence of a pronounced low-velocity band along the eastern margin of the continent that can be clearly distinguished against Precambrian Australia and the microcontinental Lord Howe Rise. The transition from Precambrian to Phanerozoic Australia (the Tasman Line) appears to be sharp down to at least 200 km depth. It mostly occurs further east of where it is inferred from gravity and magnetic anomalies. Also clearly visible are the Archean and Proterozoic cratons, the northward continuation of the continent and anomalously low S-wave velocities in the upper mantle in central Australia. This is, to the best of our knowledge, the first application of non-linear full seismic waveform tomography to a continental-scale problem.

  16. Crustal and Upper Mantle Investigations Using Receiver Functions and Tomographic Inversion in the Southern Puna Plateau Region of the Central Andes

    NASA Astrophysics Data System (ADS)

    Heit, B.; Yuan, X.; Bianchi, M.; Jakovlev, A.; Kumar, P.; Kay, S. M.; Sandvol, E. A.; Alonso, R.; Coira, B.; Comte, D.; Brown, L. D.; Kind, R.

    2011-12-01

    We present here the results obtained using the data from our passive seismic array in the southern Puna plateau between 25°S and 28°S latitude in Argentina and Chile. First, we calculated P and S receiver functions in order to investigate the Moho thickness and other seismic discontinuities in the study area. The RF data show that the northern Puna plateau has a thicker crust and that the Moho topography is more irregular along strike. The seismic structure and thickness of the continental crust and the lithospheric mantle beneath the southern Puna plateau reveal that the LAB is deeper to the north of the array, suggesting lithospheric removal towards the south. We then performed a joint inversion of teleseismic and regional tomographic data in order to study the distribution of velocity anomalies that could help us better understand the evolution of the Andean elevated plateau and the role of lithosphere-asthenosphere interactions in this region. Low velocities are observed in correlation with young volcanic centers (e.g. Ojos del Salado, Cerro Blanco, Galan) and agree very well with the position of crustal lineaments in the region. This suggests a close relationship between magmatism and lithospheric structures at crustal scale, coinciding with the presence of hot asthenospheric material at the base of the crust, probably induced by lithospheric foundering.

  17. Code for Calculating Regional Seismic Travel Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
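
    The residual bookkeeping common to both applications is simple to state; in the sketch below, predict_tt is a hypothetical stand-in for a forward travel-time calculator such as RSTT, not the actual RSTT API.

        import numpy as np

        def travel_time_residuals(observations, predict_tt, model):
            """Residual = observed minus predicted travel time for each
            (source, receiver, observed_time) triple; the RMS of the residuals is
            the quantity an inversion or location code tries to minimise."""
            residuals = np.array([obs_tt - predict_tt(model, src, rcv)
                                  for src, rcv, obs_tt in observations])
            rms = np.sqrt(np.mean(residuals ** 2))
            return residuals, rms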

  18. Numerical Simulations to Assess ART and MART Performance for Ionospheric Tomography of Chapman Profiles.

    PubMed

    Prol, Fabricio S; Camargo, Paulo O; Muella, Marcio T A H

    2017-01-01

    The incomplete geometrical coverage of the Global Navigation Satellite System (GNSS) makes the ionospheric tomographic system an ill-conditioned problem for ionospheric imaging. In order to detect the principal limitations of the ill-conditioned tomographic solutions, numerical simulations of the ionosphere are under constant investigation. In this paper, we show an investigation of the accuracy of the Algebraic Reconstruction Technique (ART) and Multiplicative ART (MART) for performing tomographic reconstruction of Chapman profiles using a simulated optimum scenario of GNSS signals tracked by ground-based receivers. Chapman functions were used to represent the ionospheric morphology, and a set of analyses was conducted to assess ART and MART performance for estimating the Total Electron Content (TEC) and the parameters that describe the Chapman function. The results showed that MART performed better in the reconstruction of the electron density peak, while ART gave a better representation for estimating TEC and the shape of the ionosphere. Since we used an optimum scenario of GNSS signals, the analyses indicate the intrinsic problems that may occur with ART and MART in recovering valuable information for many applications in telecommunication, spatial geodesy and space weather.
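
    The multiplicative update that distinguishes MART from ART can be sketched as follows; the geometry matrix, relaxation value and uniform starting model are illustrative assumptions (practical schemes often start from a climatological ionosphere rather than a constant field).

        import numpy as np

        def mart(A, b, n_iters=20, relax=0.2):
            """Multiplicative ART for b = A x: cells are corrected by ratios of
            measured to modelled slant TEC, so the reconstruction stays positive."""
            x = np.ones(A.shape[1])
            for _ in range(n_iters):
                for i in range(A.shape[0]):
                    pred = A[i] @ x
                    peak = A[i].max()
                    if pred <= 0.0 or peak == 0.0:
                        continue
                    # exponent weights the correction by each cell's share of ray i
                    x *= (b[i] / pred) ** (relax * A[i] / peak)
            return x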

  19. A hydraulic tomography approach coupling travel time inversion with steady shape analysis based on aquifer analogue study in coarsely clastic fluvial glacial deposit

    NASA Astrophysics Data System (ADS)

    Hu, R.; Brauchler, R.; Herold, M.; Bayer, P.; Sauter, M.

    2009-04-01

    Rarely is it possible to draw significant conclusions about the geometry and properties of geological structures in the subsurface using the information typically obtained from boreholes, since soil exploration is only representative of the position where the soil sample is taken. Conventional aquifer investigation methods like pumping tests can provide hydraulic properties of a larger area; however, they yield only integral information. This information is insufficient to develop groundwater models, especially contaminant transport models, which require information about the spatial distribution of the hydraulic properties of the subsurface. Hydraulic tomography is an innovative method which has the potential to spatially resolve three-dimensional structures of natural aquifer bodies. The method employs short-term hydraulic tests performed between two or more wells, whereby the pumped intervals (sources) and the observation points (receivers) are separated by double packer systems. In order to optimize the computationally intensive tomographic inversion of transient hydraulic data, we have decided to couple two inversion approaches: (a) hydraulic travel time inversion and (b) steady shape inversion. (a) Hydraulic travel time inversion is based on the solution of the travel time integral, which describes the relationship between the travel time of the maximum signal variation of a transient hydraulic signal and the diffusivity between source and receiver. The travel time inversion is computationally extremely effective and robust; however, it is limited to the determination of diffusivity. In order to overcome this shortcoming, we use the estimated diffusivity distribution as the starting model for the steady shape inversion, with the goal of separating the estimated diffusivity distribution into its components, hydraulic conductivity and specific storage. (b) The steady shape inversion utilizes the fact that at steady shape conditions, drawdown varies with time but the hydraulic gradient does not. In this way, transient data can be analyzed with the computational efficiency of a steady state model, which proceeds hundreds of times faster than transient models. Finally, a specific storage distribution can be calculated from the diffusivity and hydraulic conductivity reconstructions derived from travel time and steady shape inversion. The groundwork of this study is the aquifer-analogue study of BAYER (1999), in which six parallel profiles of a natural sedimentary body with a size of 16 m x 10 m x 7 m were mapped in high resolution with respect to structural and hydraulic parameters. Based on these results and using geostatistical interpolation methods, MAJI (2005) designed a three-dimensional hydraulic model with a resolution of 5 cm x 5 cm x 5 cm. This hydraulic model was used to simulate a large number of short-term pumping tests in a tomographic array. The high-resolution parameter reconstructions gained from the inversion of simulated pumping test data demonstrate that the proposed inversion scheme allows reconstruction of the individual architectural elements and their hydraulic properties with a higher resolution compared to conventional hydraulic and geological investigation methods. Bayer P (1999) Aquifer-Analog-Studium in grobklastischen braided river Ablagerungen: Sedimentäre/hydrogeologische Wandkartierung und Kalibrierung von Georadarmessungen, Diplomkartierung am Lehrstuhl für Angewandte Geologie, Universität Tübingen, 25 pp. Maji, R. (2005) Conditional Stochastic Modelling of DNAPL Migration and Dissolution in a High-resolution Aquifer Analog, Ph.D. thesis at the University of Waterloo, 187 pp.
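
    A note on the final step above: since hydraulic diffusivity is defined as D = K/Ss, the specific storage field follows cell by cell once the travel-time inversion has delivered D and the steady-shape inversion has delivered K. A minimal sketch, assuming hypothetical arrays on a common grid:

      import numpy as np

      def specific_storage(K, D, eps=1e-30):
          """Specific storage Ss = K / D from conductivity K [m/s] and diffusivity D [m^2/s]."""
          return np.asarray(K) / np.maximum(np.asarray(D), eps)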

  20. Joint refraction and reflection travel-time tomography of multichannel and wide-angle seismic data

    NASA Astrophysics Data System (ADS)

    Begovic, Slaven; Meléndez, Adrià; Ranero, César; Sallarès, Valentí

    2017-04-01

    Both near-vertical multichannel (MCS) and wide-angle (WAS) seismic data are sensitive to the same properties of the sampled model, but they are commonly interpreted and modeled using different approaches. Traditional MCS images provide good information on the position and geometry of reflectors, especially in shallow, commonly sedimentary layers, but contain limited or no refracted waves, which severely hampers the retrieval of velocity information. Compared to MCS data, conventional wide-angle seismic (WAS) travel-time tomography uses sparse data (generally stations are spaced by several kilometers). While it has refractions that allow retrieving velocity information, the data sparsity makes it difficult to define the velocity and the geometry of geologic boundaries (reflectors) with the appropriate resolution, especially at the shallowest crustal levels. A well-known strategy to overcome these limitations is to combine MCS and WAS data into a common inversion strategy. However, the number of available codes that can jointly invert both types of data is limited. We have adapted the well-known and widely-used joint refraction and reflection travel-time tomography code tomo2d (Korenaga et al, 2000), and its 3D version tomo3d (Meléndez et al, 2015), to implement streamer data and multichannel acquisition geometries. This allows performing joint travel-time tomographic inversion based on refracted and reflected phases from both WAS and MCS data sets. We show, with a series of synthetic tests following a layer-stripping strategy, that by combining these two data sets in a joint travel-time tomographic method the drawbacks of each data set are notably reduced. First, we tested the traditional travel-time inversion scheme using only WAS data (refracted and reflected phases) with a typical acquisition geometry of one ocean bottom seismometer (OBS) every 10 km. Second, we jointly inverted WAS refracted and reflected phases with only streamer (MCS) reflection travel-times. Finally, we performed a joint inversion of the combined refracted and reflected phases from both data sets. The synthetic MCS data set was produced for an 8 km-long streamer, and the refracted phases used for the streamer were downward continued (projected onto the seafloor). Taking advantage of the high redundancy of MCS data, the definition of the geometry of reflectors and the velocity of the uppermost layers are much improved. Additionally, long-offset wide-angle refracted phases minimize the velocity-depth trade-off of reflection travel-time inversion. As a result, the obtained models have increased accuracy in both velocity and reflector geometry as compared to the independent inversion of each data set. This is further corroborated by a statistical parameter uncertainty analysis that explores the effects of the unknown initial model and of data noise in the linearized inversion scheme.
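
    Schematically, the joint inversion described above can be thought of as minimizing a single weighted travel-time misfit over both data sets, for instance of the generic form below (not necessarily the exact functional implemented in tomo2d/tomo3d):

      \chi^2(\mathbf{m}) = \sum_{i \in \mathrm{WAS}} \frac{\left(t_i^{\mathrm{obs}} - t_i^{\mathrm{calc}}(\mathbf{m})\right)^2}{\sigma_i^2}
                         + \sum_{j \in \mathrm{MCS}} \frac{\left(t_j^{\mathrm{obs}} - t_j^{\mathrm{calc}}(\mathbf{m})\right)^2}{\sigma_j^2}
                         + \lambda \left\lVert \mathbf{L}\,\mathbf{m} \right\rVert^2 ,

    where the model vector m contains both slowness and reflector-depth nodes, the picking uncertainties sigma_i control the relative weight of the two data sets, and the last term is the smoothing regularization with operator L and trade-off parameter lambda.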

  1. Mantle P wave travel time tomography of Eastern and Southern Africa: New images of mantle upwellings

    NASA Astrophysics Data System (ADS)

    Benoit, M. H.; Li, C.; van der Hilst, R.

    2006-12-01

    Much of Eastern Africa, including Ethiopia, Kenya, and Tanzania, has undergone extensive tectonism, including rifting, uplift, and volcanism during the Cenozoic. The cause of this tectonism is often attributed to the presence of one or more mantle upwellings, including starting thermal plumes and superplumes. Previous regional seismic studies and global tomographic models show conflicting results regarding the spatial and thermal characteristics of these upwellings. Additionally, there are questions concerning the extent to which the Archean and Proterozoic lithosphere has been altered by possible thermal upwellings in the mantle. To further constrain the mantle structure beneath Southern and Eastern Africa and to investigate the origin of the tectonism in Eastern Africa, we present preliminary results of a large-scale P wave travel time tomographic study of the region. We invert travel time measurements from the EHB database together with travel time measurements from regional PASSCAL datasets, including the Ethiopia Broadband Seismic Experiment (2000-2002); the Kenya Broadband Seismic Experiment (2000-2002); the Southern Africa Seismic Experiment (1997-1999); the Tanzania Broadband Seismic Experiment (1995-1997); and the Saudi Arabia PASSCAL Experiment (1995-1997). The tomographic inversion uses 3-D sensitivity kernels to combine the different datasets and is parameterized with an irregular grid so that high spatial resolution can be obtained in areas of dense data coverage. The inverse problem is solved in a least-squares framework using the LSQR method with norm and gradient damping.
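
    For readers unfamiliar with LSQR-based tomography, a damped least-squares step of this kind can be reproduced with SciPy's LSQR solver. This is a generic sketch with a random sparse stand-in for the sensitivity matrix, not the actual inversion code of the study; norm damping enters through the `damp` argument, while gradient damping would require appending a scaled roughness operator to the system.

      import numpy as np
      from scipy.sparse import random as sparse_random
      from scipy.sparse.linalg import lsqr

      G = sparse_random(2000, 500, density=0.01, format="csr", random_state=0)  # stand-in kernel matrix
      r = np.random.default_rng(0).normal(size=2000)                            # stand-in travel time residuals

      dm, istop, itn = lsqr(G, r, damp=0.1)[:3]   # damped least-squares slowness perturbation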

  2. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.
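
    The inter-station correlation that conventional noise tomography interprets as an empirical Green's function can be computed, in its simplest form, as below; this sketch deliberately omits the preprocessing (temporal normalization, spectral whitening, stacking over many windows) that practical workflows require.

      import numpy as np

      def noise_cross_correlation(u1, u2, dt, max_lag_s):
          """FFT-based cross-correlation of two equal-length, simultaneous noise records."""
          n = len(u1)
          nfft = 2 * n
          spec = np.fft.rfft(u1, nfft) * np.conj(np.fft.rfft(u2, nfft))
          cc = np.fft.irfft(spec, nfft)
          cc = np.concatenate((cc[-(n - 1):], cc[:n]))   # reorder to lags -(n-1)..(n-1)
          lags = np.arange(-(n - 1), n) * dt
          keep = np.abs(lags) <= max_lag_s
          return lags[keep], cc[keep]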

  3. Subtalar joint stress imaging with tomosynthesis.

    PubMed

    Teramoto, Atsushi; Watanabe, Kota; Takashima, Hiroyuki; Yamashita, Toshihiko

    2014-06-01

    The purpose of this study was to perform stress imaging of hindfoot inversion and eversion using tomosynthesis and to assess the subtalar joint range of motion (ROM) of healthy subjects. The subjects were 15 healthy volunteers with a mean age of 29.1 years. Coronal tomosynthesis stress imaging of the subtalar joint was performed in a total of 30 left and right ankles. A Telos stress device was used for the stress load, and the load was 150 N for both inversion and eversion. Tomographic images in which the posterior talocalcaneal joint could be confirmed on the neutral position images were used in measurements. The angle of the intersection formed by a line through the lateral articular facet of the posterior talocalcaneal joint and a line through the surface of the trochlea of the talus was measured. The mean change in the angle of the calcaneus with respect to the talus was 10.3 ± 4.8° with inversion stress and 5.0 ± 3.8° with eversion stress from the neutral position. The result was a clearer depiction of the subtalar joint, and inversion and eversion ROM of the subtalar joint was shown to be about 15° in healthy subjects. Diagnostic, Level IV.

  4. Construction of the seismic wave-speed model by adjoint tomography beneath the Japanese metropolitan area

    NASA Astrophysics Data System (ADS)

    Miyoshi, Takayuki

    2017-04-01

    The Japanese metropolitan area has high risks of earthquakes and volcanoes associated with convergent tectonic plates. It is important to clarify the detailed three-dimensional structure for understanding tectonics and predicting strong motion. Classical tomographic studies based on ray theory have revealed seismotectonics and volcanic tectonics in the region; however, it is unknown whether their models reproduce observed seismograms. In the present study, we construct a new seismic wave-speed model by using waveform inversion. Adjoint tomography and the spectral element method (SEM) were used in the inversion (e.g. Tape et al. 2009; Peter et al. 2011). We used broadband seismograms obtained at NIED F-net stations for 140 earthquakes that occurred beneath the Kanto district. We selected four frequency bands between 5 and 30 sec and proceeded from the longer-period bands to the shorter ones in the inversion. Tomographic iterations were conducted until the misfit between data and synthetics was minimized. Our SEM model has 16 million grid points covering the metropolitan area of the Kanto district. The model parameters were the Vp and Vs of the grid points; density and attenuation were updated to new values depending on the new Vs in each iteration. The initial model was the tomographic model of Matsubara and Obara (2011) based on ray theory. The source parameters were taken from the F-net catalog, while the centroid times were inferred from comparison between data and synthetics. We simulated the forward and adjoint wavefields of each event and obtained Vp and Vs misfit kernels from their interaction. The large-scale computation was conducted on the K computer, RIKEN. We obtained the final model (m16) after 16 iterations in the present study. In terms of waveform fit, m16 is clearly better than the initial model; the improvement is largest in period bands longer than 8 sec and for events that occurred deeper than 30 km. We found distinct low wave-speed patterns in the S-wave structure. One of the patterns extends in the E-W direction around a depth of 40 km. This zone was interpreted as the serpentinized mantle above the Philippine Sea slab (e.g. Kamiya and Kobayashi 2000). We also found a low wave-speed zone around a depth of 5 km; it appears to extend along the Median Tectonic Line and corresponds to the sedimentary layer. We thank the NIED for providing seismic data, and also thank the researchers for providing the SPECFEM Cartesian program package.
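
    The iterative logic described above (simulate, measure the misfit, obtain the gradient from the interaction of forward and adjoint fields, update the model) can be illustrated with a deliberately simplified toy in which the wave simulations are replaced by a linear operator and its transpose; this is only an analogy for the structure of the loop, not the SEM/adjoint machinery itself.

      import numpy as np

      rng = np.random.default_rng(1)
      G = rng.normal(size=(200, 50))        # stand-in linear forward operator
      m_true = rng.normal(size=50)
      d_obs = G @ m_true                    # stand-in "observed" data

      m = np.zeros(50)                      # initial model
      for it in range(16):                  # cf. the 16 iterations leading to m16
          r = G @ m - d_obs                 # synthetics minus data
          grad = G.T @ r                    # adjoint (transpose) action gives the gradient
          Gg = G @ grad
          step = (grad @ grad) / (Gg @ Gg)  # exact line search for this quadratic misfit
          m = m - step * grad               # model update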

  5. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at zero frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were also applied to the problem of image registration.

  6. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth's crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  7. Adults' understanding of inversion concepts: how does performance on addition and subtraction inversion problems compare to performance on multiplication and division inversion problems?

    PubMed

    Robinson, Katherine M; Ninowski, Jerilyn E

    2003-12-01

    Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.

  8. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, provided one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
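
    A minimal numerical sketch of this decomposition, assuming two quadratic (least-squares) component objectives and a consensus-style augmented-Lagrangian iteration; the alternation between separate component solves and multiplier updates mirrors the algorithm described above, though the actual implementation details of the study may differ.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 30
      G1, G2 = rng.normal(size=(60, n)), rng.normal(size=(80, n))   # two data subsets
      m_true = rng.normal(size=n)
      d1, d2 = G1 @ m_true, G2 @ m_true

      rho = 1.0
      m = np.zeros(n)                        # common (full-problem) model
      y1, y2 = np.zeros(n), np.zeros(n)      # Lagrange multipliers
      I = np.eye(n)
      for it in range(50):
          # separate component solves, each seeing only its own data subset
          m1 = np.linalg.solve(G1.T @ G1 + rho * I, G1.T @ d1 + rho * m - y1)
          m2 = np.linalg.solve(G2.T @ G2 + rho * I, G2.T @ d2 + rho * m - y2)
          # consensus update steers the component models toward a common solution
          m = 0.5 * (m1 + y1 / rho + m2 + y2 / rho)
          y1 += rho * (m1 - m)
          y2 += rho * (m2 - m)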

  9. Influence of the limited detector size on spatial variations of the reconstruction accuracy in holographic tomography

    NASA Astrophysics Data System (ADS)

    Kostencka, Julianna; Kozacki, Tomasz; Hennelly, Bryan; Sheridan, John T.

    2017-06-01

    Holographic tomography (HT) allows noninvasive, quantitative, 3D imaging of transparent microobjects, such as living biological cells and fiber optic elements. The technique is based on the acquisition of multiple scattered fields for various sample perspectives using digital holographic microscopy. The captured data is then processed with one of the tomographic reconstruction algorithms, which enables 3D reconstruction of the refractive index distribution. In our recent works we addressed the issue of the spatially variant accuracy of HT reconstructions, which results from the insufficient model of diffraction that is applied in the widely-used tomographic reconstruction algorithms based on the Rytov approximation. In the present study, we continue investigating the spatially variant properties of HT imaging; however, we now focus on the limited spatial size of holograms as a source of this problem. Using the Wigner distribution representation and the Ewald sphere approach, we show that the limited size of the holograms results in a decreased quality of tomographic imaging in off-center regions of the HT reconstructions. This is because the finite detector extent becomes a limiting aperture that prohibits acquisition of full information about diffracted fields coming from the out-of-focus structures of a sample. The incompleteness of the data results in an effective truncation of the tomographic transfer function for the out-of-center regions of the tomographic image. In this paper, the described effect is quantitatively characterized for three types of tomographic systems: the configuration with 1) object rotation, 2) scanning of the illumination direction, and 3) the hybrid HT solution combining both previous approaches.

  10. Slab seismicity in the Western Hellenic Subduction Zone: Constraints from tomography and double-difference relocation

    NASA Astrophysics Data System (ADS)

    Halpaap, Felix; Rondenay, Stéphane; Ottemöller, Lars

    2016-04-01

    The Western Hellenic subduction zone is characterized by a transition from oceanic to continental subduction. In the southern oceanic portion of the system, abundant seismicity reaches intermediate depths of 100-120 km, while the northern continental portion rarely exhibits deep earthquakes. Our study aims to investigate how this oceanic-continental transition affects fluid release and related seismicity along strike, by focusing on the distribution of intermediate-depth earthquakes. To obtain a detailed image of the seismicity, we carry out a tomographic inversion for P- and S-velocities and double-difference earthquake relocation using a dataset of unprecedented spatial coverage in this area. Here we present results of these analyses in conjunction with high-resolution profiles from migrated receiver function images obtained from the MEDUSA experiment. We generate tomographic models by inverting data from 237 manually picked, well-locatable events recorded at up to 130 stations. Stations from the permanent Greek network and the EGELADOS experiment supplement the 3-D coverage of the modeled domain, which covers a large part of mainland Greece and surrounding offshore areas. Corrections for the sphericity of the Earth and our update to the SIMULR16 package, which now allows S-inversion, help improve our previous models. Flexible gridding focuses the inversion on the domains of highest gradient around the slab, and we evaluate the resolution with checkerboard tests. We use the resulting velocity model to relocate earthquakes via the double-difference method, using a large dataset of differential traveltimes obtained by cross-correlation of seismograms. Tens of earthquakes align along two planes forming a double seismic zone in the southern, oceanic portion of the subduction zone. With increasing subduction depth, the earthquakes appear closer to the center of the slab, outlining probable deserpentinization of the slab and concomitant eclogitization of dry crustal rocks. Against expectations, we relocate one robust deep event at ≈70 km depth in the northern, continental part of the subduction zone.

  11. Wavefield complexity and stealth structures: Resolution constraints by wave physics

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Leng, K.

    2017-12-01

    Imaging the Earth's interior relies on understanding how waveforms encode information from heterogeneous multi-scale structure. This relation is given by elastodynamics, but forward modeling in the context of tomography primarily serves to deliver synthetic waveforms and gradients for the inversion procedure. While this is entirely appropriate, it depreciates a wealth of complementary inference that can be obtained from the complexity of the wavefield. Here, we are concerned with the imprint of realistic multi-scale Earth structure on the wavefield, and the question on the inherent physical resolution limit of structures encoded in seismograms. We identify parameter and scattering regimes where structures remain invisible as a function of seismic wavelength, structural multi-scale geometry, scattering strength, and propagation path. Ultimately, this will aid in interpreting tomographic images by acknowledging the scope of "forgotten" structures, and shall offer guidance for optimising the selection of seismic data for tomography. To do so, we use our novel 3D modeling method AxiSEM3D which tackles global wave propagation in visco-elastic, anisotropic 3D structures with undulating boundaries at unprecedented resolution and efficiency by exploiting the inherent azimuthal smoothness of wavefields via a coupled Fourier expansion-spectral-element approach. The method links computational cost to wavefield complexity and thereby lends itself well to exploring the relation between waveforms and structures. We will show various examples of multi-scale heterogeneities which appear or disappear in the waveform, and argue that the nature of the structural power spectrum plays a central role in this. We introduce the concept of wavefield learning to examine the true wavefield complexity for a complexity-dependent modeling framework and discriminate which scattering structures can be retrieved by surface measurements. This leads to the question of physical invisibility and the tomographic resolution limit, and offers insight as to why tomographic images still show stark differences for smaller-scale heterogeneities despite progress in modeling and data resolution. Finally, we give an outlook on how we expand this modeling framework towards an inversion procedure guided by wavefield complexity.

  12. Imaging of the Galapagos Plume Using a Network of Mermaids

    NASA Astrophysics Data System (ADS)

    Nolet, G.; Hello, Y.; Chen, J.; Pazmino, A.; Van der Lee, S.; Bonnieux, S.; Deschamps, A.; Regnier, M. M.; Font, Y.; Simons, F.

    2017-12-01

    A network of nine submarine seismographs (Mermaids) floated freely from 2014 to 2016 around the Galapagos islands, with the aim of enhancing the resolving power of deep tomographic images of the mantle plume in this region (see poster by Hello et al. in session S002 for technical details). Analysing a total of 1329 triggered signals transmitted by satellite, we were able to pick the onset times of 434 P waves, 95 PKP and 26 pP arrivals. For the events recorded by at least one Mermaid, these data were complemented with hand-picked onsets from stations on the islands, or on the continent nearby, for a total of 3892 onset times of rays crossing the mantle beneath the Galapagos, many of them with a small standard error estimated at 0.3 s. These data are used in a local inversion using ray theory, as is appropriate for onset times. To compensate for delays acquired in the rest of the Earth, the local model is embedded in a global inversion of P delays from the EHB data set most recently published by the ISC for 2000-2003. By selecting a strongly redundant subset of more than one million EHB P wave arrivals, we determined an objective standard error for these delays of 0.51 s using the method of Voronin et al. (GJI, 2014). Using a combination of (strong) smoothing and (weak) damping, we force the tomographic model to fit the data close to the level of the estimated standard errors. Preliminary images obtained at the time of writing of this abstract indicate a deep-reaching plume that is stronger in the lower mantle than near the surface. Most importantly, the experiment shows how even a limited number of Mermaids can contribute a significant gain in resolution. This is a direct consequence of the fact that they float with abyssal currents, thus avoiding redundancy in raypaths even for aftershocks. The final tomographic images and an analysis of their significance will be the subject of the presentation.

  13. Direct ambient noise tomography for 3-D near surface shear velocity structure: methodology and applications

    NASA Astrophysics Data System (ADS)

    Yao, H.; Fang, H.; Li, C.; Liu, Y.; Zhang, H.; van der Hilst, R. D.; Huang, Y. C.

    2014-12-01

    Ambient noise tomography has provided essential constraints on crustal and uppermost mantle shear velocity structure in global seismology. Recent studies demonstrate that high-frequency (e.g., ~ 1 Hz) surface waves between receivers at short distances can be successfully retrieved from ambient noise cross-correlation and then used for imaging near-surface or shallow crustal shear velocity structures. This approach provides important information for strong ground motion prediction in seismically active areas and for overburden structure characterization in oil and gas fields. Here we propose a new tomographic method to invert all surface wave dispersion data for 3-D variations of shear wavespeed without the intermediate step of phase or group velocity maps. The method uses frequency-dependent propagation paths and a wavelet-based sparsity-constrained tomographic inversion. A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers. This avoids the assumption of great-circle propagation that is used in most surface wave tomographic studies, but which is not appropriate in complex media. The wavelet coefficients of the velocity model are estimated with an iteratively reweighted least squares (IRLS) algorithm, and upon iteration the surface wave ray paths and the data sensitivity matrix are updated from the newly obtained velocity model. We apply this new method to determine the 3-D near-surface wavespeed variations in the Taipei basin of Taiwan, the Hefei urban area and a shale and gas production field in China, using the high-frequency interstation Rayleigh wave dispersion data extracted from ambient noise cross-correlation. The results reveal strong effects of off-great-circle propagation of high-frequency surface waves in these regions, which exhibit shear wavespeed variations above 30%. The proposed approach is more efficient and robust than the traditional two-step surface wave tomography for imaging complex structures. In the future, approximate 3-D sensitivity kernels for dispersion data will be incorporated to account for finite-frequency effects of surface wave propagation. In addition, our approach provides a consistent framework for joint inversion of surface wave dispersion and body wave traveltime data for 3-D Vp and Vs structures.
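
    As a generic illustration of the sparsity-constrained step, the sketch below runs an IRLS loop for an l1-type penalty on the (wavelet-domain) model coefficients; in the actual method the ray paths, and hence the matrix G, are recomputed between iterations, which is only indicated by a comment here.

      import numpy as np

      def irls_sparse(G, d, lam=0.1, n_iter=20, eps=1e-6):
          """IRLS for a least-squares misfit with an l1-type sparsity penalty on m."""
          m = np.zeros(G.shape[1])
          GtG, Gtd = G.T @ G, G.T @ d
          for _ in range(n_iter):
              w = 1.0 / (np.abs(m) + eps)                      # reweighting from current coefficients
              m = np.linalg.solve(GtG + lam * np.diag(w), Gtd)
              # in the tomography above, ray paths and G would be updated here
          return m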

  14. Children's Understanding of the Arithmetic Concepts of Inversion and Associativity

    ERIC Educational Resources Information Center

    Robinson, Katherine M.; Ninowski, Jerilyn E.; Gray, Melissa L.

    2006-01-01

    Previous studies have shown that even preschoolers can solve inversion problems of the form a + b - b by using the knowledge that addition and subtraction are inverse operations. In this study, a new type of inversion problem of the form d x e [divided by] e was also examined. Grade 6 and 8 students solved inversion problems of both types as well…

  15. Single-shot ultrafast tomographic imaging by spectral multiplexing

    NASA Astrophysics Data System (ADS)

    Matlis, N. H.; Axley, A.; Leemans, W. P.

    2012-10-01

    Computed tomography has profoundly impacted science, medicine and technology by using projection measurements scanned over multiple angles to permit cross-sectional imaging of an object. The application of computed tomography to moving or dynamically varying objects, however, has been limited by the temporal resolution of the technique, which is set by the time required to complete the scan. For objects that vary on ultrafast timescales, traditional scanning methods are not an option. Here we present a non-scanning method capable of resolving structure on femtosecond timescales by using spectral multiplexing of a single laser beam to perform tomographic imaging over a continuous range of angles simultaneously. We use this technique to demonstrate the first single-shot ultrafast computed tomography reconstructions and obtain previously inaccessible structure and position information for laser-induced plasma filaments. This development enables real-time tomographic imaging for ultrafast science, and offers a potential solution to the challenging problem of imaging through scattering surfaces.

  16. Solid Solution Characterization in Metal by Original Tomographic Scanning Microwave Microscopy Technique

    NASA Astrophysics Data System (ADS)

    Bourillot, Eric; Vitry, Pauline; Optasanu, Virgil; Plassard, Cédric; Lacroute, Yvon; Montessin, Tony; Lesniewska, Eric

    A general challenge for metallic components is the need for materials research to improve the service lifetime of structural tanks or tubes subjected to harsh environments or to the storage medium for the products. One major problem is the formation of bubbles of the lightest chemical elements, or of other chemical associations, which can have a significant impact on the mechanical properties and structural stability of materials. The high migration mobility of these light chemical elements in solids presents a challenge for experimental characterization. Here, we present work relating to an original non-destructive tomographic technique with high spatial resolution, based on Scanning Microwave Microscopy (SMM), which is used to visualize the in-depth chemical composition of a solid solution of a light chemical element in a metal. The experiments showed the capacity of SMM for volume detection. Measurements performed at different frequencies give access to a tomographic study of the sample.

  17. Tomographic reconstruction of atmospheric turbulence with the use of time-dependent stochastic inversion.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Ziemann, A; Wilson, D Keith; Arnold, K; Barth, M

    2007-09-01

    Acoustic travel-time tomography allows one to reconstruct temperature and wind velocity fields in the atmosphere. In a recently published paper [S. Vecherin et al., J. Acoust. Soc. Am. 119, 2579 (2006)], a time-dependent stochastic inversion (TDSI) was developed for the reconstruction of these fields from travel times of sound propagation between sources and receivers in a tomography array. TDSI accounts for the correlation of temperature and wind velocity fluctuations both in space and time and therefore yields more accurate reconstruction of these fields in comparison with algebraic techniques and regular stochastic inversion. To use TDSI, one needs to estimate the spatial-temporal covariance functions of temperature and wind velocity fluctuations. In this paper, these spatial-temporal covariance functions are derived for locally frozen turbulence, which is a more general concept than the widely used hypothesis of frozen turbulence. The developed theory is applied to the reconstruction of temperature and wind velocity fields in the acoustic tomography experiment carried out by the University of Leipzig, Germany. The reconstructed temperature and velocity fields are presented, and errors in the reconstruction of these fields are studied.
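
    For orientation, the stochastic-inversion estimate underlying TDSI can be written, for fluctuations about the mean fields, in the generic Gauss-Markov form

      \hat{\mathbf{m}} = \mathbf{R}_{md}\,\mathbf{R}_{dd}^{-1}\,\mathbf{d},
      \qquad
      \mathbf{R}_{dd} = \mathbf{R}_{\mathrm{signal}} + \mathbf{R}_{\mathrm{noise}},

    where d stacks the travel-time data, R_md is the cross-covariance between the fields to be reconstructed and the data, and R_dd is the data covariance. The TDSI-specific ingredient discussed above is that d gathers data from several observation times and the covariances are spatial-temporal, which is where the locally frozen turbulence model enters.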

  18. Comparative evolution of the inverse problems (Introduction to an interdisciplinary study of the inverse problems)

    NASA Technical Reports Server (NTRS)

    Sabatier, P. C.

    1972-01-01

    The progressive realization of the consequences of nonuniqueness implies an evolution of both the methods and the centers of interest in inverse problems. This evolution is schematically described together with the various mathematical methods used. A comparative description is given of inverse methods in scientific research, with examples taken from mathematics, quantum and classical physics, seismology, transport theory, radiative transfer, electromagnetic scattering, electrocardiology, etc. It is hoped that this paper will pave the way for an interdisciplinary study of inverse problems.

  19. Broad-band Lg Attenuation Tomography in Eastern Eurasia and the Resolution, Uncertainty and Data Prediction

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Xu, X.

    2017-12-01

    Broad-band Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods, respectively. Because path 1/Q are computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical spreading correction, they are subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel time residuals). We thus rigorously analyze the statistics of these errors, collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz, and reject about 15% of the measurements as outliers in each frequency band. The resultant variance of Δ/Q decreases with frequency as 1/f². The 1/Q tomography using screened data is now a stochastic inverse problem whose solutions approximate the means of Gaussian random variables, and the model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but the path coverage is rather sparse due to the very strict recording geometry. The TS method absorbs the effects of non-unit site response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected from the path 1/Q of TS to make them also free of site effects. The site-corrected TS data substantially improve path coverage, allowing us to solve for 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are first quantitatively assessed by spread functions (computed from the resolution matrix) and the covariance matrix. The reliably retrieved Q models correlate well with the distinct tectonic blocks featured by the most recent major deformations and vary with frequency. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation, for which a reliable uncertainty estimate is especially important.
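
    The SVD-based step described above, with its resolution and covariance matrices, can be sketched generically as follows (truncated SVD with p retained singular values and uncorrelated data errors of standard deviation sigma_d; the actual parameterization and weighting of the study are not reproduced here).

      import numpy as np

      def tsvd_tomography(G, d, sigma_d, p):
          """Truncated-SVD solution with model resolution and covariance matrices."""
          U, s, Vt = np.linalg.svd(G, full_matrices=False)
          Up, sp, Vp = U[:, :p], s[:p], Vt[:p].T
          m = Vp @ (Up.T @ d / sp)                              # generalized-inverse solution
          R = Vp @ Vp.T                                         # model resolution matrix
          Cm = sigma_d**2 * Vp @ np.diag(1.0 / sp**2) @ Vp.T    # posterior model covariance
          return m, R, Cm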

  20. Effects of the measurement configuration in GPR prospecting

    NASA Astrophysics Data System (ADS)

    Persico, Raffaele; Soldovieri, Francesco

    2017-04-01

    The measurement configuration is an issue of great interest in problems of inverse scattering in general, and in particular in problems regarding GPR data. The measurement configuration has an influence on the amount of retrievable information [1-2] and can be a way to achieve an intrinsic two-dimensional filtering of the data [3], possibly accounting for the characteristics of the exploited antennas too [4]. However, no filter is able to erase exactly the undesired contribution to the comprehensive signal while leaving unperturbed the useful part of the gathered datum. In other words, any filtering of the data (including that implicitly imposed through the measurement configuration) has some price in terms of loss or distortion of the received information, and therefore it has to be applied only when needed and only at the right degree of intensity. In particular, differential measurement configurations have been introduced in the last few years, especially with interest in the field of detection of UXO [5-6]. The filtering effects in some differential configurations are not immediately understood, but require some deeper reasoning. In particular, the theory of diffraction tomography allows one to quantify the spatial frequencies retrievable under the measurement configuration at hand, and so to quantify the filtering effect of the differential configurations. Examples will be shown at the conference, regarding both a horizontal and a vertical differential configuration. References: [1] R. Persico, R. Bernini, F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. on Antennas and Prop., vol. 53, n. 6, pp. 1875-1886, June 2005. [2] R. Persico, "On the role of measurement configuration in contactless GPR data processing by means of linear inverse scattering", IEEE Trans. on Antennas and Prop., vol. 54, n. 7, pp. 2062-2071, July 2006. [3] R. Persico, F. Soldovieri, "Effects of the background removal in linear inverse scattering", IEEE Trans. on Geos. and Rem. Sens., vol. 46, n. 4, pp. 1104-1114, April 2008. [4] F. Soldovieri, R. Persico and G. Leone, "Effect of source and receiver radiation characteristics in subsurface prospecting within the DBA", Radio Science, vol. 40, RS3006, May 2005. [5] R. Persico, F. Soldovieri, "A microwave tomography approach for a differential configuration in GPR prospecting", IEEE Trans. on Antennas and Prop., vol. 54, n. 11, pp. 3541-3548, November 2006. [6] R. Persico, G. Pochanin, V. Ruban, I. Catapano, F. Soldovieri, "Performances of a microwave tomographic algorithm for GPR systems working in differential configuration", IEEE JSTARS, vol. 9, n. 4, pp. 1343-1356, April 2016.

  1. Crustal Structure of the PARANÁ Basin from Ambient Noise Tomography

    NASA Astrophysics Data System (ADS)

    Collaço, B.; Assumpcao, M.; Rosa, M. L.; Sanchez, G.

    2013-12-01

    Previous surface-wave tomography in South America (SA) (e.g., Feng et al., 2004; 2007) mapped the main large-scale features of the continent, such as the high lithospheric velocities in cratonic areas and low velocities in the Patagonian province. However, more detailed features, such as the Paraná Basin, have not been mapped with good resolution because of poor path coverage, i.e. classic surface-wave tomography has low resolution in low-seismicity areas like Brazil and Eastern Argentina. Crustal structure in Southern Brazil is poorly known. Most paths used by Feng et al. (2007) in this region are roughly parallel, which prevents good spatial resolution in tomographic inversions. This work is part of a major project that will increase knowledge of crustal structure in Southern Brazil and Eastern Argentina and is being carried out by IAG-USP (Brazil) in collaboration with UNLP and INPRES (Argentina). To improve resolution for the Paraná Basin we used inter-station dispersion curves derived from correlation of ambient noise for new stations deployed with the implementation of the Brazilian Seismic Network (Pirchiner et al. 2011). This technique, known as ambient noise tomography (ANT), was first applied by Shapiro et al. (2005) and is now expanding rapidly, especially in areas with a high density of seismic stations (e.g. Bensen et al. 2007, Lin et al. 2008, Moschetti et al. 2010). ANT is a well-established method to estimate short-period (< 20 s) and intermediate-period (20-50 s) surface wave speeds at both regional and continental scales (Lin et al. 2008). ANT data processing in this work was similar to that described by Bensen et al. 2007, in four major steps, with the addition of a data inversion step. Group velocities between pairs of stations were derived from correlation of two years of ambient noise in the period range 5 to 60 s. The dispersion curve measurements were made using a modified version of the PGSWMFA (PGplot Surface Wave Multiple Filter Analysis) code, designed by Chuck Ammon (St. Louis University) and successfully applied by Pasyanos et al. (2001). Our modified version is no longer event-based and now works with station pairs. For the tomographic group velocity maps, we used the conjugate gradient method with 2nd derivative smoothing applied by Pasyanos et al. 2001. The group velocity maps were generated with a one-degree grid. For the tomographic inversion, we also added data derived from traditional dispersion measurements for earthquakes in SA. The velocity maps obtained for periods of 10 to 100 s correspond generally well with data from previous studies (Feng et al, 2007), validating the use of ANT and contributing to increased resolution of tomography data in SA. The inversion maps obtained with 2nd derivative smoothing are more unstable at boundary zones for the inversion of sediments and crustal thickness. This can be explained by the smoothness factor, which is not reduced at expected discontinuities such as ocean/continent boundaries. As the steps of data processing are well defined and independent, new paths will be added to the initial database as new stations are deployed with the progress of the Brasis Project (Pirchiner et al. 2011), increasing the resolution and reliability of the results. This work is funded by Petrobras with additional support from CNPq and FAPESP.

  2. Statistical and operational considerations for designs for x-ray tomographic spectrophotometry to detect, localize, and classify foreign objects in various systems

    NASA Astrophysics Data System (ADS)

    Fennelly, Alphonsus J.; Fry, Edward L.; Zukic, Muamer; Wilson, Michele M.; Janik, Tadeusz J.; Torr, Douglas G.

    1994-11-01

    In six companion papers we discuss a capability for x-ray tomographic spectrophotometry at three energy ranges to observe foreign objects in various systems using a novel x-ray optical and photometric approach. We describe new types of thin-film x-ray reflecting filters to provide energy-specific optical trains, inserted into existing x-ray interrogation systems. That is complemented by performing tomographic imaging at a few, to several, energies in each case. That provides a full tomographic and spectrophotometric analysis. Foreign objects can then be detected, localized, discriminated, and classified, so that they may be dealt with by excision and replacement with benign system elements. We analyze statistical and operational concerns leading to the design of three systems. The first operates at x-ray energies of 1 - 10 keV; it deals with defects in microelectronic integrated circuits. The second operates at x-ray energies of 10 - 30 keV; it deals with defects in human tissue. The chemical specificity and image resolution of the system will allow identification, localization, and mensuration of tumors without the need of biopsy. The third system, on which we concentrate this discussion, operates at x-ray energies of 30 - 70 keV; it deals with the presence in transportation systems of explosive devices, and contraband materials and objects in luggage and cargo. We present the analysis of the statistical features of the detection problem in these types of systems, discussing the operational constraints which limit system performance. After considering the multivariate, multisignature approach to the problem, we discuss the tomographic and spectrophotometric approach, which yields a better solution to the detection problem within the operational constraints.

  3. The preliminary results: Internal seismic velocity structure imaging beneath Mount Lokon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Firmansyah, Rizky, E-mail: rizkyfirmansyah@hotmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id; Kristianto, E-mail: kris@vsi.esdm.go.id

    2015-04-24

    Historical records indicate that before the 17th century, Mount Lokon had been dormant for approximately 400 years. In the years between 1350 and 1400, an eruption was recorded at Empung, coming from Mount Lokon's central crater. Subsequently, from 1750 to 1800, Mount Lokon erupted again, causing soil damage and casualties. After 1949, Mount Lokon's eruption frequency increased dramatically: the eruption interval varies between 1 - 5 years, with an average interval of 3 years, and the rest interval ranged from 8 - 64 years. Then, on June 26th, 2011, a standby alert was set by the Center for Volcanology and Geological Hazard Mitigation. Peak activity occurred on July 4th, 2011, and Mount Lokon erupted continuously until August 28th, 2011. In this study, we carefully analyzed micro-earthquake waveforms and determined the hypocenter locations of those events. We then conducted a travel time seismic tomographic inversion using the SIMULPS12 method to determine Vp, Vs and Vp/Vs ratio structures beneath Lokon volcano in order to enhance our understanding of the subsurface geological structure. In the tomographic inversion, we started from a 1-D seismic velocity model obtained with the VELEST33 method. Our preliminary results show that low Vp, low Vs, and high Vp/Vs are observed beneath Mount Lokon-Empung, which may be associated with a weak zone or hot material zones. However, in this study we used only a few stations to record micro-earthquake events, so we suggest that adding seismometers in future tomographic studies, in order to improve ray coverage in the region, is well justified.

  4. BOOK REVIEW: Inverse Problems. Activities for Undergraduates

    NASA Astrophysics Data System (ADS)

    Yamamoto, Masahiro

    2003-06-01

    This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various contexts. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: `If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight into the nature of inverse problems and the appropriate mode of thought, chapter 1 offers historical vignettes, most of which have played an essential role in the development of natural science. These vignettes cover the first successful application of `non-destructive testing' by Archimedes (page 4) via Newton's laws of motion up to literary tomography, and readers will be able to enjoy a wide overview of inverse problems. Therefore, as the author asks, the reader should not skip this chapter. This may not be hard to do, since the headings of the sections are quite intriguing (`Archimedes' Bath', `Another World', `Got the Time?', `Head Games', etc). The author embarks on the technical approach to inverse problems in chapter 2. He has elegantly designed each section with a guide specifying course level, objective, mathematical and scientific background and appropriate technology (e.g. types of calculators required). The guides are designed such that teachers may be able to construct effective and attractive courses by themselves. The book is not intended to offer one rigidly determined course, but should be used flexibly and independently according to the situation. Moreover, every section closes with activities which can be chosen according to the students' interests and levels of ability. Some of these exercises do not have ready solutions, but require long-term study, so readers are not required to solve all of them. After chapter 5, which contains discrete inverse problems such as the algebraic reconstruction technique and the Backus-Gilbert method, there are answers and commentaries to the activities. Finally, scripts in MATLAB are attached, although they can also be downloaded from the author's web page (http://math.uc.edu/~groetsch/). This book is aimed at students but it will be very valuable to researchers wishing to retain a wide overview of inverse problems in the midst of busy research activities. A Japanese version was published in 2002.

  5. Assessment of crustal velocity models using seismic refraction and reflection tomography

    NASA Astrophysics Data System (ADS)

    Zelt, Colin A.; Sain, Kalachand; Naumenko, Julia V.; Sawyer, Dale S.

    2003-06-01

    Two tomographic methods for assessing velocity models obtained from wide-angle seismic traveltime data are presented through four case studies. The modelling/inversion of wide-angle traveltimes usually involves some aspects that are quite subjective. For example: (1) identifying and including later phases that are often difficult to pick within the seismic coda, (2) assigning specific layers to arrivals, (3) incorporating pre-conceived structure not specifically required by the data and (4) selecting a model parametrization. These steps are applied to maximize model constraint and minimize model non-uniqueness. However, these steps may cause the overall approach to appear ad hoc, and thereby diminish the credibility of the final model. The effect of these subjective choices can largely be addressed by estimating the minimum model structure required by the least subjective portion of the wide-angle data set: the first-arrival times. For data sets with Moho reflections, the tomographic velocity model can be used to invert the PmP times for a minimum-structure Moho. In this way, crustal velocity and Moho models can be obtained that require the least amount of subjective input, and the model structure that is required by the wide-angle data with a high degree of certainty can be differentiated from structure that is merely consistent with the data. The tomographic models are not intended to supersede the preferred models, since the latter model is typically better resolved and more interpretable. This form of tomographic assessment is intended to lend credibility to model features common to the tomographic and preferred models. Four case studies are presented in which a preferred model was derived using one or more of the subjective steps described above. This was followed by conventional first-arrival and reflection traveltime tomography using a finely gridded model parametrization to derive smooth, minimum-structure models. The case studies are from the SE Canadian Cordillera across the Rocky Mountain Trench, central India across the Narmada-Son lineament, the Iberia margin across the Galicia Bank, and the central Chilean margin across the Valparaiso Basin and a subducting seamount. These case studies span the range of modern wide-angle experiments and data sets in terms of shot-receiver spacing, marine and land acquisition, lateral heterogeneity of the study area, and availability of wide-angle reflections and coincident near-vertical reflection data. The results are surprising given the amount of structure in the smooth, tomographically derived models that is consistent with the more subjectively derived models. The results show that exploiting the complementary nature of the subjective and tomographic approaches is an effective strategy for the analysis of wide-angle traveltime data.

  6. Visual computed tomographic scoring of emphysema and its correlation with its diagnostic electrocardiographic sign: the frontal P vector.

    PubMed

    Chhabra, Lovely; Sareen, Pooja; Gandagule, Amit; Spodick, David H

    2012-03-01

    Verticalization of the frontal P vector in patients older than 45 years is virtually diagnostic of pulmonary emphysema (sensitivity, 96%; specificity, 87%). We investigated the correlation between the P vector and the computed tomographic visual score of emphysema (VSE) in patients with an established diagnosis of chronic obstructive pulmonary disease/emphysema. High-resolution computed tomographic scans of 26 patients with emphysema (age >45 years) were reviewed to assess the type and extent of emphysema using subjective visual scoring. Electrocardiograms were independently reviewed to determine the frontal P vector. The P vector and VSE were compared for statistical correlation. Both the P vector and VSE were also directly compared with the forced expiratory volume in 1 second. The VSE and the orientation of the P vector (ÂP) had an overall significant positive correlation (r = +0.68; P = .0001) in all patients, but the correlation was very strong in patients with predominant lower-lobe emphysema (r = +0.88; P = .0004). Forced expiratory volume in 1 second and ÂP had an almost linear inverse correlation in predominant lower-lobe emphysema (r = -0.92; P < .0001). Orientation of the P vector positively correlates with visually scored emphysema. Both ÂP and VSE are strong reflectors of qualitative lung function in patients with predominant lower-lobe emphysema. A combination of a more vertical ÂP and predominant lower-lobe emphysema reflects severe obstructive lung dysfunction. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. A New Comprehensive Model for Crustal and Upper Mantle Structure of the European Plate

    NASA Astrophysics Data System (ADS)

    Morelli, A.; Danecek, P.; Molinari, I.; Postpischl, L.; Schivardi, R.; Serretti, P.; Tondi, M. R.

    2009-12-01

    We present a new comprehensive model of crustal and upper mantle structure of the whole European Plate — from the North Atlantic ridge to the Urals, and from North Africa to the North Pole — describing seismic speeds (P and S) and density. Our description of crustal structure merges information from previous studies: large-scale compilations, seismic prospecting, receiver functions, inversion of surface wave dispersion measurements and Green functions from noise correlation. We use a simple description of crustal structure, with laterally varying sediment and crystalline layer thicknesses and seismic parameters. Most of the original information refers to P-wave speed, from which we derive S speed and density through scaling relations. This a priori crustal model by itself improves the overall fit to observed Bouguer anomaly maps, as derived from GRACE satellite data, over CRUST2.0. The new crustal model is then used as a constraint in the inversion for mantle shear wave speed, based on fitting Love and Rayleigh surface wave dispersion. In the inversion for transversely isotropic mantle structure, we use group speed measurements made on European event-to-station paths, and use a global a priori model (S20RTS) to ensure a fair rendition of earth structure at depth and in border areas with little coverage from our data. The new mantle model markedly improves over global S models in the imaging of shallow asthenospheric (slow) anomalies beneath the Alpine mobile belt, and of fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We map compressional wave speed by inverting ISC travel times (reprocessed by Engdahl et al.) with a nonlinear inversion scheme making use of finite-difference travel time calculation. The inversion is based on an a priori model obtained by scaling the 3D mantle S-wave speed to P. The new model substantially confirms images of descending lithospheric slabs and back-arc shallow asthenospheric regions, shown in other more local high-resolution tomographic studies, but covers the whole range of the European Plate. We also obtain three-dimensional mantle density structure by inversion of GRACE Bouguer anomalies, locally adjusting density and the scaling relation between seismic wave speeds and density. We validate the new comprehensive model through comparison of recorded seismograms with numerical simulations based on SPECFEM3D. This work is a contribution towards the definition of a reference earth model for Europe. To this end, in order to improve model dissemination and comparison, we propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages. We provide tools for manipulating and visualising models described in this standard format in Google Earth and GEON IDV.
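    As a rough illustration of what such a JSON-based exchange format for a gridded tomographic model could look like, the short Python sketch below writes and reads a small model. The field names (model_name, parameter, grid, values) are invented for illustration and are not the format actually proposed by the authors.

      import json

      # Hypothetical JSON layout for a gridded tomographic model; all field
      # names and values below are illustrative assumptions.
      model = {
          "model_name": "example_upper_mantle_model",
          "parameter": "vs_perturbation_percent",
          "grid": {
              "lat": [35.0, 36.0, 37.0],          # degrees north
              "lon": [-10.0, -9.0, -8.0],         # degrees east
              "depth_km": [50.0, 100.0, 150.0],
          },
          # values[i][j][k] corresponds to (lat[i], lon[j], depth_km[k])
          "values": [[[0.5, -1.2, 0.3],
                      [0.1, 0.0, -0.4],
                      [1.1, -0.7, 0.2]] for _ in range(3)],
      }

      with open("example_model.json", "w") as f:
          json.dump(model, f, indent=2)

      # Any language with a JSON parser can read the model back:
      with open("example_model.json") as f:
          loaded = json.load(f)
      print(loaded["parameter"], len(loaded["grid"]["depth_km"]), "depth levels")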

  8. Shear wave velocity structure in North America from large-scale waveform inversions of surface waves

    USGS Publications Warehouse

    Alsina, D.; Woodward, R.L.; Snieder, R.K.

    1996-01-01

    A two-step nonlinear and linear inversion is carried out to map the lateral heterogeneity beneath North America using surface wave data. The lateral resolution for most areas of the model is of the order of several hundred kilometers. The most obvious feature in the tomographic images is the rapid transition between low velocities in the tectonically active region west of the Rocky Mountains and high velocities in the stable central and eastern shield of North America. The model also reveals smaller-scale heterogeneous velocity structures. A high-velocity anomaly is imaged beneath the state of Washington that could be explained as the subducting Juan de Fuca plate beneath the Cascades. A large low-velocity structure extends along the coast from the Mendocino to the Rivera triple junction and into the continental interior across the southwestern United States and northwestern Mexico. Its shape changes notably with depth. This anomaly largely coincides with the part of the margin where no lithosphere is consumed, since subduction has been replaced by a transform fault. Evidence for a discontinuous subduction of the Cocos plate along the Middle American Trench is found. In central Mexico a transition is visible from low velocities across the Trans-Mexican Volcanic Belt (TMVB) to high velocities beneath the Yucatan Peninsula. Two elongated low-velocity anomalies, beneath the Yellowstone Plateau and the eastern Snake River Plain volcanic system and beneath central Mexico and the TMVB, seem to be associated with magmatism and partial melting. Another low-velocity feature is seen at depths of approximately 200 km beneath Florida and the Atlantic Coastal Plain. The inversion technique used is based on a linear surface wave scattering theory, which gives tomographic images of the relative phase velocity perturbations in four period bands ranging from 40 to 150 s. In order to find a smooth reference model, a nonlinear inversion based on ray theory is first performed. After correcting for crustal thickness, the phase velocity perturbations obtained from the subsequent linear waveform inversion for the different period bands are converted to a three-layer model of S velocity perturbations (layer 1, 25-100 km; layer 2, 100-200 km; layer 3, 200-300 km). We have applied this method to 275 high-quality Rayleigh waves recorded by a variety of instruments in North America (IRIS/USGS, IRIS/IDA, TERRAscope, RSTN). Sensitivity tests indicate that the lateral resolution is especially good in the densely sampled western continental United States, Mexico, and the Gulf of Mexico.
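    As a toy illustration of the final step described above (mapping phase-velocity perturbations in several period bands to a few layers of S-velocity perturbation), the following Python sketch solves a small damped least-squares system. The sensitivity-kernel values, data and damping are invented for illustration and are not those of the study.

      import numpy as np

      # Toy conversion dc/c = K * dVs/Vs from four period bands to three layers,
      # solved by damped least squares. Kernel values are illustrative only.
      K = np.array([[0.6, 0.3, 0.1],    # 40 s: mostly sensitive to layer 1
                    [0.4, 0.4, 0.2],    # 70 s
                    [0.2, 0.5, 0.3],    # 100 s
                    [0.1, 0.4, 0.5]])   # 150 s: deeper sensitivity
      dc_over_c = np.array([-0.02, -0.015, -0.01, -0.005])   # observed perturbations
      alpha = 0.01                                           # damping factor
      dvs = np.linalg.solve(K.T @ K + alpha * np.eye(3), K.T @ dc_over_c)
      print(dvs)   # S-velocity perturbation in layers 1-3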

  9. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol and the possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that, we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model with a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.

  10. Tomographic PIV: particles versus blobs

    NASA Astrophysics Data System (ADS)

    Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien

    2014-08-01

    We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single-voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built from specific samples of the system's point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, and therefore favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
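    For readers unfamiliar with the multiplicative algebraic reconstruction technique mentioned above, the following Python sketch shows a generic MART iteration on a toy linear forward model. The weight matrix and relaxation settings are made-up examples, not the PSF-sampled model of the paper.

      import numpy as np

      def mart(W, y, n_iter=50, mu=1.0, eps=1e-12):
          # Multiplicative ART: rescale voxel intensities x so the projections
          # W @ x match the measured pixel intensities y. W[i, j] is the
          # (non-negative) weight of voxel j in pixel i.
          x = np.ones(W.shape[1])              # strictly positive starting volume
          for _ in range(n_iter):
              for i in range(W.shape[0]):
                  proj = W[i] @ x
                  if proj > eps and y[i] > eps:
                      # multiplicative correction raised to the weight, relaxed by mu
                      x *= (y[i] / proj) ** (mu * W[i])
          return x

      # Toy example: 2 "pixels" observing 3 "voxels"
      W = np.array([[1.0, 0.5, 0.0],
                    [0.0, 0.5, 1.0]])
      x_true = np.array([0.0, 2.0, 0.0])       # one bright voxel
      y = W @ x_true
      print(mart(W, y))                        # non-negative solution consistent with y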

  11. Steady shape analysis of tomographic pumping tests for characterization of aquifer heterogeneities

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Zhan, Xiaoyong; Butler, James J.; Zheng, Li

    2002-01-01

    Hydraulic tomography, a procedure involving the performance of a suite of pumping tests in a tomographic format, provides information about variations in hydraulic conductivity at a level of detail not obtainable with traditional well tests. However, analysis of transient data from such a suite of pumping tests represents a substantial computational burden. Although steady state responses can be analyzed to reduce this computational burden significantly, the time required to reach steady state will often be too long for practical applications of the tomography concept. In addition, uncertainty regarding the mechanisms driving the system to steady state can propagate to adversely impact the resulting hydraulic conductivity estimates. These disadvantages of a steady state analysis can be overcome by exploiting the simplifications possible under the steady shape flow regime. At steady shape conditions, drawdown varies with time but the hydraulic gradient does not. Thus transient data can be analyzed with the computational efficiency of a steady state model. In this study, we demonstrate the value of the steady shape concept for inversion of hydraulic tomography data and investigate its robustness with respect to improperly specified boundary conditions.

  12. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  13. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  14. Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters

    DTIC Science & Technology

    2017-03-07

    Final Technical Report (with SF 298) for Dr. Erin E. Hackett's ONR grant entitled Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters, covering Dec 2012 - Dec 2016 (report dated 07/03/2017). Abstract: This report describes research results related to the development and implementation of an inverse problem approach for

  15. 4D-tomographic reconstruction of water vapor using the hybrid regularization technique with application to the North West of Iran

    NASA Astrophysics Data System (ADS)

    Adavi, Zohre; Mashhadi-Hossainali, Masoud

    2015-04-01

    Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to atmospheric phenomena above the surface of the earth, depends on both space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and in related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. In this approach, inversion techniques are used to model water vapor by analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques. This method benefits from the advantages of both iterative and direct techniques. Moreover, it is independent of initial values. Based on this property and using an appropriate model resolution, the number of model elements that are not constrained by GPS measurements is first minimized, and water vapor density is then estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
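    A minimal illustration of Tikhonov-damped LSQR, the combination underlying the hybrid regularization described above, is sketched below in Python for a toy ill-conditioned system. The matrix, noise level and damping values are arbitrary and do not reproduce the authors' setup.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(0)

      # Toy "tomography" matrix: rays (rows) see voxels (columns)
      A = rng.random((40, 60))
      x_true = np.zeros(60)
      x_true[20:30] = 1.0                      # a block of elevated water vapor density
      b = A @ x_true + 0.01 * rng.standard_normal(40)

      # LSQR with Tikhonov damping: minimizes ||A x - b||^2 + damp^2 ||x||^2
      for damp in (0.0, 0.1, 1.0):
          x = lsqr(A, b, damp=damp)[0]
          misfit = np.linalg.norm(A @ x - b)
          print(f"damp={damp:4.1f}  misfit={misfit:6.3f}  ||x||={np.linalg.norm(x):6.3f}")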

  16. Constrained inversion as a hypothesis testing tool, what can we learn about the lithosphere?

    NASA Astrophysics Data System (ADS)

    Moorkamp, Max; Fishwick, Stewart; Jones, Alan G.

    2017-04-01

    Inversion of geophysical data constrained by a reference model is typically used to guide the inversion of low-resolution data towards a geologically plausible solution. For example, a migrated seismic section can provide the location of lithological boundaries for potential field inversions. Here we consider the inversion of long-period magnetotelluric data constrained by models generated through surface wave inversion. In this case, we do not consider the surface wave model to be inherently better in any sense and do not simply want to guide the magnetotelluric inversion towards this model; rather, we want to test the hypothesis that both datasets can be explained by models with similar structure. If the hypothesis test is successful, i.e. we can fit the observations with a conductivity model that is structurally similar to the seismic model, we have found an alternative explanation compared to the individual inversion, and we can use the differences to learn about the resolution of the magnetotelluric data and improve our interpretation. Conversely, if the test refutes our hypothesis of coincident structure, we have found features in the models that are sensed fundamentally differently by the two methods, which is potentially instructive on the nature of the anomalies. We use an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with a tomographic model for the region, to illustrate and test this approach. Here, various conductive structures have been identified that bridge the Moho. Furthermore, the thickness of the lithosphere inferred from the different methods differs. In both cases the question is to what extent this is a result of the ill-posed nature of inversion and to what extent these differences can be reconciled. Thus this dataset is an ideal test case for our hypothesis testing approach. Finally, we will demonstrate how we can use the results of the constrained inversion to extract conductivity-velocity relationships in the region and gain further insight into the composition and thermal structure of the lithosphere.

  17. Regional P-wave Tomography in the Caribbean Region for Plate Reconstruction

    NASA Astrophysics Data System (ADS)

    Li, X.; Bedle, H.; Suppe, J.

    2017-12-01

    The complex plate-tectonic interactions around the Caribbean Sea have been studied and interpreted by many researchers, but questions still remain regarding the formation and subduction history of the region. Here we report current progress towards creating a new regional tomographic model, with better lateral and spatial coverage and higher resolution than has been presented previously. This new model will provide improved constraints on the plate-tectonic evolution around the Caribbean Plate. Our three-dimensional velocity model is created using taut spline parameterization. The inversion is computed with the code of VanDecar (1991), which is based on the ray theory method. The seismic data used in this inversion are absolute P wave arrival times from over 700 global earthquakes that were recorded by over 400 near-Caribbean stations. Over 25,000 arrival times were picked and quality checked within the frequency band of 0.01-0.6 Hz using MATLAB GUI-based software named Crazyseismic. The picked seismic delay time data are analyzed and compared with other studies before performing the inversion, in order to examine the quality of our dataset. From our initial observations of the delay time data, the more equalized the ray azimuth coverage, the smaller the deviation of the observed travel times from the theoretical travel times. Networks around the NE and SE sides of the Caribbean Sea generally have better ray coverage and smaller delay times. Specifically, seismic rays reaching SE Caribbean networks, such as the XT network, generally pass through slabs under South America, Central America, the Lesser Antilles, the southwestern Caribbean, and the North Caribbean transform boundary, which leads to slightly positive average delay times. In contrast, the Puerto Rico network records seismic rays passing through regions that may lack slabs in the upper mantle and shows slightly negative or near-zero average delay times. These results agree with previous tomographic models. Based on our delay time observations, slabs and velocity structures near the east side of the Caribbean plate might be better imaged due to the denser ray coverage. More caution in selecting the seismic data for inversion on the western margin of the Caribbean will be required to avoid possible smearing effects and artifacts from unequal ray path distributions.

  18. A new approach for implementation of associative memory using volume holographic materials

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad; Pashaie, Ramin

    2012-02-01

    Associative memory, also known as fault-tolerant or content-addressable memory, has gained considerable attention in the last few decades. This memory possesses important advantages over the more common random access memories, since it provides the capability to correct faults and/or partially missing information in a given input pattern. There is general consensus that optical implementations of connectionist models and parallel processors, including associative memory, have a better record of success compared to their electronic counterparts. In this article, we describe a novel optical implementation of associative memory which not only has the advantage of all-optical learning and recall capabilities but can also be realized easily. We present a new approach, inspired by tomographic imaging techniques, for holographic implementation of associative memories. In this approach, a volume holographic material is sandwiched within a matrix of inputs (optical point sources) and outputs (photodetectors). The memory capacity is realized by the spatial modulation of the refractive index of the holographic material. Constructing the spatial distribution of the refractive index from an array of known inputs and outputs is formulated as an inverse problem consisting of a set of linear integral equations.

  19. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
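    The bias-variance behaviour described above can be reproduced on a toy linear imaging problem; the following Python sketch repeats noisy Tikhonov reconstructions and reports image bias, variance and mean-squared error as a function of the regularization parameter. The forward matrix and noise level are arbitrary assumptions, not the NIR diffusion model of the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy linear imaging model: data d = J @ x_true + noise, reconstructed by
      # Tikhonov regularization x_hat = (J^T J + alpha I)^-1 J^T d.
      J = rng.random((80, 50))
      x_true = np.sin(np.linspace(0, 3 * np.pi, 50))
      noise_sigma = 0.5

      def reconstruct(d, alpha):
          n = J.shape[1]
          return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ d)

      for alpha in (1e-3, 1e-1, 10.0):
          recons = []
          for _ in range(100):                 # 100 repeated noisy reconstructions
              d = J @ x_true + noise_sigma * rng.standard_normal(80)
              recons.append(reconstruct(d, alpha))
          recons = np.array(recons)
          bias2 = np.mean((recons.mean(axis=0) - x_true) ** 2)
          var = np.mean(recons.var(axis=0))
          print(f"alpha={alpha:7.3f}  bias^2={bias2:7.4f}  "
                f"variance={var:7.4f}  MSE={bias2 + var:7.4f}")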

  20. A field assessment of the value of steady shape hydraulic tomography for characterization of aquifer heterogeneities

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, James J.; Zhan, Xiaoyong; Knoll, Michael D.

    2007-01-01

    Hydraulic tomography is a promising approach for obtaining information on variations in hydraulic conductivity on the scale of relevance for contaminant transport investigations. This approach involves performing a series of pumping tests in a format similar to tomography. We present a field‐scale assessment of hydraulic tomography in a porous aquifer, with an emphasis on the steady shape analysis methodology. The hydraulic conductivity (K) estimates from steady shape and transient analyses of the tomographic data compare well with those from a tracer test and direct‐push permeameter tests, providing a field validation of the method. Zonations based on equal‐thickness layers and cross‐hole radar surveys are used to regularize the inverse problem. The results indicate that the radar surveys provide some useful information regarding the geometry of the K field. The steady shape analysis provides results similar to the transient analysis at a fraction of the computational burden. This study clearly demonstrates the advantages of hydraulic tomography over conventional pumping tests, which provide only large‐scale averages, and small‐scale hydraulic tests (e.g., slug tests), which cannot assess strata connectivity and may fail to sample the most important pathways or barriers to flow.

  1. Radial reflection diffraction tomography

    DOEpatents

    Lehman, Sean K.

    2012-12-18

    A wave-based tomographic imaging method and apparatus based upon one or more rotating, radially outward oriented transmitting and receiving elements have been developed for non-destructive evaluation. At successive angular locations at a fixed radius, a predetermined transmitting element can launch a primary field and one or more predetermined receiving elements can collect the backscattered field in a "pitch/catch" operation. A Hilbert space inverse wave (HSIW) algorithm can construct images of the received scattered energy waves using operating modes chosen for a particular application. Applications include improved intravascular imaging, borehole tomography, and non-destructive evaluation (NDE) of parts having existing access holes.

  2. Radial Reflection diffraction tomography

    DOEpatents

    Lehman, Sean K

    2013-11-19

    A wave-based tomographic imaging method and apparatus based upon one or more rotating, radially outward oriented transmitting and receiving elements have been developed for non-destructive evaluation. At successive angular locations at a fixed radius, a predetermined transmitting element can launch a primary field and one or more predetermined receiving elements can collect the backscattered field in a "pitch/catch" operation. A Hilbert space inverse wave (HSIW) algorithm can construct images of the received scattered energy waves using operating modes chosen for a particular application. Applications include improved intravascular imaging, borehole tomography, and non-destructive evaluation (NDE) of parts having existing access holes.

  3. Objectives and Layout of a High-Resolution X-ray Imaging Crystal Spectrometer for the Large Helical Device (LHD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitter, M; Gates, D; Monticello, D

    A high-resolution X-ray imaging crystal spectrometer, whose concept was tested on NSTX and Alcator C-Mod, is being designed for LHD. This instrument will record spatially resolved spectra of helium-like Ar16+ and provide ion temperature profiles with spatial and temporal resolutions of < 2 cm and ≥ 10 ms. The stellarator equilibrium reconstruction codes, STELLOPT and PIES, will be used for the tomographic inversion of the spectral data. The spectrometer layout and instrumental features are largely determined by the magnetic field structure of LHD.

  4. A Forward Glimpse into Inverse Problems through a Geology Example

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2012-01-01

    This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)

  5. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  6. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  7. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal to or less than 300x300x300 nodes; it under-samples the wavefield, reducing the number of stored time steps by an order of magnitude. For bigger models the wavefield is stored only at the boundaries of the model and then re-injected while the residuals are backpropagated, allowing the correlation to be computed 'on the fly'. In terms of computational resources, the elastic code is an order of magnitude more demanding than the equivalent acoustic code. We have combined shared-memory and distributed-memory parallelisation using OpenMP and MPI respectively. Thus, we take advantage of the increasingly common multi-core architecture processors. We have successfully applied our inversion algorithm to different realistic complex 3D models. The models had non-linear relations between pressure and shear wave velocities. The shorter wavelengths of the shear waves improve the resolution of the images obtained with respect to a purely acoustic approach.

  8. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  9. An inverse problem in thermal imaging

    NASA Technical Reports Server (NTRS)

    Bryan, Kurt; Caudill, Lester F., Jr.

    1994-01-01

    This paper examines uniqueness and stability results for an inverse problem in thermal imaging. The goal is to identify an unknown boundary of an object by applying a heat flux and measuring the induced temperature on the boundary of the sample. The problem is studied both in the case in which one has data at every point on the boundary of the region and the case in which only finitely many measurements are available. An inversion procedure is developed and used to study the stability of the inverse problem for various experimental configurations.

  10. Inverse problems in quantum chemistry

    NASA Astrophysics Data System (ADS)

    Karwowski, Jacek

    Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.

  11. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
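    The tradeoff between data misfit and solution norm mentioned above can be illustrated with a damped minimum-norm (stochastic-inverse-like) solution of a toy underdetermined system. The Python sketch below assumes white model and data covariances scaled by a single damping parameter, which is a simplification of the operators discussed in the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      # Underdetermined linear system G m = d (fewer data than model parameters)
      G = rng.random((10, 30))
      m_true = rng.standard_normal(30)
      d = G @ m_true + 0.05 * rng.standard_normal(10)

      # Damped minimum-norm solution, assuming white model and data covariances:
      # m = G^T (G G^T + theta^2 I)^-1 d; varying theta traces a tradeoff curve.
      for theta in (1e-3, 1e-1, 1.0):
          m = G.T @ np.linalg.solve(G @ G.T + theta**2 * np.eye(10), d)
          misfit = np.linalg.norm(G @ m - d)
          print(f"theta={theta:6.3f}  misfit={misfit:7.4f}  ||m||={np.linalg.norm(m):7.4f}")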

  12. Analysis of space telescope data collection system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    An analysis of the expected performance of the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.

  13. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  14. PREFACE: The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches

    NASA Astrophysics Data System (ADS)

    Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro

    2005-01-01

    The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai, from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized under the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, the second conference welcomed a total of 92 participants and arranged various talks which ranged from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems. Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in atmospheric sciences and oceanography. Last but not least is our gratitude. As editors we would like to express our sincere thanks to all the plenary and invited speakers, the members of the International Scientific Committee and the Advisory Board for the success of the conference, which has given rise to the present volume of selected papers. We would also like to thank Mr Wang Yanbo, Miss Wan Xiqiong and the graduate students at Fudan University for their effective work to make this conference a success. The conference was financially supported by the NSF of China, the Mathematical Center of the Ministry of Education of China, E-Institutes of Shanghai Municipal Education Commission (No E03004) and Fudan University, Grant 15340027 from the Japan Society for the Promotion of Science, and Grant 15654015 from the Ministry of Education, Culture, Sports, Science and Technology.

  15. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  16. A distribution-based parametrization for improved tomographic imaging of solute plumes

    USGS Publications Warehouse

    Pidlisecky, Adam; Singha, K.; Day-Lewis, F. D.

    2011-01-01

    Difference geophysical tomography (e.g. radar, resistivity and seismic) is used increasingly for imaging fluid flow and mass transport associated with natural and engineered hydrologic phenomena, including tracer experiments, in situ remediation and aquifer storage and recovery. Tomographic data are collected over time, inverted and differenced against a background image to produce 'snapshots' revealing changes to the system; these snapshots readily provide qualitative information on the location and morphology of plumes of injected tracer, remedial amendment or stored water. In principle, geometric moments (i.e. total mass, centres of mass, spread, etc.) calculated from difference tomograms can provide further quantitative insight into the rates of advection, dispersion and mass transfer; however, recent work has shown that moments calculated from tomograms are commonly biased, as they are strongly affected by the subjective choice of regularization criteria. Conventional approaches to regularization (Tikhonov) and parametrization (image pixels) result in tomograms which are subject to artefacts such as smearing or pixel estimates taking on the sign opposite to that expected for the plume under study. Here, we demonstrate a novel parametrization for imaging plumes associated with hydrologic phenomena. Capitalizing on the mathematical analogy between moment-based descriptors of plumes and the moment-based parameters of probability distributions, we design an inverse problem that (1) is overdetermined and computationally efficient because the image is described by only a few parameters, (2) produces tomograms consistent with expected plume behaviour (e.g. changes of one sign relative to the background image), (3) yields parameter estimates that are readily interpreted for plume morphology and offer direct insight into hydrologic processes and (4) requires comparatively few data to achieve reasonable model estimates. We demonstrate the approach in a series of numerical examples based on straight-ray difference-attenuation radar monitoring of the transport of an ionic tracer, and show that the methodology outlined here is particularly effective when limited data are available. © 2011 The Authors, Geophysical Journal International © 2011 RAS.
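    The geometric moments referred to above (total mass, centre of mass, spread) are straightforward to compute from a gridded difference tomogram; the short Python sketch below does so for a synthetic 2-D plume. The array and grid are invented for illustration only.

      import numpy as np

      # Synthetic 2-D "difference tomogram": a Gaussian plume of attenuation change
      nx, nz = 60, 40
      x, z = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
      plume = np.exp(-((x - 35) ** 2 / 50.0 + (z - 15) ** 2 / 20.0))

      m0 = plume.sum()                            # zeroth moment ("total mass")
      xc = (x * plume).sum() / m0                 # first moments: centre of mass
      zc = (z * plume).sum() / m0
      sx2 = ((x - xc) ** 2 * plume).sum() / m0    # second central moments: spread
      sz2 = ((z - zc) ** 2 * plume).sum() / m0

      print(f"mass={m0:.1f}  centre=({xc:.1f}, {zc:.1f})  spread=({sx2:.1f}, {sz2:.1f})")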

  17. Imaging of 3-D seismic velocity structure of Southern Sumatra region using double difference tomographic method

    NASA Astrophysics Data System (ADS)

    Lestari, Titik; Nugraha, Andri Dian

    2015-04-01

    The Southern Sumatra region has a high level of seismicity due to the influence of the subduction system, the Sumatra fault, the Mentawai fault and stretching zone activities. The seismic activities of the Southern Sumatra region are recorded by the Meteorological, Climatological and Geophysical Agency (MCGA) seismograph network. In this study, we used an earthquake data catalog compiled by MCGA for 3013 events from 10 seismic stations around the Southern Sumatra region for the time period of April 2009 - April 2014 in order to invert for the 3-D seismic velocity structure (Vp, Vs, and Vp/Vs ratio). We applied the double-difference seismic tomography method (tomoDD) to determine Vp, Vs and Vp/Vs ratio with hypocenter adjustment. For the inversion procedure, we started from the initial 1-D seismic velocity model AK135 and a constant Vp/Vs of 1.73. The synthetic travel times from source to receiver were calculated using the pseudo-bending ray tracing technique, while the main tomographic inversion was performed using the LSQR method. The model resolution was evaluated using a checkerboard test and the derivative weight sum (DWS). Our preliminary results show a low Vp and Vs anomaly region along Bukit Barisan, which may be associated with the weak zone of the Sumatran fault and the migration of partially melted material. Low velocity anomalies at 30-50 km depth in the fore-arc region may indicate hydrous material circulation caused by slab dehydration. We detected low seismicity in the fore-arc region that may indicate a seismic gap; it coincides with the contact zone between high and low velocity anomalies, and two large earthquakes (Jambi and Mentawai) also occurred at this velocity contrast.

  18. P and S velocity structure of the crust and the upper mantle beneath central Java from local tomography inversion

    NASA Astrophysics Data System (ADS)

    Koulakov, I.; Bohm, M.; Asch, G.; Lühr, B.-G.; Manzanares, A.; Brotopuspito, K. S.; Fauzi, Pak; Purbawinata, M. A.; Puspito, N. T.; Ratdomopurbo, A.; Kopp, H.; Rabbel, W.; Shevkunova, E.

    2007-08-01

    Here we present the results of local source tomographic inversion beneath central Java. The data set was collected by a temporary seismic network. More than 100 stations were operated for almost half a year. About 13,000 P and S arrival times from 292 events were used to obtain three-dimensional (3-D) Vp, Vs, and Vp/Vs models of the crust and the mantle wedge beneath central Java. Source location and determination of the 3-D velocity models were performed simultaneously based on a new iterative tomographic algorithm, LOTOS-06. Final event locations clearly image the shape of the subduction zone beneath central Java. The dipping angle of the slab increases gradually from almost horizontal to about 70°. A double seismic zone is observed in the slab between 80 and 150 km depth. The most striking feature of the resulting P and S models is a pronounced low-velocity anomaly in the crust, just north of the volcanic arc (Merapi-Lawu anomaly (MLA)). An algorithm for estimation of the amplitude value, which is presented in the paper, shows that the difference between the fore arc and MLA velocities at a depth of 10 km reaches 30% and 36% in P and S models, respectively. The value of the Vp/Vs ratio inside the MLA is more than 1.9. This shows a probable high content of fluids and partial melts within the crust. In the upper mantle we observe an inclined low-velocity anomaly which links the cluster of seismicity at 100 km depth with MLA. This anomaly might reflect ascending paths of fluids released from the slab. The reliability of all these patterns was tested thoroughly.

  19. Imaging of 3-D seismic velocity structure of Southern Sumatra region using double difference tomographic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lestari, Titik, E-mail: t2klestari@gmail.com; Faculty of Earth Science and Technology, Bandung Institute of Technology, Jalan Ganesa No.10, Bandung 40132; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    2015-04-24

    The Southern Sumatra region has a high level of seismicity due to the influence of the subduction system, the Sumatra fault, the Mentawai fault and stretching zone activities. The seismic activities of the Southern Sumatra region are recorded by the Meteorological, Climatological and Geophysical Agency (MCGA) seismograph network. In this study, we used an earthquake data catalog compiled by MCGA for 3013 events from 10 seismic stations around the Southern Sumatra region for the time period of April 2009 – April 2014 in order to invert for the 3-D seismic velocity structure (Vp, Vs, and Vp/Vs ratio). We applied the double-difference seismic tomography method (tomoDD) to determine Vp, Vs and Vp/Vs ratio with hypocenter adjustment. For the inversion procedure, we started from the initial 1-D seismic velocity model AK135 and a constant Vp/Vs of 1.73. The synthetic travel times from source to receiver were calculated using the pseudo-bending ray tracing technique, while the main tomographic inversion was performed using the LSQR method. The model resolution was evaluated using a checkerboard test and the derivative weight sum (DWS). Our preliminary results show a low Vp and Vs anomaly region along Bukit Barisan, which may be associated with the weak zone of the Sumatran fault and the migration of partially melted material. Low velocity anomalies at 30-50 km depth in the fore-arc region may indicate hydrous material circulation caused by slab dehydration. We detected low seismicity in the fore-arc region that may indicate a seismic gap; it coincides with the contact zone between high and low velocity anomalies, and two large earthquakes (Jambi and Mentawai) also occurred at this velocity contrast.

  20. An efficient algorithm for double-difference tomography and location in heterogeneous media, with an application to the Kilauea volcano

    USGS Publications Warehouse

    Monteiller, V.; Got, J.-L.; Virieux, J.; Okubo, P.

    2005-01-01

    Improving our understanding of crustal processes requires a better knowledge of the geometry and the position of geological bodies. In this study we have designed a method based upon double-difference relocation and tomography to image, as accurately as possible, a heterogeneous medium containing seismogenic objects. Our approach consisted not only of incorporating double differences in tomography but also of partly revisiting tomographic schemes to choose accurate and stable numerical strategies adapted to the use of cross-spectral time delays. We used a finite-difference solution to the eikonal equation for travel time computation and a Tarantola-Valette approach for both the classical and double-difference three-dimensional tomographic inversions to find accurate earthquake locations and seismic velocity estimates. We efficiently estimated the square root of the inverse of the model covariance matrix in the case of a Gaussian correlation function. This allows the use of correlation length and a priori model variance criteria to determine the optimal solution. Double-difference relocation of similar earthquakes is performed in the optimal velocity model, making absolute and relative locations less biased by the velocity model. Double-difference tomography is achieved by using high-accuracy time delay measurements. These algorithms have been applied to earthquake data recorded in the vicinity of Kilauea and Mauna Loa volcanoes for imaging the volcanic structures. Stable and detailed velocity models are obtained: the regional tomography unambiguously highlights the structure of the island of Hawaii, and the double-difference tomography shows a detailed image of the southern Kilauea caldera-upper east rift zone magmatic complex. Copyright 2005 by the American Geophysical Union.
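    As a small illustration of the covariance machinery mentioned above, the Python sketch below builds a Gaussian-correlation model covariance matrix on a 1-D grid and forms its symmetric square root by eigendecomposition (the inverse square root follows from the same decomposition, with eigenvalue thresholding, since such matrices are nearly singular). The correlation length and variance are arbitrary values, not those of the study.

      import numpy as np

      # 1-D grid of model nodes; Gaussian correlation with length L, variance sigma^2
      x = np.linspace(0.0, 10.0, 50)
      L, sigma = 2.0, 0.1                          # correlation length, a priori std
      C = sigma**2 * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * L**2))

      # Symmetric square root via eigendecomposition (C is symmetric positive semi-definite)
      w, V = np.linalg.eigh(C)
      C_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
      print(np.allclose(C_half @ C_half, C))       # True up to round-off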

  1. Teleseismic traveltime tomography of Jeju Island, South Korea

    NASA Astrophysics Data System (ADS)

    Song, J.; Rhie, J.; Kim, S.; Lee, S. H.

    2017-12-01

    Jeju Island is the largest volcanic island in South Korea, lying off the south coast of the Korean Peninsula. It is well known that the volcanism started in the Early Pleistocene (c. 1.7 Ma) and that subsequent eruptions during the Late Pleistocene to Holocene formed the bulk of the island, together with a number of small cones. However, the origin of the magma and the detailed mechanism of the eruptions have not been fully understood yet. To address these issues, we applied teleseismic travel time tomography to image the underlying crust and upper mantle of the island. We carefully analyzed 185 teleseismic earthquakes (5.5 < Mw < 7.9) that occurred between Oct. 2013 and Nov. 2015. Broadband waveforms recorded by 23 seismic stations covering the whole island were used to measure travel time residuals of P and S waves using a semi-automated adaptive stacking technique. The residuals are mapped as three-dimensional perturbations of velocity using an iterative non-linear tomographic process with a subspace inversion technique and the fast marching method as a grid-based eikonal solver. We used the AK135 global reference model as the starting velocity model for the tomographic inversion. The resulting P wave tomographic images exhibit a relatively low velocity anomaly in the upper mantle, which extends to depths of nearly 60 km under the summit of the island, Mt. Halla. The anomaly is likely related to a relatively high-temperature magmatic body, which might be associated with the volcanism that lasted until the late Cenozoic. To better constrain the possible compositions of the anomalies and the existence of melt fractions, we will continue to examine perturbations of the Vp/Vs ratio and discuss the evolution of the volcanic island.

  2. Anisotropic Lithospheric layering in the North American craton, revealed by Bayesian inversion of short and long period data

    NASA Astrophysics Data System (ADS)

    Roy, Corinna; Calo, Marco; Bodin, Thomas; Romanowicz, Barbara

    2016-04-01

    Competing hypotheses for the formation and evolution of continents are strongly debated, including the theory of underplating by hot plumes and that of accretion by shallow subduction in continental or arc settings. In order to evaluate these hypotheses, documenting structural layering in the cratonic lithosphere becomes especially important. Recent studies of seismic-wave receiver function data have detected a structural boundary under continental cratons at 100-140 km depth, which is too shallow to be consistent with the lithosphere-asthenosphere boundary as inferred from seismic tomography and other geophysical studies. This leads to the conclusion either that 1) the cratonic lithosphere may be thinner than expected, contradicting tomographic and other geophysical or geochemical inferences, or 2) that the receiver function studies detect a mid-lithospheric discontinuity rather than the LAB. On the other hand, several recent studies documented significant changes in the direction of azimuthal anisotropy with depth that suggest layering in the anisotropic structure of the stable part of the North American continent. In particular, Yuan and Romanowicz (2010) combined long period surface wave and overtone data with core-refracted shear wave (SKS) splitting measurements in a joint tomographic inversion. A question that arises is whether the anisotropic layering observed coincides with that obtained from receiver function studies. To address this question, we use a trans-dimensional Markov chain Monte Carlo (MCMC) algorithm to generate probabilistic 1D radially and azimuthally anisotropic shear wave velocity profiles for selected stations in North America. In the algorithm we jointly invert short period data (Ps receiver functions, surface wave dispersion for Love and Rayleigh waves) and long period data (SKS waveforms). By including three different data types, which sample different volumes of the Earth and have different sensitivities to structure, we overcome the problem of incompatible interpretations of models provided by only one data set. The resulting 1D profiles include both isotropic and anisotropic discontinuities in the upper mantle (above 350 km depth). The great advantage of our procedure is the avoidance of any intermediate processing steps, such as numerical deconvolution or the calculation of splitting parameters, which can be very sensitive to noise. Additionally, the number of layers, as well as the data noise and the presence of anisotropy, are treated as unknowns in the trans-dimensional MCMC algorithm. We recently demonstrated the power of this approach in the case of two stations located in different tectonic settings (Bodin et al., 2015, submitted). Here we extend this approach to a broader range of settings within the North American continent.

  3. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or a few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solving the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving the simultaneous linear equations for the model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can easily be incorporated into the solution procedure. The detailed derivation, and numerical examples implementing the newly developed approach for symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs, are presented. Excellent numerical results are achieved for both kinds of problem, in situations with either a unique solution or infinitely many solutions.
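    To make the conversion concrete, here is a small, hedged sketch (not the authors' implementation) for the symmetric Toeplitz case: a single prescribed eigenpair yields simultaneous linear equations for the first row of the matrix, which are then solved with the SVD-based pseudoinverse; sizes and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

# Recover the defining first row c of an n x n symmetric Toeplitz matrix from a
# single prescribed eigenpair (lam0, v).  All values are synthetic.
rng = np.random.default_rng(1)
n = 6
c_true = rng.normal(size=n)
lam, V = np.linalg.eigh(toeplitz(c_true))
lam0, v = lam[2], V[:, 2]                 # the prescribed eigenpair

# A(c) v = lam0 v is linear in c: M[i, k] = sum_j v[j] * [|i - j| == k].
M = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        M[i, abs(i - j)] += v[j]

# Solve the simultaneous linear equations via the SVD-based pseudoinverse; if the
# system is rank-deficient this is the minimum-norm member of the infinitely many
# structured solutions, but the prescribed eigenpair is still honored.
c_est = np.linalg.pinv(M) @ (lam0 * v)
print(np.allclose(toeplitz(c_est) @ v, lam0 * v))   # True
```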

  4. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
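    A toy illustration of the convergence claim (not taken from the paper) is easy to write down: with a lower triangular kernel that has a nonzero diagonal, the multiplicative Chahine update drives a positive first guess toward the true profile. Kernel values and sizes below are arbitrary.

```python
import numpy as np

# Chahine-type nonlinear relaxation for y = K x with a lower triangular kernel.
rng = np.random.default_rng(2)
n = 20
K = np.tril(rng.random((n, n))) + np.eye(n)     # lower triangular, nonzero diagonal
x_true = 1.0 + rng.random(n)                    # positive profile to retrieve
y = K @ x_true                                  # simulated limb measurements

x = np.ones(n)                                  # first guess
for _ in range(300):
    x *= y / (K @ x)                            # multiplicative relaxation update
print(np.max(np.abs(x - x_true) / x_true))      # maximum relative error (small)
```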

  5. Tomographic imaging of fluorescence resonance energy transfer in highly light scattering media

    NASA Astrophysics Data System (ADS)

    Soloviev, Vadim Y.; McGinty, James; Tahir, Khadija B.; Laine, Romain; Stuckey, Daniel W.; Mohan, P. Surya; Hajnal, Joseph V.; Sardini, Alessandro; French, Paul M. W.; Arridge, Simon R.

    2010-02-01

    Three-dimensional localization of protein conformation changes in turbid media using Förster Resonance Energy Transfer (FRET) was investigated by tomographic fluorescence lifetime imaging (FLIM). FRET occurs when a donor fluorophore, initially in its electronic excited state, transfers energy to an acceptor fluorophore in close proximity through non-radiative dipole-dipole coupling. An acceptor effectively behaves as a quencher of the donor's fluorescence. The quenching process is accompanied by a reduction in the quantum yield and lifetime of the donor fluorophore. Therefore, FRET can be localized by imaging changes in the quantum yield and the fluorescence lifetime of the donor fluorophore. Extending FRET to diffuse optical tomography has potentially important applications such as in vivo studies in small animals. We show that FRET can be localized by reconstructing the quantum yield and lifetime distribution from time-resolved non-invasive boundary measurements of fluorescence and transmitted excitation radiation. Image reconstruction was obtained by an inverse scattering algorithm. Thus we report, to the best of our knowledge, the first tomographic FLIM-FRET imaging in turbid media. The approach is demonstrated by imaging a highly scattering cylindrical phantom concealing two thin wells containing cytosol preparations of HEK293 cells expressing TN-L15, a cytosolic genetically-encoded calcium FRET sensor. A 10 mM calcium chloride solution was added to one of the wells to induce a protein conformation change upon binding to TN-L15, resulting in FRET and a corresponding decrease in the donor fluorescence lifetime. The resulting fluorescence lifetime distribution, quantum efficiency, and absorption and scattering coefficients were reconstructed.

  6. GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy

    NASA Astrophysics Data System (ADS)

    Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.

    2012-06-01

    Phase microscopy techniques have regained interest because they allow the observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy which permits 3D observations with a finer resolution than incoherent light microscopes. Specimens are imaged by a series of 2D holograms: their accumulation progressively fills the range of frequencies of the specimen in Fourier space. A 3D inverse FFT eventually provides a spatial image of the specimen. Consequently, acquisition and then reconstruction are both required to produce an image that could serve as a prelude to real-time control of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed of no less than one minute per acquisition. Afterwards, a high-end PC reconstructs the 3D image in 20 seconds. We now aim for an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. Then we present a prototype dispatching some reconstruction tasks to the GPU in order to take advantage of SIMD parallelization for the FFT and higher bandwidth for filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to 2 s, or just over 1 s, depending on the GPU class. This opens opportunities for 4D imaging of living organisms or crystallization processes. We also consider the relevance of the GPU for 3D image interaction in our specific conditions.
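    The producer-consumer idea can be sketched in a few lines; the code below is a generic, hypothetical mock-up (not the MIPS Laboratory software, and CPU-only): one thread stands in for hologram acquisition while the other accumulates Fourier-space data and periodically produces a 3D inverse-FFT preview.

```python
import queue
import threading
import numpy as np

N, n_holo = 64, 32
holograms = queue.Queue(maxsize=8)

def producer():
    """Stand-in for the acquisition loop: pushes fake 2D holograms."""
    rng = np.random.default_rng(3)
    for _ in range(n_holo):
        holograms.put(rng.normal(size=(N, N)))
    holograms.put(None)                         # sentinel: acquisition finished

def consumer():
    """Accumulates the spectrum and periodically previews the 3D image."""
    spectrum = np.zeros((N, N, N), dtype=complex)
    k = 0
    while True:
        holo = holograms.get()
        if holo is None:
            break
        # The real instrument fills a cap of the Ewald sphere per hologram;
        # here each 2D spectrum is simply dropped into one Fourier-space plane.
        spectrum[:, :, k % N] += np.fft.fft2(holo)
        k += 1
        if k % 8 == 0:
            preview = np.abs(np.fft.ifftn(spectrum))      # 3D inverse FFT preview
            print(f"preview after {k} holograms, max = {preview.max():.2f}")

t_acq, t_rec = threading.Thread(target=producer), threading.Thread(target=consumer)
t_acq.start(); t_rec.start(); t_acq.join(); t_rec.join()
```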

  7. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing the inverse heat source method into the parameter identification of fatigue dissipation energy models is a new approach to the problem of calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat source identification and in regularization techniques for solving the inverse problem, as well as existing methods for determining the heat source during the fatigue process. It then discusses the prospects of applying inverse heat source methods in the field of fatigue damage and lays the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  8. A mega Ultra Low Velocity Zone at the Base of the Iceland Plume: a Target for Tomographic Telescope Implementation

    NASA Astrophysics Data System (ADS)

    Romanowicz, Barbara; Yuan, Kaiqing; Masson, Yder; Adourian, Sevan

    2017-04-01

    We have recently constructed the first global whole mantle radially anisotropic shear wave velocity model based on time domain full waveform inversion and numerical wavefield computations using the Spectral Element Method (French et al., 2013; French and Romanowicz, 2014). This model's most salient features are broad chimney-like low velocity conduits, rooted within the large-low-shear-velocity provinces (LLSVPs) at the base of the mantle, and extending from the core-mantle boundary up through most of the lower mantle, projecting to the earth's surface in the vicinity of major hotspots. The robustness of these features is confirmed through several non-linear synthetic tests, which we present here, including several iterations of inversion using a different starting model than that which served for the published model. The roots of these not-so-classical "plumes" are regions of more pronounced low shear velocity. While the detailed structure is not yet resolvable tomographically, at least two of them contain large (>800 km diameter) ultra-low-velocity zones (ULVZs), one under Hawaii (Cottaar and Romanowicz, 2012) and the other one under Samoa (Thorne et al., 2013). Through 3D numerical forward modelling of Sdiff phases down to 10s period, using data from broadband arrays illuminating the base of the Iceland plume from different directions, we show that such a large ULVZ also exists at the root of this plume, embedded within a taller region of moderately reduced low shear velocity, such as proposed by He et al. (2015). We also show that such a wide, but localized ULVZ is unique in a broad region around the base of the Iceland Plume. Because of the intense computational effort required for forward modelling of trial structures, to first order this ULVZ is represented by a cylindrical structure of diameter 900 km, height 20 km and velocity reduction 20%. To further refine the model, we have developed a technique which we call "tomographic telescope", in which we are able to compute the teleseismic wavefield down to periods of 10s only once, while subsequent iterations require numerical wavefield computations only within the target region, in this case, around the base of the Iceland plume. We describe the method and preliminary results of its implementation.

  9. Upper-mantle seismic structure in a region of incipient continental breakup: northern Ethiopian rift

    NASA Astrophysics Data System (ADS)

    Bastow, Ian D.; Stuart, Graham W.; Kendall, J.-Michael; Ebinger, Cynthia J.

    2005-08-01

    The northern Ethiopian rift forms the third arm of the Red Sea, Gulf of Aden triple junction, and marks the transition from continental rifting in the East African rift to incipient oceanic spreading in Afar. We determine the P- and S-wave velocity structure beneath the northern Ethiopian rift using independent tomographic inversion of P- and S-wave relative arrival-time residuals from teleseismic earthquakes recorded by the Ethiopia Afar Geoscientific Lithospheric Experiment (EAGLE) passive experiment using the regularised non-linear least-squares inversion method of VanDecar. Our 79 broad-band instruments covered an area 250 × 350 km centred on the Boset magmatic segment ~70 km SE of Addis Ababa in the centre of the northern Ethiopian rift. The study area encompasses several rift segments showing increasing degrees of extension and magmatic intrusion moving from south to north into the Afar depression. Analysis of relative arrival-time residuals shows that the rift flanks are asymmetric with arrivals associated with the southeastern Somalian Plate faster (~0.65 s for the P waves; ~2 s for the S waves) than the northwestern Nubian Plate. Our tomographic inversions image a 75 km wide tabular low-velocity zone (δVP~-1.5 per cent, δVS~-4 per cent) beneath the less-evolved southern part of the rift in the uppermost 200-250 km of the mantle. At depths of >100 km, north of 8.5°N, this low-velocity anomaly broadens laterally and appears to be connected to deeper low-velocity structures under the Afar depression. An off-rift low-velocity structure extending perpendicular to the rift axis correlates with the eastern limit of the E-W trending reactivated Precambrian Ambo-Guder fault zone that is delineated by Quaternary eruptive centres. Along axis, the low-velocity upwelling beneath the rift is segmented, with low-velocity material in the uppermost 100 km often offset to the side of the rift with the highest rift flank topography. Our observations from this magmatic rift zone, which is transitional between continental and oceanic rifting, do not support detachment fault models of lithospheric extension but instead point to strain accommodation via magma assisted rifting.

  10. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.

  11. Preliminary results of local earthquake tomography around Bali, Lombok, and Sumbawa regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id; Puspito, Nanang T; Yudistira, Tedi

    The Bali, Sumbawa, and Lombok regions lie in a tectonically active area influenced by the Indo-Australian plate subducting beneath the Sunda plate in the south and by a local back-arc thrust in the north. Several active volcanoes also lie along eastern Java, Bali, Lombok, and Sumbawa. Previous studies have imaged the subsurface seismic velocity structure of the region using regional and global earthquake data. In this study, we used P-wave arrival times from local earthquake networks compiled by MCGA, Indonesia, for the period 2009 to 2013 to determine the seismic velocity structure and simultaneously adjust hypocenters by applying a seismic tomography inversion method. For the tomographic inversion procedure, we started from a 1-D initial velocity structure. We evaluated the resolution of the tomographic inversion results through a checkerboard test and by calculating the derivative weight sum. The preliminary tomographic inversion results show fairly clearly the high-velocity subducting Indo-Australian slab and low-velocity anomalies around the volcanic regions. The relocated hypocenters appear to cluster around the local fault systems, such as the back-arc thrust in the northern part of the region and local faults in the Sumbawa region. Our local earthquake tomography results are consistent with previous studies and improve the resolution. In future work, we will determine the S-wave velocity structure using S-wave arrival times to enhance our understanding of geological processes and allow better interpretation.

  12. Preliminary results of local earthquake tomography around Bali, Lombok, and Sumbawa regions

    NASA Astrophysics Data System (ADS)

    Nugraha, Andri Dian; Kusnandar, Ridwan; Puspito, Nanang T.; Sakti, Artadi Pria; Yudistira, Tedi

    2015-04-01

    The Bali, Sumbawa, and Lombok regions lie in a tectonically active area influenced by the Indo-Australian plate subducting beneath the Sunda plate in the south and by a local back-arc thrust in the north. Several active volcanoes also lie along eastern Java, Bali, Lombok, and Sumbawa. Previous studies have imaged the subsurface seismic velocity structure of the region using regional and global earthquake data. In this study, we used P-wave arrival times from local earthquake networks compiled by MCGA, Indonesia, for the period 2009 to 2013 to determine the seismic velocity structure and simultaneously adjust hypocenters by applying a seismic tomography inversion method. For the tomographic inversion procedure, we started from a 1-D initial velocity structure. We evaluated the resolution of the tomographic inversion results through a checkerboard test and by calculating the derivative weight sum. The preliminary tomographic inversion results show fairly clearly the high-velocity subducting Indo-Australian slab and low-velocity anomalies around the volcanic regions. The relocated hypocenters appear to cluster around the local fault systems, such as the back-arc thrust in the northern part of the region and local faults in the Sumbawa region. Our local earthquake tomography results are consistent with previous studies and improve the resolution. In future work, we will determine the S-wave velocity structure using S-wave arrival times to enhance our understanding of geological processes and allow better interpretation.

  13. New Additions to the Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment.

    PubMed

    Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S

    2014-09-01

    Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.

  14. Normal-pressure hydrocephalus and the saga of the treatable dementias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedland, R.P.

    1989-11-10

    A case study of a 74-year-old woman is presented which illustrates the difficulty of understanding dementing illnesses. A diagnosis of normal-pressure hydrocephalus (NPH) was made because of the development of abnormal gait, with urinary incontinence and severe, diffuse, white matter lesions on the MRI scan. Computed tomographic scans, MRI scans and positron emission tomographic images of glucose use are presented. The treatable dementias are a large, multifaceted group of illnesses, of which NPH is one. The author proposes a new term for this disorder commonly known as NPH, because the problem with the term normal-pressure hydrocephalus is that the cerebrospinal fluid pressure is not always normal in the disease.

  15. Children's Understanding of the Inverse Relation between Multiplication and Division

    ERIC Educational Resources Information Center

    Robinson, Katherine M.; Dube, Adam K.

    2009-01-01

    Children's understanding of the inversion concept in multiplication and division problems (i.e., that on problems of the form "d multiplied by e/e" no calculations are required) was investigated. Children in Grades 6, 7, and 8 completed an inversion problem-solving task, an assessment of procedures task, and a factual knowledge task of simple…

  16. A Volunteer Computing Project for Solving Geoacoustic Inversion Problems

    NASA Astrophysics Data System (ADS)

    Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya

    2017-12-01

    A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can easily be decomposed into independent simpler subproblems.

  17. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area source pollutant strength is a relevant issue for the atmospheric environment and constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network, a multi-layer perceptron, whose connection weights are computed with the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, in which the objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
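    For readers unfamiliar with the delta rule, the snippet below is a minimal, hypothetical stand-in for the approach (a single linear layer rather than the authors' multi-layer perceptron): it learns to map receptor concentrations back to area-source strengths, using a random transition matrix in place of the Lagrangian source-receptor model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_src, n_rec, n_train = 10, 6, 500
M = rng.random((n_rec, n_src))                 # stand-in source-receptor transition matrix

S_train = rng.random((n_train, n_src))         # synthetic source strengths
C_train = S_train @ M.T + rng.normal(scale=0.01, size=(n_train, n_rec))

W = np.zeros((n_src, n_rec))                   # weights: concentrations -> sources
eta = 0.01                                     # learning rate
for epoch in range(100):
    for c, s in zip(C_train, S_train):
        err = s - W @ c
        W += eta * np.outer(err, c)            # delta rule: dW = eta * error * input

s_test = rng.random(n_src)
c_obs = M @ s_test                             # "measured" concentrations
print(np.round(np.abs(W @ c_obs - s_test).mean(), 3))   # mean estimation error
```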

  18. The State of Stress Beyond the Borehole

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Coblentz, D. D.; Maceira, M.; Delorey, A. A.; Guyer, R. A.

    2015-12-01

    The state of stress controls all in-situ reservoir activities, and yet we lack the quantitative means to measure it. This problem is important in light of the fact that the subsurface provides more than 80 percent of the energy used in the United States and serves as a reservoir for geological carbon sequestration, used fuel disposition, and nuclear waste storage. Adaptive control of subsurface fractures and fluid flow is a crosscutting challenge being addressed by the new Department of Energy SubTER Initiative that has the potential to transform subsurface energy production and waste storage strategies. Our methodology to address this problem is based on a novel Advanced Multi-Physics Tomographic (AMT) approach for determining the state of stress, thereby facilitating our ability to monitor and control subsurface geomechanical processes. We developed the AMT algorithm for deriving the state of stress from integrated density and seismic velocity models, and we demonstrate its feasibility by applying the AMT approach to synthetic data sets to assess the accuracy and resolution of the method as a function of the quality and type of geophysical data. With this method we can produce regional- to basin-scale maps of the background state of stress and identify regions where stresses are changing. Our approach builds on our advances in the joint inversion of gravity and seismic data to obtain the elastic properties of the subsurface, and then couples the output of this joint inversion with a theoretical model so that strain (and subsequently stress) can be computed. Ultimately we will obtain the differential state of stress over time to identify and monitor critically stressed faults and evolving regions within the reservoir, and relate them to anthropogenic activities such as fluid/gas injection.

  19. Ultrasonic multi-skip tomography for pipe inspection

    NASA Astrophysics Data System (ADS)

    Volker, Arno; Vos, Rik; Hunter, Alan; Lorenz, Maarten

    2012-05-01

    The inspection of wall loss corrosion is difficult at pipe support locations due to limited accessibility. However, the recently developed ultrasonic Multi-Skip screening technique is suitable for this problem. The method employs ultrasonic transducers in a pitch-catch geometry positioned on opposite sides of the pipe support. Shear waves are transmitted in the axial direction within the pipe wall, reflecting multiple times between the inner and outer surfaces before reaching the receivers. Along this path, the signals accumulate information on the integral wall thickness (e.g., via variations in travel time). The method is very sensitive in detecting the presence of wall loss, but it is difficult to quantify both the extent and depth of the loss. If the extent is unknown, then only a conservative estimate of the depth can be made due to the cumulative nature of the travel time variations. Multi-Skip tomography is an extension of Multi-Skip screening and has shown promise as a complementary follow-up inspection technique. In recent work, we have developed the technique and demonstrated its use for reconstructing high-resolution estimates of pipe wall thickness profiles. The method operates via a model-based full wave field inversion; this consists of a forward model for predicting the measured wave field and an iterative process that compares the predicted and measured wave fields and minimizes the differences with respect to the model parameters (i.e., the wall thickness profile). This paper presents our recent developments in Multi-Skip tomographic inversion, focusing on the initial localization of corrosion regions for efficient parameterization of the surface profile model and utilization of the signal phase information for improving resolution.

  20. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bledsoe, Keith C.

    2015-04-01

    The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
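    The Gelman-Rubin metric itself is simple to state; the function below is a generic sketch of the potential scale reduction factor (not the INVERSE implementation), with a typical stopping threshold such as R-hat < 1.1 shown in the usage line.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an array of shape (m, n):
    m chains with n samples each (generic sketch, not the INVERSE code)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)              # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)

# Usage: stop the sampler once R-hat drops below a threshold such as 1.1.
rng = np.random.default_rng(5)
chains = rng.normal(size=(4, 2000))              # four well-mixed chains
print(gelman_rubin(chains))                      # close to 1.0 -> converged
```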

  1. GPS Water Vapor Tomography: First results from the ESCOMPTE Field Experiment

    NASA Astrophysics Data System (ADS)

    Masson, F.; Champollion, C.; Bouin, M.-N.; Walpersdorf, A.; van Baelen, J.; Doerflinger, E.; Bock, O.

    2003-04-01

    We have developed tomographic software to model the spatial distribution of tropospheric water vapor from GPS data. First, we present simulations based on a real GPS station distribution and simple tropospheric models, which demonstrate the potential of the method. Second, we apply the software to the ESCOMPTE data. During the ESCOMPTE field experiment, a dense network of 17 dual frequency GPS receivers was operated for two weeks within a 20 km x 20 km area around Marseille (Southern France). The network extends from sea level to the top of the Etoile chain (~700 m high). The input data are the slant delay values obtained by combining the estimated zenith delay values with the horizontal gradients. The effects of the initial tropospheric water vapor model, the number and thickness of the layers of the model, the a priori model and data covariances, and some other parameters will be discussed. Simultaneously, a water vapor radiometer, a solar spectrometer, a Raman lidar and radiosondes were deployed to obtain a data set usable for comparison with the tomographic inversion results and validation of the method. A comparison with meteorological models (MesoNH - Meteo-France) will be shown.

  2. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
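    As a generic illustration of the MAP formulation (not the authors' TEM forward model or prior), the sketch below combines a random linear forward operator with a quadratic smoothness prior and minimizes the resulting cost by plain gradient descent; all sizes, the noise level and the prior weight are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n_meas, n_vox = 80, 50
A = rng.normal(size=(n_meas, n_vox)) / np.sqrt(n_vox)     # stand-in forward operator
x_true = np.sin(np.linspace(0, 3 * np.pi, n_vox))         # 1D "vector potential" profile
y = A @ x_true + rng.normal(scale=0.01, size=n_meas)

D = np.eye(n_vox, k=1)[:-1] - np.eye(n_vox)[:-1]          # finite-difference prior operator
beta, step = 1e-2, 0.2                                    # prior weight, descent step

x = np.zeros(n_vox)
for _ in range(1000):
    grad = A.T @ (A @ x - y) + beta * (D.T @ (D @ x))     # gradient of the MAP cost
    x -= step * grad
print(round(float(np.corrcoef(x, x_true)[0, 1]), 3))      # should be close to 1
```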

  3. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  5. Uncertainty analysis in seismic tomography

    NASA Astrophysics Data System (ADS)

    Owoc, Bartosz; Majdański, Mariusz

    2017-04-01

    The velocity field obtained from seismic travel time tomography depends on several factors, such as regularization, the inversion path and the model parameterization. The result also strongly depends on the initial velocity model and on the precision of travel time picking. In this research we test the dependence on the starting model in layered tomography and compare it with the effect of picking precision. Moreover, our analysis shows that for manual travel time picking the uncertainty distribution is asymmetric, which shifts the results toward faster velocities. For the calculations we use the JIVE3D travel time tomographic code. We used data from geo-engineering and industrial scale investigations, which were collected by our team from IG PAS.

  6. A Spatially Resolving X-ray Crystal Spectrometer for Measurement of Ion-temperature and Rotation-velocity Profiles on the AlcatorC-Mod Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, K. W.; Bitter, M. L.; Scott, S. D.

    2009-03-24

    A new spatially resolving x-ray crystal spectrometer capable of measuring continuous spatial profiles of high resolution spectra (λ/dλ > 6000) of He-like and H-like Ar Kα lines with good spatial (~1 cm) and temporal (~10 ms) resolutions has been installed on the Alcator C-Mod tokamak. Two spherically bent crystals image the spectra onto four two-dimensional Pilatus II pixel detectors. Tomographic inversion enables inference of local line emissivity, ion temperature (Ti), and toroidal plasma rotation velocity (vφ) from the line Doppler widths and shifts. The data analysis techniqu

  7. Objectives and layout of a high-resolution x-ray imaging crystal spectrometer for the large helical device.

    PubMed

    Bitter, M; Hill, K; Gates, D; Monticello, D; Neilson, H; Reiman, A; Roquemore, A L; Morita, S; Goto, M; Yamada, H; Rice, J E

    2010-10-01

    A high-resolution x-ray imaging crystal spectrometer, whose concept was tested on NSTX and Alcator C-Mod, is being designed for the large helical device (LHD). This instrument will record spatially resolved spectra of helium-like Ar(16+) and will provide ion temperature profiles with spatial and temporal resolutions of <2 cm and ≥10 ms, respectively. The spectrometer layout and instrumental features are largely determined by the magnetic field structure of LHD. The stellarator equilibrium reconstruction codes, STELLOPT and PIES, will be used for the tomographic inversion of the spectral data.

  8. Tomographic inversion of time-domain resistivity and chargeability data for the investigation of landfills using a priori information.

    PubMed

    De Donno, Giorgio; Cardarelli, Ettore

    2017-01-01

    In this paper, we present a new code for the modelling and inversion of resistivity and chargeability data that uses a priori information to improve the accuracy of the reconstructed model for landfills. When a priori information is available for the study area, we can incorporate it by means of inequality constraints on the whole model or on a single layer, or by assigning weighting factors that enhance anomalies elongated in the horizontal or vertical direction. However, when we face a multilayered scenario with numerous resistive to conductive transitions (the case of controlled landfills), the effective thickness of the layers can be biased. The presented code includes a model-tuning scheme, applied after the inversion of field data, in which the inversion of synthetic data is performed based on an initial guess and the absolute difference between the field and synthetic inverted models is minimized. The reliability of the proposed approach has been demonstrated in two real-world examples: we were able to identify an unauthorized landfill and to reconstruct the geometrical and physical layout of an old waste dump. The combined analysis of the resistivity and (normalised) chargeability models helps to remove ambiguity due to the presence of the waste mass. Nevertheless, the presence of certain layers can remain hidden without using a priori information, as demonstrated by a comparison of the constrained inversion with a standard inversion. The robustness of the method (using a priori information in combination with model tuning) has been validated against the cross-section from the construction plans, with the reconstructed model in agreement with the original design. Copyright © 2016 Elsevier Ltd. All rights reserved.
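    To show what inequality constraints look like in practice, here is a hedged toy example (not the authors' code) that applies bound constraints to a linear inverse step with scipy's lsq_linear; the operator, resistivity values and bounds are all invented.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(7)
n_data, n_cells = 40, 20
G = rng.random((n_data, n_cells))                                   # toy forward operator
rho_true = np.concatenate([np.full(10, 50.0), np.full(10, 5.0)])    # resistive over conductive
d = G @ rho_true + rng.normal(scale=0.5, size=n_data)

# A priori information as inequality constraints: the upper layer is known to be
# resistive (>= 20 ohm.m) and the lower one conductive (<= 10 ohm.m).
lb = np.concatenate([np.full(10, 20.0), np.full(10, 0.1)])
ub = np.concatenate([np.full(10, 500.0), np.full(10, 10.0)])
res = lsq_linear(G, d, bounds=(lb, ub))
print(np.round(res.x, 1))                                           # constrained estimate
```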

  9. Simultaneous, Joint Inversion of Seismic Body Wave Travel Times and Satellite Gravity Data for Three-Dimensional Tomographic Imaging of Western Colombia

    NASA Astrophysics Data System (ADS)

    Dionicio, V.; Rowe, C. A.; Maceira, M.; Zhang, H.; Londoño, J.

    2009-12-01

    We report on the three-dimensional seismic structure of western Colombia determined through the use of a new, simultaneous, joint inversion tomography algorithm. Using data recorded by the national Seismological Network of Colombia (RSNC), we have selected 3,609 earthquakes recorded at 33 sensors distributed throughout the country, with additional data from stations in neighboring countries. A total of 20,338 P-wave arrivals and 17,041 S-wave arrivals are used to invert for structure within a region extending approximately 72.5 to 77.5 degrees West and 2 to 7.5 degrees North. Our algorithm is a modification of the Maceira and Ammon joint inversion code, in combination with the Zhang and Thurber TomoDD (double-difference tomography) program, with a fast LSQR solver operating jointly on the gridded values. The inversion uses gravity anomalies obtained during the GRACE2 satellite mission, inverting these values jointly with the seismic travel times through application of an empirical relationship first proposed by Harkrider that maps densities to Vp and Vs within earth materials. In previous work, Maceira and Ammon demonstrated that incorporation of gravity data predicts shear wave velocities more accurately than the inversion of surface waves alone, particularly in regions where the crust exhibits abrupt and significant lateral variations in lithology, such as the Tarim Basin. The significant complexity of crustal structure in Colombia, due to its active tectonic environment, makes it a good candidate for the application with gravity and body waves. We present the results of this joint inversion and compare them to results obtained using travel times alone.
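    The basic mechanics of such a joint inversion can be sketched with a toy linear system (this is not the modified Maceira-Ammon/TomoDD code): travel-time and gravity kernels are stacked with relative weights and solved together by damped LSQR, using a linearized density-velocity relation; every kernel, weight and constant below is an assumption.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(9)
n_cells = 60
G_tt = rng.random((120, n_cells)) * (rng.random((120, n_cells)) < 0.1)   # ray-length kernel
G_gr = rng.random((30, n_cells)) * 1e-2                                   # gravity kernel
c = 0.3                                                                   # d_rho = c * d_v (linearized)

dv_true = rng.normal(scale=0.05, size=n_cells)
d_tt = G_tt @ dv_true + rng.normal(scale=1e-3, size=120)
d_gr = G_gr @ (c * dv_true) + rng.normal(scale=1e-4, size=30)

w_tt, w_gr = 1.0, 5.0                        # relative weights of the two data sets
A = np.vstack([w_tt * G_tt, w_gr * G_gr * c])
b = np.concatenate([w_tt * d_tt, w_gr * d_gr])
dv_est = lsqr(A, b, damp=0.1)[0]             # damped LSQR solve of the stacked system
print(np.corrcoef(dv_true, dv_est)[0, 1])
```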

  10. Characterizing crustal and uppermost mantle anisotropy with a depth-dependent tilted hexagonally symmetric elastic tensor: theory and examples

    NASA Astrophysics Data System (ADS)

    Feng, L.; Xie, J.; Ritzwoller, M. H.

    2017-12-01

    Two major types of surface wave anisotropy are commonly observed by seismologists but are only rarely interpreted jointly: apparent radial anisotropy, which is the difference in propagation speed between horizontally and vertically polarized waves inferred from Love and Rayleigh waves, and apparent azimuthal anisotropy, which is the directional dependence of surface wave speeds (usually Rayleigh waves). We describe a method of inversion that interprets simultaneous observations of radial and azimuthal anisotropy under the assumption of a hexagonally symmetric elastic tensor with a tilted symmetry axis defined by dip and strike angles. With a full-waveform numerical solver based on the spectral element method (SEM), we verify the validity of the forward theory used for the inversion. We also present two examples, in the US and Tibet, in which we have successfully applied the tomographic method to demonstrate that the two types of apparent anisotropy can be interpreted jointly as a tilted hexagonally symmetric medium.

  11. Inverse Scattering Problem For The Schrödinger Equation With An Additional Quadratic Potential On The Entire Axis

    NASA Astrophysics Data System (ADS)

    Guseinov, I. M.; Khanmamedov, A. Kh.; Mamedova, A. F.

    2018-04-01

    We consider the Schrödinger equation with an additional quadratic potential on the entire axis and use the transformation operator method to study the direct and inverse problems of the scattering theory. We obtain the main integral equations of the inverse problem and prove that the basic equations are uniquely solvable.

  12. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.

  13. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    NASA Astrophysics Data System (ADS)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, and the final solution may critically depend on this initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible; instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, the solution density is expected to be maximal in the region that ideally compromises all objectives, i.e. the region of highest curvature.
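    The maximin-based leader selection is the distinctive ingredient, so the toy sketch below (with two invented objective functions standing in for the misfits of two data sets, and generic PSO coefficients) shows just that: at each iteration the particle with the lowest maximin fitness is taken as the swarm leader, and personal bests are kept unless Pareto-dominated.

```python
import numpy as np

rng = np.random.default_rng(10)
n_part, n_dim = 30, 5

def objectives(x):                       # two conflicting "data misfits"
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def maximin(F):                          # F: (n_particles, n_objectives); lower = better
    fit = np.empty(len(F))
    for i in range(len(F)):
        diff = F[i] - np.delete(F, i, axis=0)
        fit[i] = np.max(np.min(diff, axis=1))
    return fit

x = rng.uniform(-2.0, 2.0, size=(n_part, n_dim))
v = np.zeros_like(x)
pbest = x.copy()

for it in range(200):
    F = np.array([objectives(p) for p in x])
    leader = x[np.argmin(maximin(F))]    # maximin fitness replaces ranking/niching
    # keep a personal best only if it Pareto-dominates the current position
    Fp = np.array([objectives(p) for p in pbest])
    dominated = np.all(F >= Fp, axis=1) & np.any(F > Fp, axis=1)
    pbest[~dominated] = x[~dominated]
    r1, r2 = rng.random((2, n_part, n_dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (leader - x)
    x = x + v

print(np.round(objectives(leader), 2))   # leader ends up near the Pareto front
```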

  14. The inverse problem of refraction travel times, part I: Types of Geophysical Nonuniqueness through Minimization

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.

    2005-01-01

    In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box" and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.

  15. Computational methods for inverse problems in geophysics: inversion of travel time observations

    USGS Publications Warehouse

    Pereyra, V.; Keller, H.B.; Lee, W.H.K.

    1980-01-01

    General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.

  16. A fixed energy fixed angle inverse scattering in interior transmission problem

    NASA Astrophysics Data System (ADS)

    Chen, Lung-Hui

    2017-06-01

    We study the inverse acoustic scattering problem in mathematical physics. The problem is to recover the index of refraction in an inhomogeneous medium by measuring the scattered wave fields in the far field. We transform the problem to the interior transmission problem in the study of the Helmholtz equation. We establish an inverse uniqueness result for the scatterer given knowledge of a fixed interior transmission eigenvalue. By examining the solution as a series of spherical harmonics in the far field, we can uniquely determine the perturbation source for radially symmetric perturbations.

  17. Seeing is believing: video classification for computed tomographic colonography using multiple-instance learning.

    PubMed

    Wang, Shijun; McKenna, Matthew T; Nguyen, Tan B; Burns, Joseph E; Petrick, Nicholas; Sahiner, Berkman; Summers, Ronald M

    2012-05-01

    In this paper, we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3-D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing a L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods.

  18. Seeing is Believing: Video Classification for Computed Tomographic Colonography Using Multiple-Instance Learning

    PubMed Central

    Wang, Shijun; McKenna, Matthew T.; Nguyen, Tan B.; Burns, Joseph E.; Petrick, Nicholas; Sahiner, Berkman

    2012-01-01

    In this paper we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing a L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods. PMID:22552333

  19. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned; regularization techniques therefore need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
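
    As a hedged sketch of the Bregmanized operator splitting idea described above (an illustration under assumed operators and parameters, not the authors' implementation), the Python snippet below alternates a proximal forward-backward step with a Bregman update of the data, using a projected-gradient solver for a 1-D TV proximal subproblem.

      import numpy as np

      def tv_prox(y, lam, n_iter=200):
          """Proximal operator of lam*TV(x) in 1-D via projected gradient on the dual problem."""
          n = y.size
          D = np.diff(np.eye(n), axis=0)           # forward-difference operator
          p = np.zeros(n - 1)
          step = 0.25                               # 1 / ||D||^2 <= 1/4 for this D
          for _ in range(n_iter):
              p = np.clip(p - step * (D @ (D.T @ p - y)), -lam, lam)
          return y - D.T @ p

      def bos_tv(A, b, mu=25.0, delta=None, n_outer=200):
          """Bregmanized operator splitting for min mu*TV(x) s.t. Ax = b (sketch)."""
          m, n = A.shape
          if delta is None:
              delta = np.linalg.norm(A, 2) ** 2     # step parameter >= ||A||^2
          x, bk = np.zeros(n), b.copy()
          for _ in range(n_outer):
              grad = A.T @ (A @ x - bk)
              x = tv_prox(x - grad / delta, mu / delta)   # forward-backward step
              bk = bk + b - A @ x                          # Bregman update of the data
          return x

      # Toy blocky model observed through a random linear operator.
      rng = np.random.default_rng(1)
      x_true = np.repeat([0.0, 2.0, -1.0, 1.0], 25)
      A = rng.normal(size=(60, 100))
      b = A @ x_true
      print(np.round(bos_tv(A, b)[:10], 2))

    The Bregman update of bk is what gradually enforces the data constraint, while the TV proximal step keeps the iterates blocky; no matrix inversion is needed anywhere in the loop.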

  20. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  1. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on a neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on the calculated values of the continuity modulus of the inverse operator and its modifications, which determine the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters of order n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The method is illustrated by the example of three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to a schematic model of a kimberlite pipe.
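
    As a hedged toy illustration of approximating an inverse operator with a neural network (a generic surrogate trained on synthetic model-data pairs, not the regularized parameterization grid or ambiguity estimates of the paper), the sketch below uses a small scikit-learn multilayer perceptron; the forward operator, noise level, and network size are all assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      def forward(m):
          """Made-up smooth nonlinear forward operator mapping a block model m to data d."""
          G = np.linspace(0.5, 1.5, 20)[:, None] * np.linspace(1.0, 2.0, m.shape[-1])[None, :]
          return np.tanh(m) @ G.T

      # Generate a training set of (data, model) pairs from random block models.
      models = rng.uniform(-1.0, 1.0, size=(5000, 8))
      data = forward(models) + 0.01 * rng.normal(size=(5000, 20))

      # Train an MLP to approximate the inverse operator d -> m.
      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
      net.fit(data, models)

      # Apply the learned inverse to noisy data from an unseen model.
      m_test = rng.uniform(-1.0, 1.0, size=(1, 8))
      d_test = forward(m_test) + 0.01 * rng.normal(size=(1, 20))
      print(np.round(net.predict(d_test), 2), np.round(m_test, 2))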

  2. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    NASA Astrophysics Data System (ADS)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.

    2018-05-01

    Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
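
    As a hedged sketch of one way to build a geostatistical regularization operator from a covariance model defined on irregular cell positions (an illustration of the general eigendecomposition idea, not the authors' code), the snippet below assembles an exponential covariance matrix from cell-centroid distances, eigendecomposes it, and forms an operator W with W^T W approximately equal to the inverse covariance; the mesh, correlation length, and sill are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Random 2-D cell centroids standing in for an irregular mesh.
      centroids = rng.uniform(0.0, 100.0, size=(300, 2))

      def covariance(points, corr_len=20.0, sill=1.0):
          """Exponential covariance model evaluated on all centroid pairs."""
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          return sill * np.exp(-d / corr_len)

      C = covariance(centroids)

      # Eigendecomposition of the (symmetric, positive definite) covariance matrix.
      eigval, eigvec = np.linalg.eigh(C)
      eigval = np.maximum(eigval, 1e-8)            # guard against round-off

      # Regularization operator W such that W.T @ W approximates inv(C),
      # so the model norm ||W m|| penalizes deviations inconsistent with the covariance.
      W = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T

      print(float(np.abs(W.T @ W @ C - np.eye(C.shape[0])).max()))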

  3. Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Fichtner, Andreas; Igel, Heiner

    2015-04-01

    We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. Parts of the eastern portion of the initial model consist of the previous models by Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event, depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad hoc scripts and one-time programs and adopt sustainable and reusable solutions. We therefore developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, the modelling, the iterative model updates, what happened during each iteration, and so on is systematically archived. This results in a provenance record of the final model, which significantly enhances the reproducibility of iterative inversions. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.

  4. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    NASA Astrophysics Data System (ADS)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each of the submitted papers has been reviewed by two reviewers. There have been nine accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet

  5. Regridding reconstruction algorithm for real-time tomographic imaging

    PubMed Central

    Marone, F.; Stampanoni, M.

    2012-01-01

    Sub-second temporal-resolution tomographic microscopy is becoming a reality at third-generation synchrotron sources. Efficient data handling and post-processing is, however, difficult when the data rates are close to 10 GB s⁻¹. This bottleneck still hinders exploitation of the full potential inherent in the ultrafast acquisition speed. In this paper the fast reconstruction algorithm gridrec, highly optimized for conventional CPU technology, is presented. It is shown that gridrec is a valuable alternative to standard filtered back-projection routines, despite being based on the Fourier transform method. In fact, the regridding procedure used for resampling the Fourier space from polar to Cartesian coordinates couples excellent performance with negligible accuracy degradation. The stronger dependence of the observed signal-to-noise ratio for gridrec reconstructions on the number of angular views makes the presented algorithm even superior to filtered back-projection when the tomographic problem is well sampled. Gridrec not only guarantees high-quality results but also provides up to a 20-fold performance increase, making real-time monitoring of the sub-second acquisition process a reality. PMID:23093766
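
    The record above concerns gridrec, which resamples polar Fourier samples of the projections onto a Cartesian grid before a 2-D inverse FFT. As a heavily simplified, hedged illustration of that central-slice regridding idea only (nearest-neighbour lookup, no convolution kernel, no oversampling, so far cruder than gridrec itself), the snippet below reconstructs a synthetic disk sinogram; all sizes and the phantom are assumptions.

      import numpy as np

      def disk_sinogram(n_det=128, n_ang=180, radius=0.4):
          """Analytic parallel-beam sinogram of a centred disk (projection = chord length)."""
          s = np.linspace(-0.5, 0.5, n_det, endpoint=False)
          proj = 2.0 * np.sqrt(np.maximum(radius**2 - s**2, 0.0))
          return np.tile(proj, (n_ang, 1)), np.linspace(0.0, np.pi, n_ang, endpoint=False)

      def regrid_reconstruct(sino, angles):
          """Crude direct-Fourier reconstruction: the 1-D FFTs of the projections are placed on
          a Cartesian Fourier grid by nearest-neighbour lookup (central-slice theorem), then the
          image is obtained with a 2-D inverse FFT."""
          n_ang, n_det = sino.shape
          proj_fft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sino, axes=1), axis=1), axes=1)
          freqs = np.fft.fftshift(np.fft.fftfreq(n_det))
          kx, ky = np.meshgrid(freqs, freqs)
          raw = np.arctan2(ky, kx)
          k_theta = np.mod(raw, np.pi)                       # fold ray angles into [0, pi)
          k_signed = np.hypot(kx, ky) * np.where((raw >= 0.0) & (raw < np.pi), 1.0, -1.0)
          ang_idx = np.argmin(np.abs(k_theta[..., None] - angles[None, None, :]), axis=-1)
          rad_idx = np.clip(np.searchsorted(freqs, k_signed), 0, n_det - 1)
          F = proj_fft[ang_idx, rad_idx]
          F[np.hypot(kx, ky) > freqs.max()] = 0.0            # discard corners outside the measured band
          return np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F))))

      sino, angles = disk_sinogram()
      image = regrid_reconstruct(sino, angles)
      print(image.shape, float(image.max()))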

  6. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768
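
    As a hedged sketch of the general multicore pattern described above (independent slices distributed over CPU cores; not the vectorized, I/O-optimized code evaluated in the paper), the snippet below backprojects a stack of synthetic sinograms in parallel with Python's process pool; the phantom, grid, and slice count are assumptions.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      N_DET, N_ANG, N_SLICES = 128, 90, 8
      ANGLES = np.linspace(0.0, np.pi, N_ANG, endpoint=False)
      S_AXIS = np.linspace(-1.0, 1.0, N_DET)

      def backproject_slice(sino):
          """Plain (unfiltered) backprojection of one sinogram slice onto a square grid."""
          x, y = np.meshgrid(S_AXIS, S_AXIS)
          img = np.zeros_like(x)
          for theta, proj in zip(ANGLES, sino):
              s = x * np.cos(theta) + y * np.sin(theta)    # detector coordinate of each pixel
              img += np.interp(s, S_AXIS, proj)
          return img / N_ANG

      if __name__ == "__main__":
          # Synthetic stack of sinograms: a centred disk in every slice.
          proj = 2.0 * np.sqrt(np.maximum(0.4**2 - S_AXIS**2, 0.0))
          sinos = [np.tile(proj, (N_ANG, 1)) for _ in range(N_SLICES)]

          # Distribute the independent slices over the available CPU cores.
          with ProcessPoolExecutor() as pool:
              volume = np.stack(list(pool.map(backproject_slice, sinos)))
          print(volume.shape)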

  7. Microwave tomography for GPR data processing in archaeology and cultural heritages diagnostics

    NASA Astrophysics Data System (ADS)

    Soldovieri, F.

    2009-04-01

    Ground Penetrating Radar (GPR) is one of the most practical and accessible instruments for detecting buried remains and for performing diagnostics of archaeological structures, with the aim of detecting hidden features (defects, voids, constructive typology, etc.). In fact, the GPR technique allows measurements to be performed over large areas very quickly thanks to portable instrumentation. Despite the widespread use of GPR as a data acquisition system, many difficulties arise in processing GPR data so as to obtain images that are reliable and easily interpretable by end-users. This difficulty is exacerbated when no a priori information is available, as happens for example in the case of historical heritage structures for which knowledge of the construction methods and materials may be completely missing. A possible answer to the above difficulties resides in the development and exploitation of microwave tomography algorithms [1, 2], based on more refined electromagnetic scattering models than the ones usually adopted in the classical radar approach. By exploiting the microwave tomographic approach, it is possible to obtain accurate and reliable "images" of the investigated structure in order to detect, localize and possibly determine the extent and the geometrical features of the embedded objects. In this framework, the adoption of simplified models of the electromagnetic scattering is very convenient for practical and theoretical reasons. First, linear inversion algorithms are numerically efficient, thus allowing domains that are large in terms of the probing wavelength to be investigated in quasi real time, also in the 3D case, by adopting schemes based on the combination of 2D reconstructions [3]. In addition, the solution approaches are very robust against uncertainties in the parameters of the measurement configuration and in the investigated scenario. From a theoretical point of view, the linear models offer further advantages: the absence of false solutions (an issue that arises in nonlinear inverse problems); the availability of well-known regularization tools for achieving a stable solution of the problem; and the possibility of analyzing the reconstruction performance of the algorithm once the measurement configuration and the properties of the host medium are known. Here, we present the main features and the reconstruction results of a linear inversion algorithm based on the Born approximation in realistic applications in archaeology and cultural heritage diagnostics. The Born model is useful when penetrable objects are under investigation. As is well known, the Born approximation is used to solve the forward problem, that is, the determination of the scattered field from a known object under the hypothesis of a weak scatterer, i.e. an object whose dielectric permittivity is only slightly different from that of the host medium and whose extent is small in terms of the probing wavelength. For the inverse scattering problem, in contrast, the above hypotheses can be relaxed at the cost of renouncing a "quantitative reconstruction" of the object. In fact, as already shown by results in realistic conditions [4, 5], the adoption of a Born-model inversion scheme makes it possible to detect, localize and determine the geometry of the object even in the case of non-weak scatterers. [1] R. Persico, R. Bernini, F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. Antennas and Propagation, vol. 53, no. 6, pp. 1875-1887, June 2005. [2] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surface Geophysics, vol. 5, no. 1, pp. 29-42, February 2007. [3] R. Solimene, F. Soldovieri, G. Prisco, R. Pierri, "Three-Dimensional Microwave Tomography by a 2-D Slice-Based Reconstruction Algorithm", IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 556-560, Oct. 2007. [4] L. Orlando, F. Soldovieri, "Two different approaches for georadar data processing: a case study in archaeological prospecting", Journal of Applied Geophysics, vol. 64, pp. 1-13, March 2008. [5] F. Soldovieri, M. Bavusi, L. Crocco, S. Piscitelli, A. Giocoli, F. Vallianatos, S. Pantellis, A. Sarris, "A comparison between two GPR data processing techniques for fracture detection and characterization", Proc. of 70th EAGE Conference & Exhibition, Rome, Italy, 9-12 June 2008.

  8. Inverse problems in the design, modeling and testing of engineering systems

    NASA Technical Reports Server (NTRS)

    Alifanov, Oleg M.

    1991-01-01

    Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.

  9. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
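
    As a hedged sketch of a Levenberg-Marquardt iteration whose damped linear step is obtained with a Krylov-subspace least-squares solver instead of a direct factorization (a generic illustration using scipy's lsqr and a toy exponential-decay forward model, not the Julia/MADS implementation and without the subspace recycling described above), consider the following; the forward model, starting point, and damping schedule are assumptions.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      def forward(m, t):
          """Toy nonlinear forward model: sum of two decaying exponentials."""
          return m[0] * np.exp(-m[1] * t) + m[2] * np.exp(-m[3] * t)

      def jacobian(m, t):
          """Analytic Jacobian of the toy forward model."""
          e1, e2 = np.exp(-m[1] * t), np.exp(-m[3] * t)
          return np.column_stack([e1, -m[0] * t * e1, e2, -m[2] * t * e2])

      def levenberg_marquardt_lsqr(d_obs, t, m0, lam=1.0, n_iter=50):
          """LM loop whose damped step is solved with the Krylov solver LSQR (matrix-free style)."""
          m = m0.astype(float).copy()
          for _ in range(n_iter):
              r = d_obs - forward(m, t)
              J = jacobian(m, t)
              # Damped least-squares step: min ||J dm - r||^2 + lam ||dm||^2.
              dm = lsqr(J, r, damp=np.sqrt(lam))[0]
              if np.sum((d_obs - forward(m + dm, t)) ** 2) < np.sum(r ** 2):
                  m, lam = m + dm, lam * 0.5       # accept the step, relax the damping
              else:
                  lam *= 10.0                       # reject the step, increase the damping
          return m

      t = np.linspace(0.0, 5.0, 200)
      m_true = np.array([2.0, 1.5, 1.0, 0.3])
      rng = np.random.default_rng(0)
      d_obs = forward(m_true, t) + 0.01 * rng.normal(size=t.size)
      print(np.round(levenberg_marquardt_lsqr(d_obs, t, np.array([1.5, 1.0, 0.5, 0.1])), 2))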

  10. Combining energy and Laplacian regularization to accurately retrieve the depth of brain activity of diffuse optical tomographic data

    NASA Astrophysics Data System (ADS)

    Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Mathewson, Kyle E.; Fabiani, Monica; Gratton, Gabriele

    2016-03-01

    Diffuse optical tomography (DOT) provides data about brain function using surface recordings. Despite recent advancements, an unbiased method for estimating the depth of absorption changes and for providing an accurate three-dimensional (3-D) reconstruction remains elusive. DOT involves solving an ill-posed inverse problem, requiring additional criteria for finding unique solutions. The most commonly used criterion is energy minimization (energy constraint). However, as measurements are taken from only one side of the medium (the scalp) and sensitivity is greater at shallow depths, the energy constraint leads to solutions that tend to be small and superficial. To correct for this bias, we combine the energy constraint with another criterion, minimization of spatial derivatives (Laplacian constraint, also used in low resolution electromagnetic tomography, LORETA). Used in isolation, the Laplacian constraint leads to solutions that tend to be large and deep. Using simulated, phantom, and actual brain activation data, we show that combining these two criteria results in accurate (error <2 mm) absorption depth estimates, while maintaining a two-point spatial resolution of <24 mm up to a depth of 30 mm. This indicates that accurate 3-D reconstruction of brain activity up to 30 mm from the scalp can be obtained with DOT.
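
    As a hedged sketch of combining an energy (minimum-norm) penalty with a Laplacian (spatial-derivative) penalty in a linear reconstruction (a 1-D toy illustrating the combined criterion, not the authors' DOT forward model or weights), the snippet below solves the corresponding regularized normal equations; the sensitivity matrix, the true anomaly, and the weights alpha and beta are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      n, m = 80, 40
      A = np.exp(-np.abs(np.arange(m)[:, None] / 2.0 - np.arange(n)[None, :] / 4.0))  # toy sensitivity matrix
      x_true = np.zeros(n)
      x_true[30:40] = 1.0                      # localized "absorption change"
      y = A @ x_true + 0.01 * rng.normal(size=m)

      # Discrete 1-D Laplacian operator.
      L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

      def reconstruct(alpha, beta):
          """Minimize ||Ax - y||^2 + alpha*||x||^2 + beta*||Lx||^2 (energy plus Laplacian)."""
          lhs = A.T @ A + alpha * np.eye(n) + beta * (L.T @ L)
          return np.linalg.solve(lhs, A.T @ y)

      x_energy = reconstruct(alpha=1e-2, beta=0.0)     # energy constraint only
      x_combined = reconstruct(alpha=1e-3, beta=1e-1)  # combined criterion
      print(np.round(x_combined[28:42], 2))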

  11. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
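
    As a hedged sketch of Tikhonov regularization expressed through the SVD (generic filter factors and the associated model resolution matrix, not the empirical-Bayes risk minimization of the study), the snippet below regularizes a toy ill-conditioned linear system; the matrix, noise level, and damping value alpha are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy ill-conditioned sensitivity matrix G and noisy data d.
      G = rng.normal(size=(60, 40)) @ np.diag(1.0 / np.arange(1, 41) ** 2) @ rng.normal(size=(40, 40))
      m_true = np.sin(np.linspace(0.0, 3.0 * np.pi, 40))
      d = G @ m_true + 1e-3 * rng.normal(size=60)

      U, s, Vt = np.linalg.svd(G, full_matrices=False)

      def tikhonov_svd(alpha):
          """Tikhonov solution and resolution matrix via SVD filter factors f_i = s_i^2/(s_i^2 + alpha^2)."""
          f = s**2 / (s**2 + alpha**2)
          m_est = Vt.T @ ((f / s) * (U.T @ d))   # sum_i f_i (u_i . d) / s_i * v_i
          R = (Vt.T * f) @ Vt                     # model resolution matrix V diag(f) V^T
          return m_est, R

      m_est, R = tikhonov_svd(alpha=1e-2)
      print(np.round(np.trace(R), 1), float(np.linalg.norm(m_est - m_true)))

    The trace of R gives the effective number of resolved parameters, a quantity often inspected alongside the misfit when choosing the damping.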

  12. Three-dimensional full-field X-ray orientation microscopy

    PubMed Central

    Viganò, Nicola; Tanguy, Alexandre; Hallais, Simon; Dimanov, Alexandre; Bornert, Michel; Batenburg, Kees Joost; Ludwig, Wolfgang

    2016-01-01

    A previously introduced mathematical framework for full-field X-ray orientation microscopy is for the first time applied to experimental near-field diffraction data acquired from a polycrystalline sample. Grain by grain tomographic reconstructions using convex optimization and prior knowledge are carried out in a six-dimensional representation of position-orientation space, used for modelling the inverse problem of X-ray orientation imaging. From the 6D reconstruction output we derive 3D orientation maps, which are then assembled into a common sample volume. The obtained 3D orientation map is compared to an EBSD surface map and local misorientations, as well as remaining discrepancies in grain boundary positions are quantified. The new approach replaces the single orientation reconstruction scheme behind X-ray diffraction contrast tomography and extends the applicability of this diffraction imaging technique to material micro-structures exhibiting sub-grains and/or intra-granular orientation spreads of up to a few degrees. As demonstrated on textured sub-regions of the sample, the new framework can be extended to operate on experimental raw data, thereby bypassing the concept of orientation indexation based on diffraction spot peak positions. This new method enables fast, three-dimensional characterization with isotropic spatial resolution, suitable for time-lapse observations of grain microstructures evolving as a function of applied strain or temperature. PMID:26868303

  13. Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method

    DOE PAGES

    Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...

    2017-11-20

    Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as in providing reliable guidance to adjust the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve, as it requires finding an optimum solution within a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from the local-minimum trap. Alternatively, stochastic optimizations are in general better at finding global optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
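
    As a hedged, generic sketch of Hybrid (Hamiltonian) Monte Carlo applied to a toy linear inverse problem (leapfrog integration plus a Metropolis correction; the geosteering forward model, priors, and tuning of the article are not reproduced), consider the following; the operator, noise level, step size, and trajectory length are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy linear forward model d = G m with Gaussian noise.
      G = rng.normal(size=(20, 3))
      m_true = np.array([1.0, -2.0, 0.5])
      sigma = 0.1
      d_obs = G @ m_true + sigma * rng.normal(size=20)

      def neg_log_post(m):
          """Negative log-posterior: Gaussian likelihood plus a broad Gaussian prior."""
          r = (d_obs - G @ m) / sigma
          return 0.5 * np.dot(r, r) + 0.5 * 1e-2 * np.dot(m, m)

      def grad_neg_log_post(m):
          return -G.T @ (d_obs - G @ m) / sigma**2 + 1e-2 * m

      def hmc(n_samples=2000, eps=0.01, n_leap=20):
          """Hybrid Monte Carlo with leapfrog integration and a Metropolis accept/reject step."""
          m, samples = np.zeros(3), []
          for _ in range(n_samples):
              p = rng.normal(size=3)                           # resample momentum
              m_new, p_new = m.copy(), p.copy()
              p_new -= 0.5 * eps * grad_neg_log_post(m_new)    # initial half step in momentum
              for _ in range(n_leap):
                  m_new += eps * p_new                         # full step in position
                  p_new -= eps * grad_neg_log_post(m_new)      # full step in momentum
              p_new += 0.5 * eps * grad_neg_log_post(m_new)    # correct last momentum update to a half step
              h_old = neg_log_post(m) + 0.5 * np.dot(p, p)
              h_new = neg_log_post(m_new) + 0.5 * np.dot(p_new, p_new)
              if np.log(rng.uniform()) < h_old - h_new:        # Metropolis accept/reject
                  m = m_new
              samples.append(m.copy())
          return np.array(samples)

      chain = hmc()
      print(np.round(chain[500:].mean(axis=0), 2))             # posterior mean after burn-in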

  14. SXR measurement and W transport survey using GEM tomographic system on WEST

    NASA Astrophysics Data System (ADS)

    Mazon, D.; Jardin, A.; Malard, P.; Chernyshova, M.; Coston, C.; O'Mullane, M.; Czarski, T.; Malinowski, K.; Faisse, F.; Ferlay, F.; Verger, J. M.; Bec, A.; Larroque, S.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.

    2017-11-01

    Measuring soft X-ray (SXR) radiation (0.1-20 keV) from fusion plasmas is a standard way of accessing valuable information on particle transport. Since heavy impurities like tungsten (W) can degrade plasma core performance and cause radiative collapses, it is necessary to develop new diagnostics able to monitor the impurity distribution in harsh fusion environments like ITER. A gaseous detector with energy discrimination would be a very good candidate for this purpose. The design and implementation of a new SXR diagnostic developed for the WEST project, based on a triple Gas Electron Multiplier (GEM) detector, is presented. This detector works in photon counting mode and offers energy discrimination capabilities. The SXR system is composed of two 1D cameras (vertical and horizontal views, respectively) located in the same poloidal cross-section to allow for tomographic reconstruction. An array (20 cm × 2 cm) consists of up to 128 detectors in front of a beryllium pinhole (equipped with a 1 mm diameter diaphragm) inserted at about 50 cm depth inside a cooled thimble in order to retrieve a wide plasma view. Acquisition of the low-energy spectrum is ensured by a helium buffer installed between the pinhole and the detector. Complementary water cooling systems are used to maintain a constant temperature (25 °C) inside the thimble. Finally, a real-time automatic extraction system has been developed to protect the diagnostic during baking phases or any unwanted overheating events. Preliminary simulations of plasma emissivity and W distribution have been performed for WEST using a recently developed synthetic diagnostic coupled to a tomographic algorithm based on the minimum Fisher information (MFI) inversion method. First GEM acquisitions are presented, as well as an estimate of the effect of transport in the presence of ICRH on the W density reconstruction capabilities of the GEM.

  15. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
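
    As a hedged sketch of the multistart logic behind MLSL (sampled start points, a single-linkage rule that skips points close to better samples, and a local derivative-free search; scipy's Nelder-Mead is used here as a stand-in for MADS), the snippet below searches for the multiple minima of a toy misfit with two equivalent solutions; the objective, linkage radius, and tolerances are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      def misfit(x):
          """Toy inverse-problem misfit with two symmetric global minima at x = (+/-1, 0)."""
          return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2

      def multistart_local_search(n_points=40, link_radius=0.3, tol=1e-6):
          """MLSL-flavoured global search: sample start points, skip those linked to a better
          nearby sample, run a derivative-free local search from the rest, and collect the
          distinct minima that are found."""
          samples = rng.uniform(-2.0, 2.0, size=(n_points, 2))
          values = np.array([misfit(s) for s in samples])
          minima = []
          for i in np.argsort(values):
              s, better = samples[i], values < values[i]
              if better.any() and np.linalg.norm(samples[better] - s, axis=1).min() < link_radius:
                  continue                               # linked to a better nearby point
              res = minimize(misfit, s, method="Nelder-Mead")
              if res.fun < tol and all(np.linalg.norm(res.x - m) > 1e-2 for m in minima):
                  minima.append(res.x)
          return minima

      for m in multistart_local_search():
          print(np.round(m, 3))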

  16. Frnakenstein: multiple target inverse RNA folding.

    PubMed

    Lyngsø, Rune B; Anderson, James W J; Sizikova, Elena; Badugu, Amarendra; Hyland, Tomas; Hein, Jotun

    2012-10-09

    RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three-dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem has an elegant and efficient algorithmic solution, the inverse RNA folding problem appears to be hard. In this paper we present a genetic algorithm approach to solving the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem to multiple targets, while simultaneously designing a method with superior performance when measured on the quality of the designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods on several data sets totalling 769 real and predicted single-structure targets and 292 two-structure targets. It performed as well as or better than all existing methods at finding sequences that folded in silico into the target structure, without the heavy bias towards CG base pairs that was observed for all other top-performing methods. On the two-structure targets it also performed well, generating a perfect design for about 80% of the targets. Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions. The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two-structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein.

  17. Frnakenstein: multiple target inverse RNA folding

    PubMed Central

    2012-01-01

    Background RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three-dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem has an elegant and efficient algorithmic solution, the inverse RNA folding problem appears to be hard. Results In this paper we present a genetic algorithm approach to solving the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem to multiple targets, while simultaneously designing a method with superior performance when measured on the quality of the designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods on several data sets totalling 769 real and predicted single-structure targets and 292 two-structure targets. It performed as well as or better than all existing methods at finding sequences that folded in silico into the target structure, without the heavy bias towards CG base pairs that was observed for all other top-performing methods. On the two-structure targets it also performed well, generating a perfect design for about 80% of the targets. Conclusions Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions. The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two-structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein. PMID:23043260

  18. Surface-Wave Tomographic Studies of the Hudson Bay Lithosphere: Implications for Paleoproterozoic Tectonic Processes and the Assembly of the Canadian Shield

    NASA Astrophysics Data System (ADS)

    Darbyshire, F. A.

    2015-12-01

    Hudson Bay is a shallow intracratonic basin that partially conceals the Trans-Hudson Orogen (THO) in northern Canada. The THO is thought to be a Himalayan-scale Paleoproterozoic orogenic event that was an important component of assembly of the Canadian Shield, marking the collision of the Archean Superior and Western Churchill plates. Until recently, only global and continental-scale seismic tomographic models had imaged the upper-mantle structure of the region, giving a broad but relatively low-resolution picture of the thick lithospheric keel. The Hudson Bay Lithospheric Experiment (HuBLE) investigated the present-day seismic structure beneath Hudson Bay and its surroundings, using a distributed broadband seismograph network installed around the periphery of the Bay and complemented by existing permanent and temporary seismographs further afield. This configuration, though not optimal for body-wave studies which use subvertical arrivals, is well-suited to surface wave tomographic techniques, with many paths crossing the Bay. As there is little seismicity in the region around the Canadian Shield, two-station measurements of teleseismic Rayleigh wave phase velocity formed the principal data set for lithospheric studies. The interstation measurements were combined in a linearized tomographic inversion for maps of phase velocity and azimuthal anisotropy at periods of 20-200 s; these maps were then used to calculate a pseudo-3D anisotropic upper-mantle shear-wavespeed model of the region. The model shows thick (~180-260 km), seismically fast lithosphere across the Hudson Bay region, with a near-vertical 'curtain' of lower wavespeeds trending NE-SW across the Bay, likely associated with more juvenile material trapped between the Archean Superior and Churchill continental cores during the THO. The lithosphere is layered, suggesting a 2-stage formation process. Seismic anisotropy patterns vary with depth; a circular pattern in the uppermost mantle wrapping around the Hudson Bay basin is superseded in the lower lithosphere by a pattern that mirrors THO-related structures within the crust; the lower layer thus likely formed when stress patterns related to the THO were still active.

  19. Sparsity-driven tomographic reconstruction of atmospheric water vapor using GNSS and InSAR observations

    NASA Astrophysics Data System (ADS)

    Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan

    2016-04-01

    An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. On the other hand, as water vapor causes a delay in microwave signal propagation within the atmosphere, a precise determination of water vapor is required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). However, due to its high variability in time and space, the atmospheric water vapor distribution is difficult to model. Since GNSS meteorology was introduced about twenty years ago, it has increasingly been used as a geodetic technique to generate maps of 2D Precipitable Water Vapor (PWV). Moreover, several approaches for 3D tomographic water vapor reconstruction from GNSS-based estimates using simple least squares adjustment have been presented. In this poster, we present an innovative Compressive Sensing (CS) concept for sparsity-driven tomographic reconstruction of 3D atmospheric wet refractivity fields using data from GNSS and InSAR. The 2D zenith wet delay (ZWD) estimates are obtained by a combination of point-wise estimates of the wet delay from GNSS observations and partial InSAR wet delay maps. These ZWD estimates are aggregated to derive realistic wet delay input data at 100 points, as if corresponding to 100 GNSS sites within an area of 100 km × 100 km in the test region of the Upper Rhine Graben. The synthetic ZWD values can be mapped into different elevation and azimuth angles. Using the cosine transform, a sparse representation of the wet refractivity field is obtained. In contrast to existing tomographic approaches, we exploit sparsity as a prior for the regularization of the underdetermined inverse system. The new aspects of this work are both the combination of GNSS and InSAR data for water vapor tomography and the sparsity-based CS estimation. The accuracy of the estimated 3D water vapor field is determined by comparing slant integrated wet delays computed from the estimated wet refractivities with real GNSS wet delay estimates. This comparison is performed along different elevation and azimuth angles.
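
    As a hedged 1-D illustration of the sparsity-driven reconstruction concept (recovering a profile that is sparse in the cosine-transform domain from a few linear measurements with iterative soft thresholding; not the GNSS/InSAR tomography system of the poster), the snippet below is a minimal sketch; the measurement operator, sparsity pattern, and regularization weight are assumptions.

      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(0)

      n, m = 128, 40
      # Signal that is sparse in the DCT domain (a few smooth cosine components).
      coeffs_true = np.zeros(n)
      coeffs_true[[1, 5, 12]] = [3.0, -2.0, 1.0]
      x_true = idct(coeffs_true, norm="ortho")

      # Underdetermined random measurement operator (stand-in for slant-delay geometry).
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      y = A @ x_true + 0.01 * rng.normal(size=m)

      def ista_dct(y, A, lam=0.02, n_iter=500):
          """ISTA on DCT coefficients c: min 0.5*||A idct(c) - y||^2 + lam*||c||_1."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2
          c = np.zeros(A.shape[1])
          for _ in range(n_iter):
              residual = A @ idct(c, norm="ortho") - y
              c = c - step * dct(A.T @ residual, norm="ortho")   # gradient step via the orthonormal DCT
              c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)   # soft threshold
          return idct(c, norm="ortho"), c

      x_rec, c_rec = ista_dct(y, A)
      print(np.flatnonzero(np.abs(c_rec) > 0.5), float(np.linalg.norm(x_rec - x_true)))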

  20. First Calderón Prize

    NASA Astrophysics Data System (ADS)

    Rundell, William; Somersalo, Erkki

    2008-07-01

    The Inverse Problems International Association (IPIA) awarded the first Calderón Prize to Matti Lassas for his outstanding contributions to the field of inverse problems, especially in geometric inverse problems. The Calderón Prize is given to a researcher under the age of 40 who has made distinguished contributions to the field of inverse problems broadly defined. The first Calderón Prize Committee consisted of Professors Adrian Nachman, Lassi Päivärinta, William Rundell (chair), and Michael Vogelius. William Rundell For the Calderón Prize Committee Prize ceremony The ceremony awarding the Calderón Prize. Matti Lassas is on the left. He and William Rundell are on the right. Photos by P Stefanov. Brief Biography of Matti Lassas Matti Lassas was born in 1969 in Helsinki, Finland, and studied at the University of Helsinki. He finished his Master's studies in 1992 in three years and earned his PhD in 1996. His PhD thesis, written under the supervision of Professor Erkki Somersalo, was entitled `Non-selfadjoint inverse spectral problems and their applications to random bodies'. Already in his thesis, Matti demonstrated a remarkable command of different fields of mathematics, bringing together the spectral theory of operators, the geometry of Riemannian surfaces, Maxwell's equations and stochastic analysis. He has continued to develop all of these branches in the framework of inverse problems, the most remarkable results perhaps being in the field of differential geometry and inverse problems. Matti has always been a very generous researcher, sharing his ideas with his numerous collaborators. He has authored over sixty scientific articles, among which are a monograph on inverse boundary spectral problems with Alexander Kachalov and Yaroslav Kurylev and over forty articles in peer-reviewed journals of the highest standards. To get an idea of the wide range of Matti's interests, it is enough to say that he also has three US patents on medical imaging applications. Matti is currently professor of mathematics at Helsinki University of Technology, where he has created his own line of research with young talented researchers around him. He is a central person in the Centre of Excellence in Inverse Problems Research of the Academy of Finland. Previously, Matti Lassas has won several awards in his home country, including the prestigious Väisälä Prize of the Finnish Academy of Science and Letters in 2004. He is a highly esteemed colleague, teacher and friend, and the Great Diving Beetle of the Finnish Inverse Problems Society (http://venda.uku.fi/research/FIPS/), an honorary title for a person who has no fear of the deep. Erkki Somersalo

  1. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  2. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.

  3. PREFACE: Inverse Problems in Applied Sciences—towards breakthrough

    NASA Astrophysics Data System (ADS)

    Cheng, Jin; Iso, Yuusuke; Nakamura, Gen; Yamamoto, Masahiro

    2007-06-01

    These are the proceedings of the international conference `Inverse Problems in Applied Sciences—towards breakthrough' which was held at Hokkaido University, Sapporo, Japan on 3-7 July 2006 (http://coe.math.sci.hokudai.ac.jp/sympo/inverse/). There were 88 presentations and more than 100 participants, and we are proud to say that the conference was very successful. Nowadays, many new activities on inverse problems are flourishing at many centers of research around the world, and the conference has successfully gathered a world-wide variety of researchers. We believe that this volume contains not only main papers, but also conveys the general status of current research into inverse problems. This conference was the third biennial international conference on inverse problems, the core of which is the Pan-Pacific Asian area. The purpose of this series of conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries, and to lead the organization of activities concerning inverse problems centered in East Asia. The first conference was held at City University of Hong Kong in January 2002 and the second was held at Fudan University in June 2004. Following the preceding two successes, the third conference was organized in order to extend the scope of activities and build useful bridges to the next conference in Seoul in 2008. Therefore this third biennial conference was intended not only to establish collaboration and links between researchers in Asia and leading researchers worldwide in inverse problems but also to nurture interdisciplinary collaboration in theoretical fields such as mathematics, applied fields and evolving aspects of inverse problems. For these purposes, we organized tutorial lectures, serial lectures and a panel discussion as well as conference research presentations. This volume contains three lecture notes from the tutorial and serial lectures, and 22 papers. Especially at this flourishing time, it is necessary to carefully analyse the current status of inverse problems for further development. Thus we have opened with the panel discussion entitled `Future of Inverse Problems' with panelists: Professors J Cheng, H W Engl, V Isakov, R Kress, J-K Seo, G Uhlmann and the commentator: Elaine Longden-Chapman from IOP Publishing. The aims of the panel discussion were to examine the current research status from various viewpoints, to discuss how we can overcome any difficulties and how we can promote young researchers and open new possibilities for inverse problems such as industrial linkages. As one output, the panel discussion has triggered the organization of the Inverse Problems International Association (IPIA) which has led to its first international congress in the summer of 2007. Another remarkable outcome of the conference is, of course, the present volume: this is the very high quality online proceedings volume of Journal of Physics: Conference Series. Readers can see in these proceedings very well written tutorial lecture notes, and very high quality original research and review papers all of which show what was achieved by the time the conference was held. The electronic publication of the proceedings is a new way of publicizing the achievement of the conference. It has the advantage of wide circulation and cost reduction. We believe this is a most efficient method for our needs and purposes. We would like to take this opportunity to acknowledge all the people who helped to organize the conference. 
Guest Editors Jin Cheng, Fudan University, Shanghai, China Yuusuke Iso, Kyoto University, Kyoto, Japan Gen Nakamura, Hokkaido University, Sapporo, Japan Masahiro Yamamoto, University of Tokyo, Tokyo, Japan

  4. A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.

    PubMed

    van Dongen, Koen W A; Wright, William M D

    2006-10-01

    Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotical limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.

  5. A two-dimensional analysis of the sensitivity of a pulse first break to wave speed contrast on a scale below the resolution length of ray tomography.

    PubMed

    Willey, Carson L; Simonetti, Francesco

    2016-06-01

    Mapping the speed of mechanical waves traveling inside a medium is a topic of great interest across many fields from geoscience to medical diagnostics. Much work has been done to characterize the fidelity with which the geometrical features of the medium can be reconstructed and multiple resolution criteria have been proposed depending on the wave-matter interaction model used to decode the wave speed map from scattering measurements. However, these criteria do not define the accuracy with which the wave speed values can be reconstructed. Using two-dimensional simulations, it is shown that the first-arrival traveltime predicted by ray theory can be an accurate representation of the arrival of a pulse first break even in the presence of diffraction and other phenomena that are not accounted for by ray theory. As a result, ray-based tomographic inversions can yield accurate wave speed estimations also when the size of a sound speed anomaly is smaller than the resolution length of the inversion method provided that traveltimes are estimated from the signal first break. This increased sensitivity however renders the inversion more susceptible to noise since the amplitude of the signal around the first break is typically low especially when three-dimensional anomalies are considered.

  6. Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3

    NASA Astrophysics Data System (ADS)

    Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.

    2007-05-01

    In this paper the first results of ionospheric tomographic inversion are presented, using the Improved Abel transform on data from the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this application, the classical approach to the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies with height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is constant over the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in problematic areas such as the Equatorial region) can significantly affect the electron density profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of the technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function, where the shape function carries all the height dependency and the VTEC data carry the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on height and to use VTEC information to account for the horizontal variation, rather than assuming spherical symmetry of the electron density function as in the classical approach to the Abel inversion. Since the improved Abel inversion technique has already been tested and proven to be a useful tool for obtaining a vertical description of the ionospheric electron density (see García-Fernández et al. 2003), a natural next step is to extend the use of this technique to the recently available COSMIC data. The COSMIC satellite constellation, formed by 6 micro-satellites, has been deployed since April 2006 in circular orbits around the Earth, with a final altitude of about 700-800 kilometers. Its global and almost uniform coverage will overcome one of the main limitations of this technique, namely the sparsity of data related to the lack of GPS receivers in some regions. This can significantly stimulate the development of radio occultation techniques, with the huge volume of data provided by the COSMIC constellation to be processed and analysed, updating the current knowledge of the ionosphere's nature and behaviour. In this context a summary of the improved Abel transform inversion technique and the first results based on COSMIC constellation data will be presented. Moreover, future improvements, taking into account the higher temporal and global spatial coverage, will be discussed. References: M. Hernández-Pajares, J. M. Juan and J. Sanz, Improving the Abel inversion by adding ground GPS data to LEO radio occultations in ionospheric sounding, Geophysical Research Letters, Vol. 27, No. 16, pages 2473-2476, August 15, 2000. M. García-Fernández, M. Hernández-Pajares, M. Juan, and J. Sanz, Improvement of ionospheric electron density estimation with GPSMET occultations using Abel inversion and VTEC information, Journal of Geophysical Research, Vol. 108, No. A9, 1338, doi:10.1029/2003JA009952, 2003.
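    The separability assumption at the core of the improved inversion can be illustrated with a small synthetic sketch: the electron density is modeled as Ne(h, lat, lon) ≈ VTEC(lat, lon) × F(h), so each STEC observation becomes a linear functional of the discretized shape function F, which can then be retrieved by least squares. The geometry, VTEC map, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the separability idea behind the improved Abel inversion:
# Ne(h, lat, lon) ~= VTEC(lat, lon) * F(h), where F is a shape function.
# All names and the synthetic geometry are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_h = 40                                          # height bins for the shape function
h_edges = np.linspace(100e3, 800e3, n_h + 1)      # bin edges (m)

def vtec_map(lat, lon):
    """Assumed smooth VTEC field (TECU) carrying the horizontal dependency."""
    return 20.0 + 10.0 * np.cos(np.radians(lat)) + 2.0 * np.sin(np.radians(lon))

# Each STEC observation is a weighted sum of F over the ray's height bins,
# with weights = segment length * local VTEC.
n_obs, n_seg, seg_len = 200, 60, 10e3
A = np.zeros((n_obs, n_h))
h_mid = 0.5 * (h_edges[:-1] + h_edges[1:])
true_F = np.exp(-((h_mid - 350e3) / 120e3) ** 2)  # synthetic shape function
for i in range(n_obs):
    lats = rng.uniform(-20, 20, n_seg)            # segment locations along the ray
    lons = rng.uniform(-40, 40, n_seg)
    hs = rng.uniform(120e3, 780e3, n_seg)         # segment heights
    idx = np.digitize(hs, h_edges) - 1
    np.add.at(A[i], idx, seg_len * vtec_map(lats, lons))
stec = A @ true_F + rng.normal(0, 1e3, n_obs)     # synthetic noisy STEC data

# Least-squares retrieval of the shape function (height dependency only).
F_hat, *_ = np.linalg.lstsq(A, stec, rcond=None)
print("relative error:", np.linalg.norm(F_hat - true_F) / np.linalg.norm(true_F))
```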

  7. Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale

    NASA Astrophysics Data System (ADS)

    Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.

    2005-12-01

    Testing geodynamical models is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modeling results at upper crustal, lithospheric and upper mantle scale using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scales (e.g. the Gulf of Corinth), the resolution of the numerical models is usually sufficient to capture the timing and kinematics of the faults precisely enough to be tested against tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and the P and T axes of focal mechanisms can be compared with the local orientation of the major components of the stress tensor. At lithospheric scale, the resolution of the models no longer permits constraining them by direct observations (i.e. structural data from the field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of exhumation rates for ancient orogens. Topography may also help, but on continents it depends mainly on erosion laws that are difficult to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic "data". The major problem to overcome at lithospheric and upper mantle scale is that these so-called "data" actually result from inverse models of the real data, and that those inverse models are based on synthetic models. Post-processing P and S wave velocities is not sufficient to make testable predictions at upper mantle scale. Instead, direct wave propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not. On the longer term, we may be able to use those synthetic models to reduce the residual in the inversion of elastic wave arrival times.

  8. Big Data and High-Performance Computing in Global Seismology

    NASA Astrophysics Data System (ADS)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which mainly stem from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data formats for seismograms and for the outputs of our 3D solvers (i.e., meshes, kernels, seismic models, etc.) based on ORNL's ADIOS libraries. We will discuss our global adjoint tomography workflow on HPC systems as well as the current status of our global inversions.

  9. Using seismically constrained magnetotelluric inversion to recover velocity structure in the shallow lithosphere

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Fishwick, S.; Jones, A. G.

    2015-12-01

    Typical surface wave tomography can recover the velocity structure of the upper mantle well in the depth range between 70-200 km. For a successful inversion, we have to constrain the crustal structure and assess its impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved, and their interpretation therefore has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and with petrophysical predictions for the conductivity-velocity relationship.

  10. Surface wave tomography of Europe from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne

    2017-04-01

    We present a European scale high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), which yields more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh wave group velocities are measured for each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s. Adaptive parameterization is used to accommodate heterogeneity in data coverage. We then apply a two-step data-driven inversion method to obtain the shear wave velocity model. The two steps refer to a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth and its uncertainty are determined over most of our study region by identifying sharp velocity discontinuities and analysing their sharpness. The resulting velocity model shows good agreement with the main geological features and previous geophysical studies. Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thinning that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.

  11. Solvability of the electrocardiology inverse problem for a moving dipole.

    PubMed

    Tolkachev, V; Bershadsky, B; Nemirko, A

    1993-01-01

    New formulations of the direct and inverse problems for the moving dipole are offered. It is suggested to limit the study to a small area on the chest surface, which lowers the role of medium inhomogeneity. When formulating the direct problem, irregular components are considered. An algorithm for the simultaneous determination of the dipole and regular noise parameters is described and analytically investigated. It is shown that temporal overdetermination of the equations yields a unique solution of the inverse problem for the four leads.

  12. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
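    The following sketch illustrates the general idea of solving the damped Levenberg-Marquardt subproblem with a Krylov method (LSQR) using only Jacobian matrix-vector products, rather than a dense QR or SVD factorization; the paper's Julia/MADS implementation and its subspace recycling across damping parameters are not reproduced. The toy exponential-fit problem and the fixed damping schedule are assumptions for illustration.

```python
# One Levenberg-Marquardt step solved with LSQR's built-in damping:
# min ||J d + r||^2 + lam ||d||^2, with J accessed only via mat-vec products.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

t = np.linspace(0.0, 5.0, 50)
y_obs = 2.0 * np.exp(-1.3 * t) + np.random.default_rng(1).normal(0, 0.01, t.size)

def residual(x):
    a, b = x
    return a * np.exp(-b * t) - y_obs

def jac_products(x):
    """Return matvec/rmatvec closures for the Jacobian of a*exp(-b*t)."""
    a, b = x
    e = np.exp(-b * t)
    cols = np.column_stack([e, -a * t * e])      # small toy case, dense columns
    return (lambda d: cols @ d), (lambda y: cols.T @ y)

def lm_step(x, lam):
    mv, rmv = jac_products(x)
    J = LinearOperator((t.size, x.size), matvec=mv, rmatvec=rmv)
    d = lsqr(J, -residual(x), damp=np.sqrt(lam), atol=1e-10, btol=1e-10)[0]
    return x + d

x = np.array([1.0, 1.0])
for lam in np.geomspace(1.0, 1e-4, 10):          # crude fixed damping schedule
    x = lm_step(x, lam)
print("estimated (a, b):", x)                    # should approach (2.0, 1.3)
```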

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  14. MAP Estimators for Piecewise Continuous Inversion

    DTIC Science & Technology

    2016-08-08

    MAP estimators for piecewise continuous inversion. M. M. Dunlop and A. M. Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP...

  15. Time-domain full waveform inversion using instantaneous phase information with damping

    NASA Astrophysics Data System (ADS)

    Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun

    2018-06-01

    In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. The instantaneous phase information has great potential for overcoming the local minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation, prevents its direct application. In order to avoid the phase wrapping problem, we choose to use the exponential phase combined with a damping method, which gives an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase, and also derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared with numerical examples, which indicate that, when low-frequency information is absent from the seismic data, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
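    A minimal sketch of the main ingredient is given below: the exponential instantaneous phase exp(i*phi(t)) of a damped trace is obtained from the analytic signal computed with a Hilbert transform, which avoids explicit (wrapped) phase values. The damping factor, toy wavelet, and misfit form are illustrative assumptions; the paper's objective functions and gradient operators are not reproduced.

```python
# Exponential instantaneous phase of a damped trace via the Hilbert transform,
# and a simple wrap-free misfit between observed and synthetic traces.
import numpy as np
from scipy.signal import hilbert

def exp_instantaneous_phase(trace, dt, damping=2.0):
    """Return exp(i*phi(t)) of the damped trace; damping tapers late arrivals."""
    t = np.arange(trace.size) * dt
    damped = trace * np.exp(-damping * t)          # time-domain damping
    analytic = hilbert(damped)                     # complex analytic signal
    # exp(i*phi) = analytic / |analytic|; no explicit (wrapped) phase is needed
    return analytic / (np.abs(analytic) + 1e-12)

def phase_misfit(obs, syn, dt):
    """L2 misfit between exponential instantaneous phases."""
    d = exp_instantaneous_phase(obs, dt) - exp_instantaneous_phase(syn, dt)
    return 0.5 * np.sum(np.abs(d) ** 2)

# Toy traces: the same Ricker-like wavelet with a small time shift.
dt, n = 0.002, 1000
t = np.arange(n) * dt
wavelet = lambda t0: (1 - 2 * (np.pi * 15 * (t - t0)) ** 2) * \
                     np.exp(-(np.pi * 15 * (t - t0)) ** 2)
print("misfit:", phase_misfit(wavelet(0.5), wavelet(0.52), dt))
```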

  16. Solutions to inverse plume in a crosswind problem using a predictor - corrector method

    NASA Astrophysics Data System (ADS)

    Vanderveer, Joseph; Jaluria, Yogesh

    2013-11-01

    An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.

  17. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.

  18. Small-scale convection beneath the transverse ranges, California: Implications for interpretation of gravity anomalies

    NASA Technical Reports Server (NTRS)

    Humphreys, E. D.; Hager, B. H.

    1985-01-01

    Tomographic inversion of upper mantle P wave velocity heterogeneities beneath southern California shows two prominent features: an east-west trending curtain of high velocity material (up to 3% fast) in the upper 250 km beneath the Transverse Ranges and a region of low velocity material (up to 4% slow) in the 100 km beneath the Salton Trough. These seismic velocity anomalies were interpreted as due to small scale convection in the mantle. Using this hypothesis, and assuming that temperature and density anomalies are linearly related to seismic velocity anomalies through standard coefficients of proportionality, leads to inferred variations of approximately ±300 °C and approximately ±0.03 g/cc.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, K W; Delgado-Aprico, L; Johnson, D

    Imaging XCS arrays are being developed as a US-ITER activity for Doppler measurement of Ti and v profiles of impurities (W, Kr, Fe) with ~7 cm (a/30) and 10-100 ms resolution in ITER. The imaging XCS, modeled after a PPPL-MIT instrument on Alcator C-Mod, uses a spherically bent crystal and 2D x-ray detectors to achieve high spectral resolving power (E/dE>6000) horizontally and spatial imaging vertically. Two arrays will measure Ti and both poloidal and toroidal rotation velocity profiles. Measurement of many spatial chords permits tomographic inversion for inference of local parameters. The instrument design, predictions of performance, and results from C-Mod will be presented.

  20. Acoustic Inversion in Optoacoustic Tomography: A Review

    PubMed Central

    Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel

    2013-01-01

    Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
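    As an illustration of the time-domain (back-projection) class of algorithms discussed in the review, the sketch below performs a simple delay-and-sum reconstruction for a circular detection geometry: each image point accumulates the detector signals evaluated at the corresponding acoustic time of flight. The solid-angle weights and derivative term of the exact universal back-projection formula are omitted, and all numbers are synthetic assumptions.

```python
# Delay-and-sum sketch in the spirit of time-domain back-projection for a
# circular detection geometry, with a synthetic point absorber.
import numpy as np

c = 1500.0                       # speed of sound (m/s)
dt = 1e-7                        # sampling interval (s)
n_det, n_t = 128, 2000
angles = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
det_xy = 0.02 * np.c_[np.cos(angles), np.sin(angles)]   # detectors on a 2 cm ring

# Synthetic signals from a point absorber at (5 mm, 0): a unit pulse arriving
# at the acoustic time of flight to each detector.
src = np.array([0.005, 0.0])
sig = np.zeros((n_det, n_t))
tof = np.linalg.norm(det_xy - src, axis=1) / c
sig[np.arange(n_det), np.round(tof / dt).astype(int)] = 1.0

# Delay-and-sum reconstruction on a 2D grid.
x = np.linspace(-0.01, 0.01, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
image = np.zeros_like(X)
for d in range(n_det):
    dist = np.hypot(X - det_xy[d, 0], Y - det_xy[d, 1])
    idx = np.clip(np.round(dist / (c * dt)).astype(int), 0, n_t - 1)
    image += sig[d, idx]                    # sum signal values at the delays
print("peak at grid point:", np.unravel_index(image.argmax(), image.shape))
```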

  1. A 3D Tomographic Model of Asia Based on Pn and P Travel Times from GT Events

    NASA Astrophysics Data System (ADS)

    Young, C. J.; Begnaud, M. L.; Ballard, S.; Phillips, W. S.; Hipp, J. R.; Steck, L. K.; Rowe, C. A.; Chang, M. C.

    2008-12-01

    Increasingly, nuclear explosion monitoring is focusing on detection, location, and identification of small events recorded at regional distances. Because Earth structure is highly variable on regional scales, locating events accurately at these distances requires the use of region-specific models to provide accurate travel times. Improved results have been achieved with composites of 1D models and with approximate 3D models with simplified upper mantle structures, but both approaches introduce non-physical boundaries that are problematic for operational monitoring use. Ultimately, what is needed is a true, seamless 3D model of the Earth. Towards that goal, we have developed a 3D tomographic model of the P velocity of the crust and mantle for the Asian continent. Our model is derived by an iterative least squares travel time inversion of more than one million Pn and teleseismic P picks from some 35,000 events recorded at 4,000+ stations. We invert for P velocities from the top of the crust to the core mantle boundary, along with source and receiver static time terms to account for the effects of event mislocation and unaccounted for fine-scale structure near the receiver. Because large portions of the model are under-constrained, we apply spatially varying damping, which constrains the inversion to update the starting model only where good data coverage is available. Our starting crustal model is taken from the a priori crust and upper mantle model of Asia developed through National Nuclear Security Administration laboratory collaboration, which is based on various global and regional studies, and we substantially increase the damping in the crust to discourage changes from this model. Our starting mantle model is AK135. To simplify the inversion, we fix the depths of the major mantle discontinuities (Moho, 410 km, 660 km). 3D rays are calculated using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of our ray tracer, we are forced to substantially damp the inversion in order to converge on a reasonable model. We apply both horizontal and vertical regularization to produce smooth models with velocity feature scale lengths that are consistent with established conventions for mantle velocity structure. To investigate the importance of using true 3D rays for the inversion, as opposed to proxy rays through a reference model, we compare our model and ray paths with the model and ray paths resulting from inverting the same data set using rays traced through a 1D reference model. Finally, we validate the model by performing several inversions with random portions of the data set omitted and then testing the predictive capability of the model against those portions compared with AK135. We test the location performance of the model by relocating the GT events using our model and using AK135. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04- 94AL85000.
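    The core numerical step of such tomographic inversions, a damped least-squares update of slowness perturbations with spatially varying damping, can be sketched as follows; the synthetic ray-path matrix, damping weights, and sizes are stand-ins and do not reproduce the authors' 3D ray tracer or model parameterization.

```python
# Damped least-squares travel-time update: solve G*dm = dt with per-cell
# damping that suppresses updates where ray coverage is poor.
import numpy as np
from scipy.sparse import random as sparse_random, diags, vstack
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_rays, n_cells = 5000, 800

# Sparse ray-path matrix: entry (i, j) = length of ray i inside cell j (km).
G = (sparse_random(n_rays, n_cells, density=0.02, random_state=0) * 50.0).tocsr()

dm_true = rng.normal(0, 1e-3, n_cells)                # slowness perturbation (s/km)
dt_obs = G @ dm_true + rng.normal(0, 0.05, n_rays)    # residuals with pick noise (s)

# Spatially varying damping: damp harder where few rays sample a cell.
hits = np.diff(G.tocsc().indptr)                      # number of rays per cell
damp = 10.0 / np.sqrt(hits + 1.0)

A = vstack([G, diags(damp)])                          # augmented damped system
b = np.concatenate([dt_obs, np.zeros(n_cells)])
dm = lsqr(A, b, atol=1e-8, btol=1e-8)[0]
print("correlation with true model:", np.corrcoef(dm, dm_true)[0, 1])
```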

  2. Review of the inverse scattering problem at fixed energy in quantum mechanics

    NASA Technical Reports Server (NTRS)

    Sabatier, P. C.

    1972-01-01

    Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments in which a beam of particles at a nonrelativistic energy is scattered by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system in terms of one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.

  3. Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gary D. Egbert

    2007-03-22

    The project covered by this report focused on the development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been the development of a modular system for EM inversion, using an object-oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems before approaching more computationally cumbersome three-dimensional problems.

  4. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  5. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  6. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
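    A minimal sketch of the nonlinear least-squares regression approach is shown below for a toy drawdown model: the regression returns best-fit parameter values and, via the Jacobian at the optimum, approximate confidence limits of the kind listed among the benefits above. The model, observation set-up, and all numbers are illustrative assumptions, not a groundwater code.

```python
# Nonlinear least-squares calibration of a toy steady drawdown model, with
# approximate parameter confidence limits derived from the Jacobian.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x_obs = np.linspace(10.0, 500.0, 12)                 # observation distances (m)

def heads(params, x):
    """Toy steady drawdown: s = (Q / (2*pi*T)) * log(R / x), with Q fixed."""
    T, R = params                                    # transmissivity, radius of influence
    return (0.01 / (2 * np.pi * T)) * np.log(R / x)

true = np.array([5e-3, 1000.0])
obs = heads(true, x_obs) + rng.normal(0, 0.002, x_obs.size)

res = least_squares(lambda p: heads(p, x_obs) - obs, x0=[1e-3, 500.0],
                    bounds=([1e-5, 100.0], [1.0, 5000.0]))

# Approximate parameter covariance from the Jacobian at the optimum.
dof = x_obs.size - res.x.size
s2 = 2 * res.cost / dof                              # residual variance
cov = s2 * np.linalg.inv(res.jac.T @ res.jac)
print("estimates:", res.x)
print("approx. 95% half-widths:", 1.96 * np.sqrt(np.diag(cov)))
```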

  7. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  8. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
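    The sketch below illustrates the general strategy on a toy problem: a neural network is trained on prior samples of an assumed "expensive" forward model, its modeling error is quantified on held-out samples and folded into the data covariance, and the fast surrogate is then used inside a Metropolis sampler. All settings, the two-parameter forward model, and the error treatment are simplified assumptions and do not reproduce the paper's crosshole GPR application.

```python
# Neural-network surrogate of a forward model used inside Metropolis sampling,
# with the surrogate's modeling error added to the data variance.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def forward(m):
    """Stand-in for an expensive forward solver."""
    return np.array([m[0] + m[1] ** 2, np.sin(m[0]) * m[1], m[0] * m[1]])

# Train the surrogate on prior samples.
M = rng.uniform(-2, 2, size=(2000, 2))
D = np.array([forward(m) for m in M])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(M, D)

# Quantify the surrogate's modeling error on held-out samples.
M_test = rng.uniform(-2, 2, size=(500, 2))
err = net.predict(M_test) - np.array([forward(m) for m in M_test])
sigma_model = err.std(axis=0)

# Metropolis sampling of a toy inverse problem using the fast surrogate.
m_true = np.array([0.7, -1.2])
sigma_d = 0.05
d_obs = forward(m_true) + rng.normal(0, sigma_d, 3)
sigma_tot2 = sigma_d ** 2 + sigma_model ** 2          # data + modeling error

def log_like(m):
    r = net.predict(m[None, :])[0] - d_obs
    return -0.5 * np.sum(r ** 2 / sigma_tot2)

m, samples, ll = np.zeros(2), [], None
ll = log_like(m)
for _ in range(10000):
    prop = m + rng.normal(0, 0.1, 2)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:          # Metropolis acceptance
        m, ll = prop, ll_prop
    samples.append(m.copy())
print("posterior mean:", np.mean(samples[2000:], axis=0), "true:", m_true)
```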

  9. The incomplete inverse and its applications to the linear least squares problem

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.

    1977-01-01

    A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It was proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and there are insufficient data to justify augmenting the model.

  10. Computing the Sensitivity Kernels for 2.5-D Seismic Waveform Inversion in Heterogeneous, Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-10-01

    2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are available for both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.

  11. Broadband Ground Motion Synthesis of the 1999 Turkey Earthquakes Based On: 3-D Velocity Inversion, Finite Difference Calculations and Empirical Green's Functions

    NASA Astrophysics Data System (ADS)

    Gok, R.; Kalafat, D.; Hutchings, L.

    2003-12-01

    We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D, Larsen 1998), and records available for use as empirical Green's functions. Ultimately our goal is to model the 1999 earthquakes from DC to 25 Hz and study fault rupture mechanics and kinematic rupture models. We performed the simultaneous inversion for hypocenter locations and three-dimensional P- and S-wave velocity structure of the Marmara Region using SIMULPS14, along with 2,500 events with more than eight P readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara than in the western Marmara region due to the denser ray coverage. We used the obtained velocity structure as input into the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long period waveforms (f < 0.5 Hz). We also obtained Mo, fc and individual station kappa values for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small to moderate size earthquakes (M < 4.0) to obtain empirical Green's functions (EGF) for the higher frequency range of ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source scaling relation (energy-moment) of these aftershocks. We have generated several scenarios constrained by a priori knowledge of the Izmit and Duzce rupture parameters to validate our prediction capability.

  12. High-resolution 3-D P-wave tomographic imaging of the shallow magmatic system of Erebus volcano, Antarctica

    NASA Astrophysics Data System (ADS)

    Zandomeneghi, D.; Aster, R. C.; Barclay, A. H.; Chaput, J. A.; Kyle, P. R.

    2011-12-01

    Erebus volcano (Ross Island), the most active volcano in Antarctica, is characterized by a persistent phonolitic lava lake at its summit and a wide range of seismic signals associated with its underlying long-lived magmatic system. The magmatic structure in a 3 by 3 km area around the summit has been imaged using high-quality data from a seismic tomographic experiment carried out during the 2008-2009 austral field season (Zandomeneghi et al., 2010). An array of 78 short period, 14 broadband, and 4 permanent Mount Erebus Volcano Observatory seismic stations and a program of 12 shots were used to model the velocity structure in the uppermost kilometer over the volcano conduit. P-wave travel times were inverted for the 3-D velocity structure using the shortest-time ray tracing (50-m grid spacing) and LSQR inversion (100-m node spacing) of a tomography code (Toomey et al., 1994) that allows for the inclusion of topography. Regularization is controlled by damping and smoothing weights and smoothing lengths, and addresses complications that are inherent in a strongly heterogeneous medium featuring rough topography and a dense parameterization and distribution of receivers/sources. The tomography reveals a composite distribution of very high and low P-wave velocity anomalies (i.e., exceeding 20% in some regions), indicating a complex sub-lava-lake magmatic geometry immediately beneath the summit region and in surrounding areas, as well as the presence of significant high velocity shallow regions. The strongest and broadest low velocity zone is located W-NW of the crater rim, indicating the presence of an off-axis shallow magma body. This feature spatially corresponds to the inferred centroid source of VLP signals associated with Strombolian eruptions and lava lake refill (Aster et al., 2008). Other resolved structures correlate with the Side Crater and with lineaments of ice cave thermal anomalies extending NE and SW of the rim. High velocities in the summit area possibly constitute the seismic image of an older caldera, solidified intrusions or massive lava flows. REFERENCES: Aster et al., (2008) Moment tensor inversion of very long period seismic signals from Strombolian eruptions of Erebus volcano. J. Volcanol. Geotherm. Res., 177, 635-647. Toomey et al., (1994), Tomographic imaging of the shallow crustal structure of the East Pacific Rise at 9°30'N. J. Geophys. Res., 99 (B12), 24,135-24,157. Zandomeneghi et al., (2010), Seismic Tomography of Erebus Volcano, Antarctica, Eos, 91, 6, 53-55.

  13. Analytic semigroups: Applications to inverse problems for flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rebnord, D. A.

    1990-01-01

    Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.

  14. Structure of the European upper mantle revealed by adjoint tomography

    NASA Astrophysics Data System (ADS)

    Zhu, Hejun; Bozdağ, Ebru; Peter, Daniel; Tromp, Jeroen

    2012-07-01

    Images of the European crust and upper mantle, created using seismic tomography, identify the Cenozoic Rift System and related volcanism in central and western Europe. They also reveal subduction and slab roll back in the Mediterranean-Carpathian region. However, existing tomographic models are either high in resolution, but cover only a limited area, or low in resolution, and thus miss the finer-scale details of mantle structure. Here we simultaneously fit frequency-dependent phase anomalies of body and surface waveforms in complete three-component seismograms with an iterative inversion strategy involving adjoint methods, to create a tomographic model of the European upper mantle. We find that many of the smaller-scale structures such as slabs, upwellings and delaminations that emerge naturally in our model are consistent with existing images. However, we also derive some hitherto unidentified structures. Specifically, we interpret fast seismic-wave speeds beneath the Dinarides Mountains, southern Europe, as a signature of northeastward subduction of the Adria plate; slow seismic-wave speeds beneath the northern part of the Rhine Graben as a reservoir connected to the Eifel hotspot; and fast wave-speed anomalies beneath Scandinavia as a lithospheric drip, where the lithosphere is delaminating and breaking away. Our model sheds new light on the enigmatic palaeotectonic history of Europe.

  15. On a gas electron multiplier based synthetic diagnostic for soft x-ray tomography on WEST with focus on impurity transport studies

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.

    2017-08-01

    The tokamak WEST aims at testing ITER divertor high heat flux component technology in long pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma facing components can pollute the plasma core by radiation cooling in the soft x-ray (SXR) range, which is detrimental for the energy confinement and plasma stability. SXR diagnostics give valuable information to monitor impurities and study their transport. The WEST SXR diagnostic is composed of two new cameras based on the Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction thanks to a synthetic diagnostic recently developed and coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario, the photoionization, electron cloud transport and avalanche in the detection volume using Magboltz, and tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.

  16. Ocean wavenumber estimation from wave-resolving time series imagery

    USGS Publications Warehouse

    Plant, N.G.; Holland, K.T.; Haller, M.C.

    2008-01-01

    We review several approaches that have been used to estimate ocean surface gravity wavenumbers from wave-resolving remotely sensed image sequences. Two fundamentally different approaches that utilize these data exist. A power spectral density approach identifies wavenumbers where image intensity variance is maximized. Alternatively, a cross-spectral correlation approach identifies wavenumbers where intensity coherence is maximized. We develop a solution to the latter approach based on a tomographic analysis that utilizes a nonlinear inverse method. The solution is tolerant to noise and other forms of sampling deficiency and can be applied to arbitrary sampling patterns, as well as to full-frame imagery. The solution includes error predictions that can be used for data retrieval quality control and for evaluating sample designs. A quantitative analysis of the intrinsic resolution of the method indicates that the cross-spectral correlation fitting improves resolution by a factor of about ten times as compared to the power spectral density fitting approach. The resolution analysis also provides a rule of thumb for nearshore bathymetry retrievals: short-scale cross-shore patterns may be resolved if they are about ten times longer than the average water depth over the pattern. This guidance can be applied to sample design to constrain both the sensor array (image resolution) and the analysis array (tomographic resolution). © 2008 IEEE.

  17. Confirmation of a change in the global shear velocity pattern at around 1000 km depth

    NASA Astrophysics Data System (ADS)

    Durand, S.; Debayle, E.; Ricard, Y.; Zaroli, C.; Lambotte, S.

    2017-12-01

    In this study, we confirm the existence of a change in the shear velocity spectrum around 1000 km depth based on a new shear velocity tomographic model of the Earth's mantle, SEISGLOB2. This model is based on Rayleigh surface wave phase velocities, self- and cross-coupling structure coefficients of spheroidal normal modes, and body wave traveltimes which are, for the first time, combined in a tomographic inversion. SEISGLOB2 is developed up to spherical harmonic degree 40 and in 21 radial spline functions. The spectrum of SEISGLOB2 is the flattest (i.e. richest in 'short' wavelengths corresponding to spherical harmonic degrees greater than 10) around 1000 km depth, and this flattening occurs between 670 and 1500 km depth. We also confirm various changes in the continuity of slabs and mantle plumes around 1000 km depth, where we also observe the upper boundary of Large Low Shear Velocity Provinces. The existence of a flatter spectrum, richer in short-wavelength heterogeneities, in a region of the mid-mantle can have great impacts on our understanding of mantle dynamics and should thus be better understood in the future. Although a viscosity increase, a phase change or a compositional change can all contribute to inducing this change of pattern, its precise origin is still very uncertain.

  18. Combined interpretation of radar, hydraulic, and tracer data from a fractured-rock aquifer near Mirror Lake, New Hampshire, USA

    USGS Publications Warehouse

    Day-Lewis, F. D.; Lane, J.W.; Gorelick, S.M.

    2006-01-01

    An integrated interpretation of field experimental cross-hole radar, tracer, and hydraulic data demonstrates the value of combining time-lapse geophysical monitoring with conventional hydrologic measurements for improved characterization of a fractured-rock aquifer. Time-lapse difference-attenuation radar tomography was conducted during saline tracer experiments at the US Geological Survey Fractured Rock Hydrology Research Site near Mirror Lake, Grafton County, New Hampshire, USA. The presence of electrically conductive saline tracer effectively illuminates permeable fractures or pathways for geophysical imaging. The geophysical results guide the construction of three-dimensional numerical models of ground-water flow and solute transport. In an effort to explore alternative explanations for the tracer and tomographic data, a suite of conceptual models involving heterogeneous hydraulic conductivity fields and rate-limited mass transfer are considered. Calibration data include tracer concentrations, the arrival time of peak concentration at the outlet, and steady-state hydraulic head. Results from the coupled inversion procedure suggest that much of the tracer mass migrated outside the three tomographic image planes, and that solute is likely transported by two pathways through the system. This work provides basic and site-specific insights into the control of permeability heterogeneity on ground-water flow and solute transport in fractured rock. © Springer-Verlag 2004.

  19. Real time measurement of transient event emissions of air toxics by tomographic remote sensing in tandem with mobile monitoring

    NASA Astrophysics Data System (ADS)

    Olaguer, Eduardo P.; Stutz, Jochen; Erickson, Matthew H.; Hurlock, Stephen C.; Cheung, Ross; Tsai, Catalina; Colosimo, Santo F.; Festa, James; Wijesinghe, Asanga; Neish, Bradley S.

    2017-02-01

    During the Benzene and other Toxics Exposure (BEE-TEX) study, a remote sensing network based on long path Differential Optical Absorption Spectroscopy (DOAS) was set up in the Manchester neighborhood beside the Ship Channel of Houston, Texas in order to perform Computer Aided Tomography (CAT) scans of hazardous air pollutants. On 18-19 February 2015, the CAT scan network detected large nocturnal plumes of toluene and xylenes most likely associated with railcar loading and unloading operations at Ship Channel petrochemical facilities. The presence of such plumes during railcar operations was confirmed by a mobile laboratory equipped with a Proton Transfer Reaction-Mass Spectrometer (PTR-MS), which measured transient peaks of toluene and C2-benzenes of 50 ppb and 57 ppb respectively around 4 a.m. LST on 19 February 2015. Plume reconstruction and source attribution were performed using the 4D variational data assimilation technique and a 3D micro-scale forward and adjoint air quality model based on both tomographic and PTR-MS data. Inverse model estimates of fugitive emissions associated with railcar transfer emissions ranged from 2.0 to 8.2 kg/hr for toluene and from 2.2 to 3.5 kg/hr for xylenes in the early morning of 19 February 2015.

  20. Tomographic Imaging of the Peru Subduction Zone beneath the Altiplano and Implications for Andean Tectonics

    NASA Astrophysics Data System (ADS)

    Davis, P. M.; Foote, E. J.; Stubailo, I.; Phillips, K. E.; Clayton, R. W.; Skinner, S.; Audin, L.; Tavera, H.; Dominguez Ramirez, L. A.; Lukac, M. L.

    2010-12-01

    This work describes preliminary tomography results from the Peru Seismic Experiment (PERUSE), a 100-station broadband seismic network installed in Peru. The network consists of a linear array of broadband seismic stations, installed in mid-2008, that runs from the Peruvian coast near Mollendo to Lake Titicaca. A second line was added in late 2009 between Lake Titicaca and Cusco. Teleseismic and local earthquake travel time residuals are being combined in the tomographic inversions. The crust under the Andes is found to be 70-80 km thick, decreasing to 30 km near the coast. The morphology of the Moho is consistent with the receiver function images (Phillips et al., 2010; this meeting) and also with gravity. Ray tracing through the heterogeneous structure is used to locate earthquakes. However, the rapid spatial variation in crustal thickness, possibly some of the most rapid in the world, generates shadow zones when using conventional ray tracing for the tomography. We use asymptotic ray theory that approximates effects from finite frequency kernels to model diffracted waves in these regions. The observation of thickened crust suggests that models that attribute the recent acceleration of the Altiplano uplift to crustal delamination are less likely than those that attribute it to crustal compression.

  1. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  2. A gradient based algorithm to solve inverse plane bimodular problems of identification

    NASA Astrophysics Data System (ADS)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

    This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms. For the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  3. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the movement behavior of bird flocks and fish schools. In this paper we introduce and use this method for the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
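
    The record describes standard global-best particle swarm optimization. Below is a minimal sketch of that algorithm, assuming an invented two-parameter forward model in place of the paper's actual fault gravity formula; the swarm settings (inertia and acceleration weights) are common textbook defaults, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" anomaly from a hypothetical two-parameter model (depth, offset).
# forward() is a placeholder shape function, not the paper's fault formula.
x_obs = np.linspace(-10.0, 10.0, 50)
def forward(params, x):
    depth, offset = params
    return depth / (depth**2 + (x - offset)**2)

true_params = np.array([3.0, 1.5])
d_obs = forward(true_params, x_obs) + 0.005 * rng.standard_normal(x_obs.size)

def misfit(params):
    return np.sum((forward(params, x_obs) - d_obs) ** 2)

# Standard global-best PSO.
n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
lo, hi = np.array([0.1, -5.0]), np.array([10.0, 5.0])

pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([misfit(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("recovered parameters:", gbest.round(2))
```

    The same loop carries over to other parameterizations; only forward(), the bounds, and the misfit need to change.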

  4. The Inverse Problem in Jet Acoustics

    NASA Technical Reports Server (NTRS)

    Wooddruff, S. L.; Hussaini, M. Y.

    2001-01-01

    The inverse problem for jet acoustics, or the determination of noise sources from far-field pressure information, is proposed as a tool for understanding the generation of noise by turbulence and for the improved prediction of jet noise. An idealized version of the problem is investigated first to establish the extent to which information about the noise sources may be determined from far-field pressure data and to determine how a well-posed inverse problem may be set up. Then a version of the industry-standard MGB code is used to predict a jet noise source spectrum from experimental noise data.

  5. Inverse kinematics problem in robotics using neural networks

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
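
    A minimal sketch of the idea under stated assumptions: a planar 3-DOF arm with hypothetical link lengths stands in for the paper's 3-DOF spatial manipulator, and scikit-learn's MLPRegressor stands in for the multilayer feedforward network. Joint angles are sampled, end-effector poses are computed with the forward kinematics, and the network is trained to map poses back to joint angles.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
L1, L2, L3 = 1.0, 0.8, 0.5   # hypothetical link lengths, not from the paper

def forward_kinematics(q):
    """Planar 3-DOF arm: end-effector pose (x, y, phi) for joint angles q of shape (N, 3)."""
    a1 = q[:, 0]
    a2 = a1 + q[:, 1]
    a3 = a2 + q[:, 2]
    x = L1 * np.cos(a1) + L2 * np.cos(a2) + L3 * np.cos(a3)
    y = L1 * np.sin(a1) + L2 * np.sin(a2) + L3 * np.sin(a3)
    return np.column_stack([x, y, a3])   # a3 is the end-effector orientation

# Training data: sampled joint angles and the corresponding end-effector poses.
q_train = rng.uniform(0.0, np.pi / 2, size=(20000, 3))
p_train = forward_kinematics(q_train)

# Train the network to map poses back to joint angles (inverse kinematics).
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(p_train, q_train)

# Check: push the predicted angles through the forward model and compare positions.
p_test = forward_kinematics(rng.uniform(0.0, np.pi / 2, size=(500, 3)))
p_back = forward_kinematics(net.predict(p_test))
print("mean end-effector position error:",
      np.linalg.norm(p_back[:, :2] - p_test[:, :2], axis=1).mean())
```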

  6. Bayesian Inference in Satellite Gravity Inversion

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.

    2005-01-01

    To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated using Bayesian inference, with Gaussian probability density functions applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian Basin. The interpretation model is a right vertical cylinder. The parameters of the model are obtained from the minimization problem solved by the Simplex method.
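
    A compact sketch of the Gaussian-Bayes-plus-simplex workflow described above, assuming a made-up one-dimensional forward model in place of the actual vertical-cylinder gravity formula; SciPy's Nelder-Mead option is the downhill simplex method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Placeholder forward model with parameters (amplitude, half-width);
# not the cylinder gravity formula used in the study.
x = np.linspace(-200.0, 200.0, 81)              # profile coordinate (km)
def forward(m):
    return m[0] / (1.0 + (x / m[1]) ** 2)

m_true = np.array([15.0, 60.0])
d_obs = forward(m_true) + 0.5 * rng.standard_normal(x.size)

# Gaussian likelihood and Gaussian prior -> minimize the negative log posterior.
sigma_d = 0.5
m_prior = np.array([10.0, 50.0])
sigma_m = np.array([10.0, 30.0])

def neg_log_posterior(m):
    data_term = np.sum((forward(m) - d_obs) ** 2) / (2.0 * sigma_d ** 2)
    prior_term = np.sum(((m - m_prior) / sigma_m) ** 2) / 2.0
    return data_term + prior_term

# Nelder-Mead is the (downhill) simplex method mentioned in the abstract.
result = minimize(neg_log_posterior, m_prior, method="Nelder-Mead")
print("MAP estimate:", result.x.round(1))
```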

  7. A Vertical Differential Configuration in GPR prospecting

    NASA Astrophysics Data System (ADS)

    Persico, Raffaele; Pochanin, Gennadiy; Varianytsia-Roshchupkina, Liudmyla; Catapano, Ilaria; Gennarelli, Gianluca; Soldovieri, Francesco

    2015-04-01

    The rejection of the direct coupling between the antennas is an issue of interest in several GPR applications, especially when it is important to distinguish the targets of interest from the clutter and the signal reflected from the air-soil interface. In this framework, several hardware and software strategies have been proposed. Among the software strategies, probably the most common one is background removal [1], whereas as a hardware strategy the differential configuration has been introduced in [2-3] and then further studied in [4] with respect to the spatial filtering properties of the relevant mathematical operator. In particular, the studies in [1] and [4] have shown that, in general, all strategies for the rejection of the direct coupling necessarily have some drawback, essentially because it is not possible to erase all and only the undesired contributions while leaving "untouched" the contributions of the targets of interest to the gathered signal. With specific regard to the differential configuration, in [2-3] it consisted of a pair of receiving antennas symmetrically placed around the transmitting one, with the three antennas placed along the same horizontal segment. We might therefore call that configuration a "horizontal differential configuration". Here, we propose a novel differential GPR configuration, where the two receiving antennas are still symmetrically located with respect to the transmitting one, but are stacked one above the other at different heights from the air-soil interface, whereas the transmitting antenna is at the middle height between the two receiving ones (however, it is not at the same abscissa but at a fixed horizontal offset from the receiving antennas). Such a differential configuration has been previously presented in [5-6] and allows good isolation between the antennas, while preserving the possibility to collect backscattered signals from both electrically small objects and interfaces. This configuration can be labeled a vertical differential configuration. At the conference, the reconstruction capabilities of this differential GPR configuration will be discussed by means of an analysis of the problem based on a properly designed microwave tomographic inversion approach. The proposed approach exploits the Born approximation and casts the imaging as the solution of a linear inverse scattering problem. In this way, the problem of local minima is avoided [7] and it is possible to impose some regularization on the problem in an easy way [8-9]. At the conference, a theoretical analysis of the mathematical properties of the scattering operator under the vertical differential configuration will be presented, showing that, with respect to the horizontal differential configuration, the vertical one allows rejection of the direct coupling between the antennas but not of the coupling of the antennas occurring through the air-soil interface. On the other hand, the filtering properties of the operator at hand can be considered, so to speak, less severe in some cases. At the conference, both numerical and experimental results will be shown. References [1] R. Persico, F. Soldovieri, "Effects of the background removal in linear inverse scattering", IEEE Trans. Geosci. Remote Sens., vol. 46, pp. 1104-1114, April 2008. [2] L. Gurel, U. Oguz, "Three-dimensional FDTD modeling of a ground penetrating radar", IEEE Trans. Geosci. Remote Sens., vol. 38, pp. 1513-1521, July 2000. [3] L. Gurel, U. Oguz, "Optimization of the transmitter-receiver separation in the ground penetrating radar", IEEE Trans. Antennas Propag., vol. 51, no. 3, pp. 362-370, March 2003. [4] R. Persico, F. Soldovieri, "A microwave tomography approach for a differential configuration in GPR prospecting", IEEE Trans. Antennas Propag., vol. 54, pp. 3541-3548, 2006. [5] Y.A. Kopylov, S.A. Masalov, G.P. Pochanin, "The way of isolation between transmitting and receiving modules of antenna", Patent 81652 Ukraine: IPC (2006) H01Q 9/00, H01Q 19/10, publ. 25.01.08, Bull. N. 2. [6] G.P. Pochanin, "Some Advances in UWB GPR", in Unexploded Ordnance Detection and Mitigation, NATO Science for Peace and Security Series B: Physics and Biophysics, ed. by Jim Byrnes, Springer, Dordrecht (The Netherlands), 2009, pp. 223-233. [7] R. Persico, F. Soldovieri, R. Pierri, "Convergence properties of a quadratic approach to the inverse scattering problem", Journal of the Optical Society of America A, vol. 19, no. 12, pp. 2424-2428, December 2002. [8] R. Pierri, G. Leone, F. Soldovieri, R. Persico, "Electromagnetic inversion for subsurface applications under the distorted Born approximation", Nuovo Cimento, vol. 24C, no. 2, pp. 245-261, March-April 2001. [9] R. Persico, Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing, Wiley and Sons, 2014 (in print), ISBN 9781118305003.
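
    Since the imaging is cast as a linear inverse scattering problem under the Born approximation, the regularization step can be illustrated on a generic linear example. The sketch below uses a hypothetical smoothing-kernel operator in place of the actual GPR scattering operator and truncated SVD as one simple regularization choice; none of the numbers come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized linear operator (a smoothing kernel) standing in for
# the Born-approximated scattering operator: data = A @ contrast.
n_data, n_model = 60, 120
t = np.linspace(0.0, 1.0, n_data)        # observation coordinates
u = np.linspace(0.0, 1.0, n_model)       # model (contrast) coordinates
A = np.exp(-((t[:, None] - u[None, :]) ** 2) / (2 * 0.03 ** 2))

x_true = np.zeros(n_model)
x_true[50:55] = 1.0                      # a small localized scatterer
d = A @ x_true + 0.01 * rng.standard_normal(n_data)

# Truncated SVD: keep only singular values above a noise-dependent threshold,
# one easy way to impose regularization on the linear problem.
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
keep = sv > 1e-2 * sv[0]
x_tsvd = Vt[keep].T @ ((U[:, keep].T @ d) / sv[keep])
print("cells with largest recovered contrast:", np.argsort(np.abs(x_tsvd))[-5:])
```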

  8. EDITORIAL: Inverse Problems in Engineering

    NASA Astrophysics Data System (ADS)

    West, Robert M.; Lesnic, Daniel

    2007-01-01

    Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

  9. Inverse problem for multispecies ferromagneticlike mean-field models in phase space with many states

    NASA Astrophysics Data System (ADS)

    Fedele, Micaela; Vernia, Cecilia

    2017-10-01

    In this paper we solve the inverse problem for the Curie-Weiss model and its multispecies version when multiple thermodynamic states are present, as in the low temperature phase where the phase space is clustered. The inverse problem consists of reconstructing the model parameters starting from configuration data generated according to the distribution of the model. We demonstrate that, without taking into account the presence of many states, the application of the inversion procedure produces very poor inference results. To overcome this problem, we use a clustering algorithm. When the system has two symmetric states of positive and negative magnetizations, the parameter reconstruction can also be obtained with smaller computational effort simply by flipping the sign of the magnetizations from positive to negative (or vice versa). The parameter reconstruction fails when the system undergoes a phase transition: in that case we give the correct inversion formulas for the Curie-Weiss model and we show that they can be used to measure how close the system gets to being critical.

  10. Research on inverse, hybrid and optimization problems in engineering sciences with emphasis on turbomachine aerodynamics: Review of Chinese advances

    NASA Technical Reports Server (NTRS)

    Liu, Gao-Lian

    1991-01-01

    Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.

  11. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  12. 3-D Seismic Tomographic Inversion to Image Segmentation of the Sumatra Subduction Zone near Simeulue Island

    NASA Astrophysics Data System (ADS)

    Tang, G.; Barton, P. J.; Dean, S. M.; Vermeesch, P. M.; Jusuf, M. D.; Henstock, T.; Djajadihardja, Y.; McNeill, L. C.; Permana, H.

    2009-12-01

    Oceanic subduction along the Sunda trench to the west of Sumatra (Indonesia) shows significant along-strike variations in seismicity. For example, the great M9.3 earthquake in 2004 occurred in the forearc basin north of Simeulue island, rupturing the fault predominantly towards the northwest, while the 2005 Nias earthquake nucleated near the Banyak islands, rupturing towards the southeast (Ammon et al., 2005; Ishii et al., 2005). The gap between these two active areas indicates segmentation of the subduction zone, but the cause of the segmentation remains enigmatic. To investigate the apparent barriers to rupture, two 3-D refraction surveys were conducted in 2008, one, the topic of this study, around Simeulue island and the other to the southeast of Nias island. Seismic data were collected using ocean bottom seismometers and a 12-airgun tuned array with a total capacity of 5420 cu. in., together with high resolution bathymetry data and gravity data. 174,515 traveltimes of first refracted arrivals were picked for the study area, and 128,138 of them were inverted for a model of minimum structure required by the data using the 'FAST' method (Zelt et al., 1998). Resolution tests show that the model is resolvable mostly on a scale of >40 km horizontally. The final velocity model has two distinct features: i. the subducted oceanic plates (represented by 6 km/s contours) seem to be discontinuous along strike; ii. the subduction dip angle appears to be steeper in the southern part of the survey area than in the north. The geometric variation in the subducted plate appears to coincide with the segment boundary approximately across the centre of Simeulue island, and may perhaps be associated with the segmentation of the seismicity noted from the earthquake record. More accurate velocity models will be developed by jointly inverting traveltimes of first and later arrivals as well as normal incidence data using the tomographic inversion program JIVE-3D (Hobro et al., 2003). Some passive earthquake data may also be available for the inversion in this area. These new results will provide insights into along-strike variations in subsurface structure and/or physical properties within the Sumatra subduction zone, which may be related to the observed segmentation.

  13. Clustering and interpretation of local earthquake tomography models in the southern Dead Sea basin

    NASA Astrophysics Data System (ADS)

    Bauer, Klaus; Braeuer, Benjamin

    2016-04-01

    The Dead Sea transform (DST) marks the boundary between the Arabian and African plates. Ongoing left-lateral relative plate motion and strike-slip deformation started in the Early Miocene (20 Ma) and has produced a total offset of 107 km to the present. The Dead Sea basin (DSB), located in the central part of the DST, is one of the largest pull-apart basins in the world. It was formed from a step-over of different fault strands at a major segment boundary of the transform fault system. The basin development was accompanied by deposition of clastics and evaporites and subsequent salt diapirism. Ongoing deformation within the basin and activity of the boundary faults are indicated by increased seismicity. The internal architecture of the DSB and the crustal structure around the DST were the subject of several large scientific projects carried out since 2000. Here we report on a local earthquake tomography study from the southern DSB. In 2006-2008, a dense seismic network consisting of 65 stations was operated for 18 months in the southern part of the DSB and surrounding regions. Altogether 530 well-constrained seismic events with 13,970 P- and 12,760 S-wave arrival times were used for a travel time inversion for Vp and Vp/Vs velocity structure and the seismicity distribution. The workflow included 1D inversion, 2.5D and 3D tomography, and resolution analysis. We demonstrate a possible strategy for integrating several tomographic models such as Vp, Vs and Vp/Vs into a combined lithological interpretation. We analyzed the tomographic models derived by 2.5D inversion using neural network clustering techniques. The method allows us to identify major lithologies by their petrophysical signatures. Remapping the clusters into the subsurface reveals the distribution of basin sediments, prebasin sedimentary rocks, and crystalline basement. The DSB shows an asymmetric structure with thickness variation from 5 km in the west to 13 km in the east. Most importantly, a well-defined body under the eastern part of the basin down to 18 km depth was identified by the algorithm. Considering its geometry and petrophysical signature, this unit is interpreted as prebasin sediments and not as crystalline basement. The seismicity distribution supports our results, with events concentrated along the boundaries of the basin and the deep prebasin sedimentary body.
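
    The study uses neural-network clustering; as a simpler stand-in, the sketch below clusters synthetic (Vp, Vp/Vs) values at grid nodes with k-means to show how petrophysical signatures can be grouped and then remapped into lithological units. The cluster centers and scatter are invented for illustration, not taken from the tomographic models.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for co-located tomographic values at grid nodes: three
# "lithologies" with distinct (Vp, Vp/Vs) signatures plus random scatter.
centers = np.array([[3.5, 1.95],    # basin sediments (slow, high Vp/Vs)
                    [5.2, 1.80],    # prebasin sedimentary rocks
                    [6.2, 1.72]])   # crystalline basement
nodes = np.vstack([c + rng.normal(0.0, [0.15, 0.02], size=(400, 2)) for c in centers])

# Cluster the (Vp, Vp/Vs) feature vectors; the label of each grid node can then
# be remapped into the subsurface to outline lithological units.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(nodes)
for k in range(3):
    print(f"cluster {k}: mean (Vp, Vp/Vs) =", nodes[labels == k].mean(axis=0).round(2))
```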

  14. Shear Wave Velocity Structure beneath the African-Anatolian Subduction Zone in Southwestern Turkey from Inversions of Rayleigh Waves

    NASA Astrophysics Data System (ADS)

    Teoman, U. M.; Sandvol, E. A.; Kahraman, M.; Sahin, S.; Turkelli, N.

    2011-12-01

    The ongoing subduction of the African Plate under western Anatolia results in a highly complex tectonic structure, especially beneath the Isparta Angle (IA) and its surroundings, where the Hellenic and Cyprian slabs with different subduction geometries intersect. The primary objective is to accurately image the lithospheric structure at this convergent plate boundary and further understand the processes responsible for the active deformation. Data were gathered from a temporary seismic network consisting of 10 broadband stations installed in August 2006 with support from the University of Missouri and nine more stations deployed in March 2007 with support from the Bogazici Research Fund (project ID: 07T203). In addition, 21 permanent stations of the Kandilli Observatory and Earthquake Research Institute (KOERI) and two from Süleyman Demirel University (SDU), together with five stations from the IRIS/Geofon network, were also included to extend the station coverage. We used earthquakes in a distance range of 30-120 degrees with body wave magnitudes larger than 5.5. Depending on the signal-to-noise ratio, the azimuthal coverage of events, and inter-station coherence, 81 events provided high-quality data for our analysis. The distribution of events shows good azimuthal coverage, which is important for resolving both lateral heterogeneity and azimuthal anisotropy. We adopted the two-plane-wave inversion technique of Forsyth and Li (2003) to simultaneously solve for the incoming wave field and phase velocity. This relatively simple representation of a more complex wavefield provided quite stable patterns of amplitude variations in many cases. To begin with, an average phase velocity dispersion curve was obtained and used as an input for tomographic inversions. Two-dimensional tomographic maps of isotropic and azimuthally anisotropic phase velocity variations were generated. Phase velocities can only tell us integrated information about the upper mantle. Furthermore, we inverted phase velocities for shear wave velocities (Saito, 1988) in order to obtain direct information at a depth range of 30-300 km that can be interpreted in terms of major tectonic processes such as extension, slab detachment/tearing, STEP faults, volcanism, temperature anomalies, the presence of melt or dissolved water, etc. The resulting tomograms along horizontal and vertical depth sections provided valuable insights into the crustal and upper mantle structure beneath Southwestern Turkey down to almost 300 km.

  15. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problem of inverse kinematics and inverse dynamics of robot manipulators by taking advantage of parallelism and pipelining architectures is discussed. For the computation of the inverse kinematic position solution, a maximum pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm to overcome the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log2 n) has also been developed.

  16. The Sensitivity of Joint Inversions of Seismic and Geodynamic Data to Mantle Viscosity

    NASA Astrophysics Data System (ADS)

    Lu, C.; Grand, S. P.; Forte, A. M.; Simmons, N. A.

    2017-12-01

    Seismic tomography has mapped the existence of large-scale mantle heterogeneities in recent years. However, the origin of these velocity anomalies in terms of chemical and thermal variations is still under debate due to the limitations of tomography. Joint inversion of seismic, geodynamic, and mineral physics observations has proven to be a powerful tool to decouple thermal and chemical effects in the deep mantle (Simmons et al. 2010). The approach initially attempts to find a model that can be explained assuming temperature alone controls lateral variations in mantle properties, and then considers more complicated lateral variations that account for the presence of chemical heterogeneity to further fit the data. The geodynamic observations include Earth's free air gravity field, tectonic plate motions, dynamic topography and the excess ellipticity of the core. The sensitivity of the geodynamic observables to density anomalies, however, depends on an assumed radial mantle viscosity profile. Here we perform joint inversions of seismic and geodynamic data using a number of published viscosity profiles. The goal is to test the sensitivity of joint inversion results to mantle viscosity. For each viscosity model, geodynamic sensitivity kernels are calculated and used to jointly invert the geodynamic observations as well as a new shear wave data set for a model of density and seismic velocity. Compared with previous joint inversion studies, two major improvements have been made in our inversion. First, we use a nonlinear inversion to account for anelastic effects. Applying the very fast simulated annealing (VFSA) method, we let the elastic scaling factor and anelastic parameters from mineral physics measurements vary within their possible ranges and find the best-fitting model assuming thermal variations are the cause of the heterogeneity. We also include an a priori subducting slab model in the starting model. Thus the geodynamic and seismic signatures of short-wavelength subducting slabs are better accounted for in the inversions. Reference: Simmons, N. A., A. M. Forte, L. Boschi, and S. P. Grand (2010), GyPSuM: A joint tomographic model of mantle density and seismic wave speeds, Journal of Geophysical Research: Solid Earth, 115(B12), B12310.
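
    The sketch below illustrates the annealing step with a plain simulated-annealing loop on a toy two-parameter misfit; it is not the VFSA variant used in the study, and the parameter names, bounds, and objective are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: a misfit depending on an elastic scaling factor and a generic
# anelastic parameter, each restricted to a made-up plausible range.
bounds = np.array([[0.1, 0.5],     # elastic scaling factor
                   [0.0, 1.0]])    # generic anelastic correction parameter
def misfit(m):
    return (m[0] - 0.32) ** 2 + 0.5 * (m[1] - 0.6) ** 2   # synthetic minimum

# Basic simulated annealing (a simplified stand-in for VFSA).
m = bounds.mean(axis=1)
best_m, best_f = m.copy(), misfit(m)
T0, n_iter = 1.0, 5000
for k in range(1, n_iter + 1):
    T = T0 * np.exp(-5.0 * k / n_iter)                    # cooling schedule
    trial = np.clip(m + T * rng.normal(0.0, 0.1, size=2), bounds[:, 0], bounds[:, 1])
    df = misfit(trial) - misfit(m)
    if df < 0 or rng.random() < np.exp(-df / max(T, 1e-12)):
        m = trial                                          # accept the move
    if misfit(m) < best_f:
        best_m, best_f = m.copy(), misfit(m)

print("best-fitting parameters:", best_m.round(3))
```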

  17. A Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment

    PubMed Central

    Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrell J; Wang, Dafang F; Steffen, Michael; Brooks, Dana H; van Dam, Peter M; Macleod, Rob S

    2012-01-01

    Computational modeling in electrocardiography often requires the examination of cardiac forward and inverse problems in order to non-invasively analyze physiological events that are otherwise inaccessible or unethical to explore. The study of these models can be performed in the open-source SCIRun problem solving environment developed at the Center for Integrative Biomedical Computing (CIBC). A new toolkit within SCIRun provides researchers with essential frameworks for constructing and manipulating electrocardiographic forward and inverse models in a highly efficient and interactive way. The toolkit contains sample networks, tutorials and documentation which direct users through SCIRun-specific approaches in the assembly and execution of these specific problems. PMID:22254301

  18. Accretionary nature of the crust of Central and East Java (Indonesia) revealed by local earthquake travel-time tomography

    NASA Astrophysics Data System (ADS)

    Haberland, Christian; Bohm, Mirjam; Asch, Günter

    2014-12-01

    Reassessment of travel time data from an exceptionally dense, amphibious, temporary seismic network on- and offshore Central and Eastern Java (MERAMEX) confirms the accretionary nature of the crust in this segment of the Sunda subduction zone (109.5°-111.5°E). Traveltime data of P- and S-waves of 244 local earthquakes were tomographically inverted, following a staggered inversion approach. The resolution of the inversion was inspected by utilizing synthetic recovery tests and analyzing the model resolution matrix. The resulting images show a highly asymmetrical crustal structure. The images can be interpreted to show a continental fragment of presumably Gondwana origin in the coastal area (east of 110°E), which has been accreted to the Sundaland margin. An interlaced anomaly of high seismic velocities indicating mafic material can be interpreted to be the mantle part of the continental fragment, or part of obducted oceanic lithosphere. Lower than average crustal velocities of the Java crust are likely to reflect ophiolitic and metamorphic rocks of a subduction melange.

  19. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

    Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of common image-point gathers in the offset domain, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime computed by Kirchhoff prestack depth migration (KPSDM), since the dynamic programming method for computing traveltimes in TI media does not suffer from the limitations of shadow zones and traveltime interpolation. We also present a specific implementation strategy for ADCIG extraction via KPSDM. Three major steps are included in the presented strategy: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples are included to demonstrate the effectiveness of our approach, which shows its potential for subsequent tomographic velocity inversion and AVA analysis.
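
    Steps (2) and (3) of the strategy can be illustrated on a toy traveltime table: take the numerical gradient of T to obtain the slowness vector, then convert it to a propagation angle for binning into angle-domain gathers. The sketch below assumes a constant-velocity isotropic medium so the table can be built analytically; in the actual workflow the table would come from the dynamic programming traveltime solver in TI media.

```python
import numpy as np

# Hypothetical traveltime table T(z, x) on a regular grid for one source, built
# analytically for a constant-velocity medium (v = 2 km/s), source at the origin.
dz = dx = 0.01                       # grid spacing (km)
z = np.arange(0.0, 2.0, dz)
x = np.arange(-2.0, 2.0, dx)
Z, X = np.meshgrid(z, x, indexing="ij")
v = 2.0
T = np.sqrt(Z**2 + X**2) / v

# Step (2): slowness vector p = grad(T) taken numerically from the table.
dTdz, dTdx = np.gradient(T, dz, dx)

# Step (3): propagation (illumination) direction as an angle from the vertical,
# the ingredient needed to bin migrated images into angle-domain gathers.
angle = np.degrees(np.arctan2(dTdx, dTdz))
print("angle at z = 1 km, x = 1 km:", round(angle[100, 300], 1), "deg")
```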

  20. Characterization of Material Properties at Brady Hot Springs, Nevada by Inverse Modeling of Data from Seismology, Geodesy, and Hydrology

    NASA Astrophysics Data System (ADS)

    Wang, H. F.; Feigl, K. L.; Patterson, J.; Parker, L.; Reinisch, E. C.; Zeng, X.; Cardiff, M. A.; Fratta, D.; Lord, N. E.; Thurber, C. H.; Robertson, M.; Miller, D. E.; Akerley, J.; Kreemer, C.; Morency, C.; Davatzes, N. C.

    2017-12-01

    The PoroTomo project consists of poroelastic tomography by adjoint inverse modeling of data from seismology, geodesy, and hydrology. The goal of the PoroTomo project is to assess an integrated technology for characterizing and monitoring changes in the rock mechanical properties of an enhanced geothermal system in 3 dimensions with a spatial resolution better than 50 meters. In March 2016, we deployed the integrated technology in a 1500-by-500-by-400-meter volume at Brady. The 15-day deployment included 4 distinct time intervals with intentional manipulations of the pumping rates in injection and production wells. The data set includes: active seismic sources; fiber-optic cables for Distributed Acoustic Sensing (DAS) and Distributed Temperature Sensing (DTS) arranged vertically in a borehole to 400 m depth and horizontally in a trench 8700 m in length and 0.5 m in depth; 244 seismometers on the surface; 3 pressure sensors in observation wells; continuous geodetic measurements at 3 GPS stations; and 7 InSAR acquisitions. To account for the mechanical behavior of both the rock and the fluids, we are developing numerical models for the 3-D distribution of the material properties. We present an overview of results, including: tomographic images of P-wave velocity estimated from seismic body waves [Thurber et al., this meeting]; tomographic images of phase velocity estimated from ambient noise correlation functions [Zeng et al., this meeting]; models of volumetric contraction to account for subsidence observed by InSAR and GPS [Reinisch et al., this meeting]; and interpretation of pressure and temperature data [Patterson et al., this meeting]. Taken together, these results support a conceptual model of highly permeable conduits along faults channeling fluids from shallow aquifers to the deep geothermal reservoir tapped by the production wells. The PoroTomo project is funded by a grant from the U.S. Department of Energy.

  1. Constraint on the magma sources in Luzon Island Philippines by using P and S wave local seismic tomography

    NASA Astrophysics Data System (ADS)

    Nghia, N. C.; Huang, B. S.; Chen, P. F.

    2017-12-01

    The subduction of the South China Sea plate beneath Luzon Island has caused a complex setting of seismicity and magmatism because of the proposed ridge subduction and slab tearing. To constrain the validity of slab tearing induced by ridge subduction and its effects, we performed a P- and S-wave seismic travel time tomographic inversion using the LOTOS code. The dataset was retrieved from the International Seismological Centre and covers 1960 to 2008. A 1D velocity model, inverted using VELEST with a Vp/Vs ratio of 1.74, is used as the starting model for the tomographic inversion. A total of 20905 P readings and 8126 S readings from 2355 earthquakes were used to invert for the velocity structure beneath Luzon Island. The horizontal tomographic results show low-velocity, high-Vp/Vs regions at shallow depths of less than 50 km, which are interpreted as the magma chambers of the volcanic system in Luzon. In the suspected region of slab tearing at 16°N to 18°N, two sources of magma have been identified: slab-window magma at shallow depth (< 50 km) and magma induced by partial melting of the mantle wedge at greater depth. This slab melting may have made the magma composition more silicic, with higher viscosity, which could explain the volcanic gap in this region. In the region of 14°N to 15°N, large magma chambers under active volcanoes are identified, which explains the active volcanism there. In contrast to the region of slab tearing, here the magma chambers are fed only by magma from partial melting of the mantle wedge at depths greater than 100 km. These observations are consistent with previous work on the slab tearing of the South China Sea plate and the volcanic activity on Luzon Island.

  2. Combined experimental-numerical identification of radiative transfer coefficients in white LED phosphor layers

    NASA Astrophysics Data System (ADS)

    Akolkar, A.; Petrasch, J.; Finck, S.; Rahmatian, N.

    2018-02-01

    An inverse analysis of the phosphor layer of a commercially available, conformally coated, white LED is done based on tomographic and spectrometric measurements. The aim is to determine the radiative transfer coefficients of the phosphor layer from the measurements of the finished device, with minimal assumptions regarding the composition of the phosphor layer. These results can be used for subsequent opto-thermal modelling and optimization of the device. For this purpose, multiple integrating sphere and gonioradiometric measurements are done to obtain statistical bounds on spectral radiometric values and angular color distributions for ten LEDs belonging to the same color bin of the product series. Tomographic measurements of the LED package are used to generate a tetrahedral grid of the 3D LED geometry. A radiative transfer model using Monte Carlo Ray Tracing in the tetrahedral grid is developed. Using a two-wavelength model consisting of a blue emission wavelength and a yellow, Stokes-shifted re-emission wavelength, the angular color distribution of the LED is simulated over wide ranges of the absorption and scattering coefficients of the phosphor layer, for the blue and yellow wavelengths. Using a two-step, iterative space search, combinations of the radiative transfer coefficients are obtained for which the simulations are consistent with the integrating sphere and gonioradiometric measurements. The results show an inverse relationship between the scattering and absorption coefficients of the phosphor layer for blue light. Scattering of yellow light acts as a distribution and loss mechanism for yellow light and affects the shape of the angular color distribution significantly, especially at larger viewing angles. The spread of feasible coefficients indicates that measured optical behavior of the LEDs may be reproduced using a range of combinations of radiative coefficients. Given that coefficients predicted by the Mie theory usually must be corrected in order to reproduce experimental results, these results indicate that a more complete model of radiative transfer in phosphor layers is required.

  3. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
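
    A minimal sketch of the Pareto idea under stated assumptions: a toy linear problem with data misfit and model norm as the two objectives, and a brute-force non-dominated sort over random candidate models in place of the evolutionary PMOGO machinery used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem d = G m with two competing objectives:
# data misfit ||G m - d||^2 and model norm ||m||^2 (the regularization term).
G = rng.standard_normal((20, 10))
m_true = rng.standard_normal(10)
d = G @ m_true + 0.1 * rng.standard_normal(20)

# Evaluate both objectives for a population of random candidate models.
pop = rng.standard_normal((2000, 10))
f1 = np.sum((pop @ G.T - d) ** 2, axis=1)    # data misfit
f2 = np.sum(pop ** 2, axis=1)                # model norm
F = np.column_stack([f1, f2])

# A model is dominated if some other model is no worse in both objectives
# and strictly better in at least one; keep only the non-dominated set.
pareto = [i for i in range(len(F))
          if not np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))]
print("Pareto-optimal models:", len(pareto), "of", len(F))
```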

  4. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  5. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems like global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand - time reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  6. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
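
    In its simplest form, the regularized block Kaczmarz idea reduces to classical row-action Kaczmarz sweeps with under-relaxation. The sketch below applies that simplest form to a synthetic linear system standing in for the discretized Fredholm integral equations; the matrix and right-hand side are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear system A x = b standing in for the discretized integral equations.
m, n = 80, 60
A = rng.standard_normal((m, n))
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
b = A @ x_true + 0.05 * rng.standard_normal(m)

# Classical row-action Kaczmarz sweeps; under-relaxation acts as a mild regularizer.
x = np.zeros(n)
relaxation = 0.5
for sweep in range(50):
    for i in range(m):
        a = A[i]
        x += relaxation * (b[i] - a @ x) / (a @ a) * a

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```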

  7. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are presented. Long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of the tsunami and/or earthquake and includes the possibility to solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve this problem and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to support a continuous cycle of direct and inverse problem solving: solving the direct problem, visualizing and comparing results with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of an operational emergency workstation tool that could be used by duty personnel in real time.

  8. PREFACE: First International Congress of the International Association of Inverse Problems (IPIA): Applied Inverse Problems 2007: Theoretical and Computational Aspects

    NASA Astrophysics Data System (ADS)

    Uhlmann, Gunther

    2008-07-01

    This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA), which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology, Finland), Masahiro Yamamoto (University of Tokyo, Japan), Gunther Uhlmann (University of Washington) and Jun Zou (Chinese University of Hong Kong). IPIA is a recently formed organization that intends to promote the field of inverse problems at all levels. See http://www.inverse-problems.net/. IPIA awarded the first Calderón prize at the opening of the conference to Matti Lassas (see first article in the Proceedings). There was also a general meeting of IPIA during the workshop. This was probably the largest conference ever on IP with 350 registered participants. The program consisted of 18 invited speakers and the Calderón Prize Lecture given by Matti Lassas. Another integral part of the program was the more than 60 mini-symposia that covered a broad spectrum of the theory and applications of inverse problems, focusing on recent developments in medical imaging, seismic exploration, remote sensing, industrial applications, numerical and regularization methods in inverse problems. Another important related topic was image processing, in particular the advances which have allowed for significant enhancement of widely used imaging techniques. For more details on the program see the web page: http://www.pims.math.ca/science/2007/07aip. These proceedings reflect the broad spectrum of topics covered in AIP 2007. The conference and these proceedings would not have happened without the contributions of many people.
I thank all my fellow organizers, the invited speakers, the speakers and organizers of mini-symposia for making this an exciting and vibrant event. I also thank PIMS, NSF and MITACS for their generous financial support. I take this opportunity to thank the PIMS staff, particularly Ken Leung, for making the local arrangements. Also thanks are due to Stephen McDowall for his help in preparing the schedule of the conference and Xiaosheng Li for the help in preparing these proceedings. I also would like to thank the contributors of this volume and the referees. Finally, many thanks are due to Graham Douglas and Elaine Longden-Chapman for suggesting publication in Journal of Physics: Conference Series.

  9. Modular Approaches to Earth Science Scientific Computing: 3D Electromagnetic Induction Modeling as an Example

    NASA Astrophysics Data System (ADS)

    Tandon, K.; Egbert, G.; Siripunvaraporn, W.

    2003-12-01

    We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme, and also to reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general, for inversion of MT data, one fixes boundary conditions at the edge of the model domain and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in the specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding a feature such as this is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.

  10. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-04-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.

  11. 3D Structure of Iran and Surrounding Areas From The Simultaneous Inversion of Complementary Geophysical Observations

    NASA Astrophysics Data System (ADS)

    Ammon, C. J.; Maceira, M.; Cleveland, M.

    2010-12-01

    We present a three-dimensional seismic-structure model of the Arabian-Eurasian collision zone obtained via simultaneous, joint inversion of surface-wave dispersion measurements, teleseismic P-wave receiver functions, and gravity observations. We use a simple, approximate relationship between density and seismic velocities so that the three data sets may be combined in a single inversion. The sensitivity of the different data sets are well known: surface waves provide information on the smooth variations in elastic properties, receiver functions provide information on abrupt velocity contrasts, and gravity measurements provide information on broad-wavenumber shallow density variations and long-wavenumber components of deeper density structures. The combination of the data provides improved resolution of shallow-structure variations, which in turn help produce the smooth features at depth with less contamination from the strong heterogeneity often observed in the upper crust. We also explore geologically based smoothness constraints to help resolve sharp features in the underlying shallow 3D structure. Our focus is on the region surrounding Iran from east Turkey and Iraq in the west, to Pakistan and Afghanistan in the east. We use Bouguer gravity anomalies derived from the global gravity model extracted from the GRACE satellite mission. Surface-wave dispersion velocities in the period range between 7 and 150 s are taken from previously published tomographic maps for the region. Preliminary results show expected strong variations in the Caspian region as well as the deep sediment regions of the Persian Gulf. Regions constrained with receiver-function information generally show sharper crust-mantle boundary structure than that obtained by inversion of the surface waves alone (with thin layers and smoothing constraints). Final results of the simultaneous inversion will help us to better understand one of the most prominent examples of continental collision. Such models also provide an important starting model for time-consuming and fully 3D inversions.

  12. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  13. A Riemann-Hilbert approach to the inverse problem for the Stark operator on the line

    NASA Astrophysics Data System (ADS)

    Its, A.; Sukhanov, V.

    2016-05-01

    The paper is concerned with the inverse scattering problem for the Stark operator on the line with a potential from the Schwartz class. In our study of the inverse problem, we use the Riemann-Hilbert formalism. This allows us to overcome the principal technical difficulties which arise in the more traditional approaches based on the Gel'fand-Levitan-Marchenko equations, and indeed solve the problem. We also produce a complete description of the relevant scattering data (which have not been obtained in the previous works on the Stark operator) and establish the bijection between the Schwartz class potentials and the scattering data.

  14. Development of a Preventive HIV Vaccine Requires Solving Inverse Problems Which Is Unattainable by Rational Vaccine Design

    PubMed Central

    Van Regenmortel, Marc H. V.

    2018-01-01

    Hypotheses and theories are essential constituents of the scientific method. Many vaccinologists are unaware that the problems they try to solve are mostly inverse problems that consist in imagining what could bring about a desired outcome. An inverse problem starts with the result and tries to guess what are the multiple causes that could have produced it. Compared to the usual direct scientific problems that start with the causes and derive or calculate the results using deductive reasoning and known mechanisms, solving an inverse problem uses a less reliable inductive approach and requires the development of a theoretical model that may have different solutions or none at all. Unsuccessful attempts to solve inverse problems in HIV vaccinology by reductionist methods, systems biology and structure-based reverse vaccinology are described. The popular strategy known as rational vaccine design is unable to solve the multiple inverse problems faced by HIV vaccine developers. The term "rational" is derived from "rational drug design" which uses the 3D structure of a biological target for designing molecules that will selectively bind to it and inhibit its biological activity. In vaccine design, however, the word "rational" simply means that the investigator is concentrating on parts of the system for which molecular information is available. The economist and Nobel laureate Herbert Simon introduced the concept of "bounded rationality" to explain why the complexity of the world economic system makes it impossible, for instance, to predict an event like the financial crash of 2007-2008. Humans always operate under unavoidable constraints such as insufficient information, a limited capacity to process huge amounts of data and a limited amount of time available to reach a decision. Such limitations always prevent us from achieving the complete understanding and optimization of a complex system that would be needed to achieve a truly rational design process. This is why the complexity of the human immune system prevents us from rationally designing an HIV vaccine by solving inverse problems. PMID:29387066

  15. The importance of coherence in inverse problems in optics

    NASA Astrophysics Data System (ADS)

    Ferwerda, H. A.; Baltes, H. P.; Glass, A. S.; Steinle, B.

    1981-12-01

    Current inverse problems of statistical optics are presented with a guide to relevant literature. The inverse problems are categorized into four groups, and the Van Cittert-Zernike theorem and its generalization are discussed. The retrieval of structural information from the far-zone degree of coherence and the time-averaged intensity distribution of radiation scattered by a superposition of random and periodic scatterers are also discussed. In addition, formulas for the calculation of far-zone properties are derived within the framework of scalar optics, and results are applied to two examples.

  16. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
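
    The core of the Iterative Boltzmann Inversion scheme mentioned above is a pointwise potential update driven by the logarithmic mismatch between the current and target radial distribution functions. The following minimal Python sketch illustrates that update rule only; the grid, RDFs and function names are illustrative placeholders, not code from the cited study.

        import numpy as np

        def ibi_update(V, g_current, g_target, kBT=1.0, eps=1e-12):
            """One Iterative Boltzmann Inversion step:
            V_{k+1}(r) = V_k(r) + kBT * ln(g_k(r) / g_target(r)).
            Bins where either RDF vanishes are left unchanged."""
            V_new = V.copy()
            mask = (g_current > eps) & (g_target > eps)
            V_new[mask] += kBT * np.log(g_current[mask] / g_target[mask])
            return V_new

        # toy usage: a flat initial potential corrected toward a target RDF
        r = np.linspace(0.1, 3.0, 300)
        g_target = 1.0 + 0.3 * np.exp(-(r - 1.0) ** 2 / 0.05)
        g_current = np.ones_like(r)      # RDF measured in the current CG simulation
        V = np.zeros_like(r)             # initial potential guess
        V = ibi_update(V, g_current, g_target)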

  17. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
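
    The abstract describes a Tikhonov-type inversion with a sparsity constraint. A standard building block for such sparsity-promoting inversions is iterative soft-thresholding (ISTA), sketched below on a synthetic linear problem; this generic sketch is not the authors' algorithm and omits the dictionary representation and parameter bounds they use.

        import numpy as np

        def soft_threshold(x, tau):
            """Proximal operator of the l1 norm (promotes sparse solutions)."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def ista(H, y, lam, n_iter=500):
            """Minimize 0.5*||H x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
            L = np.linalg.norm(H, 2) ** 2      # Lipschitz constant of the data-fit gradient
            x = np.zeros(H.shape[1])
            for _ in range(n_iter):
                grad = H.T @ (H @ x - y)
                x = soft_threshold(x - grad / L, lam / L)
            return x

        # toy example: a few point sources recovered from noisy linear measurements
        rng = np.random.default_rng(0)
        H = rng.normal(size=(60, 200))
        x_true = np.zeros(200)
        x_true[[20, 75, 160]] = [3.0, -2.0, 4.0]
        y = H @ x_true + 0.05 * rng.normal(size=60)
        x_est = ista(H, y, lam=0.5)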

  18. Variable-permittivity linear inverse problem for the H_z-polarized case

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.; Chew, W. C.

    1993-01-01

    The H_z-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H_z-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
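
    The reconstruction step described above reduces to a regularized least squares problem assembled from measurements at several frequencies. The sketch below shows that generic pattern (stacked per-frequency systems solved with Tikhonov-regularized normal equations) on synthetic matrices; it does not reproduce the specific Fourier-domain operators of the paper.

        import numpy as np

        def regularized_lstsq(A_list, b_list, alpha=1e-2):
            """Stack per-frequency linear systems A_f x = b_f and solve the
            Tikhonov-regularized normal equations (A^T A + alpha I) x = A^T b."""
            A = np.vstack(A_list)
            b = np.concatenate(b_list)
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

        # toy usage with two synthetic "frequencies"; x holds the unknown spectra
        rng = np.random.default_rng(1)
        x_true = rng.normal(size=40)
        A_list = [rng.normal(size=(30, 40)) for _ in range(2)]
        b_list = [A @ x_true + 0.01 * rng.normal(size=30) for A in A_list]
        x_est = regularized_lstsq(A_list, b_list)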

  19. On computational experiments in some inverse problems of heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2016-11-01

    The results of mathematical modeling of effective heat and mass transfer on the permeable surfaces of hypersonic aircraft are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of compressible gas are taken into account. Algorithms of control restoration are suggested for the interpolation and approximation statements of heat and mass transfer inverse problems. The differences between the methods applied to search for solutions of these statements are discussed. Both algorithms are implemented as programs, and many computational experiments were carried out with them. The boundary layer parameters obtained from solving the direct problems by means of A. A. Dorodnicyn's generalized integral relations method have been used to obtain the inverse problem solutions. Two types of blowing-law restoration for the inverse problem in the interpolation statement are presented as examples. The influence of the temperature factor on the blowing restoration is investigated. The different sensitivity of the controllable parameters (the local heat flow and the local tangent friction) to a step (discrete) change of the control (the blowing) and to the switching point position is studied.

  20. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
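
    The I2DUPEN details are not reproduced here, but the two ingredients named in the abstract, SVD filtering followed by Tikhonov-regularized least squares, can be illustrated on a small one-dimensional first-kind Fredholm problem; the kernel, data and parameter values below are synthetic assumptions.

        import numpy as np

        def svd_filtered_tikhonov(K, s, k_trunc, alpha):
            """Project the data onto the leading k_trunc singular vectors of the
            kernel K (SVD filter), then apply Tikhonov filter factors."""
            U, sig, Vt = np.linalg.svd(K, full_matrices=False)
            Uk, sk, Vk = U[:, :k_trunc], sig[:k_trunc], Vt[:k_trunc, :]
            coeff = (sk / (sk ** 2 + alpha)) * (Uk.T @ s)   # damped inverse singular values
            return Vk.T @ coeff

        # toy smoothing kernel (discretized first-kind Fredholm operator)
        t = np.linspace(0.0, 1.0, 120)
        K = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)
        f_true = np.exp(-((t - 0.4) / 0.05) ** 2)
        s = K @ f_true + 1e-3 * np.random.default_rng(2).normal(size=t.size)
        f_est = svd_filtered_tikhonov(K, s, k_trunc=25, alpha=1e-3)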

  1. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, in this paper a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets is put forward to convert the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for minimizing the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.

  2. Termination Proofs for String Rewriting Systems via Inverse Match-Bounds

    NASA Technical Reports Server (NTRS)

    Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2004-01-01

    Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound to these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverses (left- and right-hand sides exchanged) are match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, the termination and the uniform termination problems of inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.

  3. The inverse Wiener polarity index problem for chemical trees.

    PubMed

    Du, Zhibin; Ali, Akbar

    2018-01-01

    The Wiener polarity number (nowadays known as the Wiener polarity index and usually denoted by Wp) was devised by the chemist Harold Wiener for predicting the boiling points of alkanes. The index Wp of chemical trees (chemical graphs representing alkanes) is defined as the number of unordered pairs of vertices (carbon atoms) at distance 3. Inverse problems based on several well-known topological indices have already been addressed in the literature. The solution of such inverse problems may be helpful in speeding up the discovery of lead compounds having the desired properties. This paper is devoted to solving a stronger version of the inverse problem based on the Wiener polarity index for chemical trees. More precisely, it is proved that for every integer t ∈ {n - 3, n - 2,…,3n - 16, 3n - 15}, n ≥ 6, there exists an n-vertex chemical tree T such that Wp(T) = t.
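
    The definition of Wp is simple enough to state directly in code. The sketch below computes the Wiener polarity index of a tree given as an adjacency list by counting unordered vertex pairs at distance exactly 3; it only illustrates the definition and is unrelated to the paper's existence proof.

        from collections import deque

        def wiener_polarity(adj):
            """Wiener polarity index Wp: number of unordered vertex pairs at
            distance 3 in an undirected tree given as an adjacency list."""
            count = 0
            for src in adj:
                dist = {src: 0}
                queue = deque([src])
                while queue:
                    u = queue.popleft()
                    if dist[u] == 3:
                        continue                  # distances beyond 3 are not needed
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            queue.append(v)
                count += sum(1 for d in dist.values() if d == 3)
            return count // 2                     # each pair was counted twice

        # carbon skeleton of a small branched alkane (2,3-dimethylbutane-like tree)
        tree = {0: [1], 1: [0, 2, 5], 2: [1, 3, 4], 3: [2], 4: [2], 5: [1]}
        print(wiener_polarity(tree))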

  4. Assimilating data into open ocean tidal models

    NASA Astrophysics Data System (ADS)

    Kivman, Gennady A.

    Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.

  5. Upper mantle structure beneath southern African cratons from seismic finite-frequency P- and S-body wave tomography

    NASA Astrophysics Data System (ADS)

    Youssof, M.; Thybo, H.; Artemieva, I. M.; Levander, A.

    2015-06-01

    We present a 3D high-resolution seismic model of the southern African cratonic region from teleseismic tomographic inversion of the P- and S-body wave dataset recorded by the Southern African Seismic Experiment (SASE). Utilizing 3D sensitivity kernels, we invert traveltime residuals of teleseismic body waves to calculate velocity anomalies in the upper mantle down to 700 km depth with respect to the ak135 reference model. Various resolution tests allow evaluation of the extent of smearing effects and help define the optimum inversion parameters (i.e., damping and smoothness) for regularizing the inversion calculations. The fast lithospheric keels of the Kaapvaal and Zimbabwe cratons reach depths of 300-350 km and 200-250 km, respectively. The paleo-orogenic Limpopo Belt is represented by negative velocity perturbations down to a depth of ˜ 250 km, implying the presence of chemically fertile material with anomalously low wave speeds. The Bushveld Complex has low velocity down to ˜ 150 km, which is attributed to chemical modification of the cratonic mantle. In the present model, the finite-frequency sensitivity kernels allow us to resolve relatively small-scale anomalies, such as the Colesberg Magnetic Lineament in the suture zone between the eastern and western blocks of the Kaapvaal Craton, and a small northern block of the Kaapvaal Craton, located between the Limpopo Belt and the Bushveld Complex.

  6. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Liu, Feng (Inventor); Lax, Melvin (Inventor); Das, Bidyut B. (Inventor)

    1999-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into an inverse reconstruction algorithm (the patent's equation EQU1) to form an image of the medium, wherein W is a matrix relating output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise to fluctuations in the absorption (or diffusion) X_j that we are trying to determine, Λ_ij = λ_j δ_ij with λ_j = ⟨noise⟩/⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion is used to obtain images of three-dimensional hidden objects in turbid scattering media.
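
    The patent's exact update formula (EQU1) is not reproduced in the record above. The sketch below is only an assumed iterated-Tikhonov-style scheme that is consistent with the quantities the text describes (weight matrix W, diagonal regularization matrix Λ built from noise-to-fluctuation ratios, data Y, and iterates X^k); all names and values are placeholders.

        import numpy as np

        def regularized_iteration(W, Y, lam_diag, n_iter=50):
            """Assumed iterated-Tikhonov-type update consistent with the description:
            X^{k+1} = X^k + (W^T W + Lambda)^{-1} W^T (Y - W X^k),
            where Lambda = diag(lambda_j). The iteration count also acts as a
            regularization parameter."""
            Lam = np.diag(lam_diag)
            X = np.zeros(W.shape[1])
            H = np.linalg.inv(W.T @ W + Lam)   # fine for small problems; use a solver otherwise
            for _ in range(n_iter):
                X = X + H @ (W.T @ (Y - W @ X))
            return X

        # toy usage with a synthetic weight matrix and noisy detector data
        rng = np.random.default_rng(3)
        W = rng.normal(size=(80, 40))
        X_true = rng.normal(size=40)
        Y = W @ X_true + 0.01 * rng.normal(size=80)
        lam = np.full(40, 0.1)                 # stand-in for lambda_j = noise / <dX_j dX_j>
        X_rec = regularized_iteration(W, Y, lam)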

  7. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Gayen, Swapan K. (Inventor)

    2000-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into the patent's inverse reconstruction algorithm to form an image of the medium, wherein W is a matrix relating output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise to fluctuations in the absorption (or diffusion) X_j that we are trying to determine, Λ_ij = λ_j δ_ij with λ_j = ⟨noise⟩/⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion is used to obtain images of three-dimensional hidden objects in turbid scattering media.

  8. Study of structural change in volcanic and geothermal areas using seismic tomography

    NASA Astrophysics Data System (ADS)

    Mhana, Najwa; Foulger, Gillian; Julian, Bruce; peirce, Christine

    2014-05-01

    Long Valley caldera is a large silicic volcano. It has been in a state of volcanic and seismic unrest since 1978. Further escalation of this unrest could pose a threat to the 5,000 residents and the tens of thousands of tourists who visit the area. We have studied the crustal structure beneath a 28 km x 16 km area using seismic tomography. We performed tomographic inversions for the years 2009 and 2010 with a view to differencing them with the 1997 result, to look for structural changes with time and to assess whether repeat tomography is capable of determining changes in structure in volcanic and geothermal reservoirs. If so, it might provide a useful tool for monitoring physical changes in volcanoes and exploited geothermal reservoirs. Up to 600 earthquakes, selected from the best-quality events, were used for the inversion. The inversions were performed using the program simulps12 [Thurber, 1983]. Our initial results show that changes in both Vp and Vs are consistent with the migration of CO2 into the upper 2 km or so. Our ongoing work will also invert pairs of years simultaneously using a new program, tomo4d [Julian and Foulger, 2010]. This program inverts for the differences in structure between two epochs, so it can provide a more reliable measure of structural change than simply differencing the results of individual years.

  9. Density-to-Potential Inversions to Guide Development of Exchange-Correlation Approximations at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew

    The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94.

  10. Imaging Performance of Quantitative Transmission Ultrasound

    PubMed Central

    Lenox, Mark W.; Wiskin, James; Lewis, Matthew A.; Darrouzet, Stephen; Borup, David; Hsieh, Scott

    2015-01-01

    Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. PMID:26604918

  11. FT3D: three-dimensional Fourier analysis on small Unix workstations for electron microscopy and tomographic studies.

    PubMed

    Lanzavecchia, S; Bellon, P L; Tosoni, L

    1993-12-01

    FT3D is a self-contained package of tools for three-dimensional Fourier analysis, written in the C language for Unix workstations. It can evaluate direct transforms of three-dimensional real functions, inverse transforms, auto- and cross-correlations and spectra. The library has been developed to support three-dimensional reconstructions of biological structures from projections obtained in the electron microscope. This paper discusses some features of the library, which has been implemented in such a way as to profit from the resources of modern workstations. A table of elapsed times for jobs of different dimensions with different RAM buffers is reported for the particular hardware used in the authors' laboratory.

  12. Living specimen tomography by digital holographic microscopy: morphometry of testate amoeba

    NASA Astrophysics Data System (ADS)

    Charrière, Florian; Pavillon, Nicolas; Colomb, Tristan; Depeursinge, Christian; Heger, Thierry J.; Mitchell, Edward A. D.; Marquet, Pierre; Rappaz, Benjamin

    2006-08-01

    This paper presents an optical diffraction tomography technique based on digital holographic microscopy. Quantitative 2-dimensional phase images are acquired at regularly-spaced angular positions of the specimen covering a total angle of π, allowing 3-dimensional quantitative refractive index distributions to be built by an inverse Radon transform. A 20x magnification allows a resolution better than 3 μm in all three dimensions, with accuracy better than 0.01 for the refractive index measurements. This technique is applied, for the first time to our knowledge, to a living specimen (testate amoeba, Protista). Morphometric measurements are extracted from the tomographic reconstructions, showing that the commonly used method for testate amoeba biovolume evaluation leads to systematic underestimation by about 50%.

  13. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem and becomes more complex in the case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
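
    For reference, the four mutation operators compared in the study act on a permutation-encoded task sequence as sketched below; the implementations are generic illustrations, not the authors' code.

        import random

        def swap(seq):
            """Exchange two randomly chosen positions."""
            i, j = random.sample(range(len(seq)), 2)
            seq = seq[:]
            seq[i], seq[j] = seq[j], seq[i]
            return seq

        def insertion(seq):
            """Remove one element and reinsert it at another position."""
            seq = seq[:]
            i, j = random.sample(range(len(seq)), 2)
            seq.insert(j, seq.pop(i))
            return seq

        def inversion(seq):
            """Reverse the order of a randomly chosen segment."""
            i, j = sorted(random.sample(range(len(seq)), 2))
            return seq[:i] + seq[i:j + 1][::-1] + seq[j + 1:]

        def displacement(seq):
            """Cut a random segment and reinsert it at a random position."""
            i, j = sorted(random.sample(range(len(seq)), 2))
            segment, rest = seq[i:j + 1], seq[:i] + seq[j + 1:]
            k = random.randint(0, len(rest))
            return rest[:k] + segment + rest[k:]

        tour = list(range(10))                 # a task sequence for one robot
        print(swap(tour), insertion(tour), inversion(tour), displacement(tour))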

  14. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  15. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  16. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong tradeoff among parameters and of the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in model selection and optimization. If several conflicting objectives are involved, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong tradeoff between parameters, the uncertainties in the observations, the geophysical complexities, and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná Basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver functions.
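
    The Pareto-optimal preference mentioned above amounts to keeping the non-dominated models under the two misfits (receiver function and surface wave dispersion). The helper below is a generic illustration of that dominance test, assuming both misfits are to be minimized.

        def pareto_front(misfits):
            """Return indices of non-dominated models for a list of
            (receiver_function_misfit, surface_wave_misfit) pairs (both minimized)."""
            front = []
            for i, a in enumerate(misfits):
                dominated = any(
                    all(b[k] <= a[k] for k in range(len(a))) and
                    any(b[k] < a[k] for k in range(len(a)))
                    for j, b in enumerate(misfits) if j != i
                )
                if not dominated:
                    front.append(i)
            return front

        models = [(0.8, 0.3), (0.5, 0.5), (0.4, 0.9), (0.9, 0.9)]
        print(pareto_front(models))            # the last model is dominated and excluded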

  17. Advanced Covariance-Based Stochastic Inversion and Neuro-Genetic Optimization for Rosetta CONSERT Radar Data to Improve Spatial Resolution of Multi-Fractal Depth Profiles for Cometary Nucleus

    NASA Astrophysics Data System (ADS)

    Edenhofer, Peter; Ulamec, Stephan

    2015-04-01

    The paper is devoted to results of doctoral research work at the University of Bochum as applied to the radar transmission experiment CONSERT of the ESA cometary mission Rosetta. This research aims at achieving the limits of optimum spatial (and temporal) resolution for radar remote sensing by implementing covariance information concerned with error-balanced control as well as coherence of wave propagation effects through the random composite media involved (based on Joel Franklin's approach of extended stochastic inversion). As a consequence, the well-known inherent numerical instabilities of remote sensing are significantly reduced in a robust way by increasing the weight of the main diagonal elements of the resulting composite matrix to be inverted with respect to the off-diagonal elements, following synergy relations analogous to the principle of the correlation receiver in wireless telecommunications. It is shown that the enhancement of resolution for remote sensing holds for integral and differential equation approaches of inversion as well. In addition, the paper presents a discussion on how the efficiency of inversion for radar data is achieved by an overall optimization of the inversion due to a novel neuro-genetic approach. Such an approach is in synergy with the priority research program "Organic Computing" of the DFG / German Research Foundation. This Neuro-Genetic Optimization (NGO) turns out, firstly, to take into account more detailed physical information supporting further improved resolution, such as the process of accretion of the cometary nucleus, wave propagation effects from rough surfaces, ground clutter, nonlinear focusing, etc., and, secondly, to accelerate the computing process of inversion significantly, e.g., enabling online control of autonomous processes such as detection of unknown objects, navigation, etc. The paper describes in some detail how this neuro-genetic approach of optimization is incorporated into the procedure of data inversion by combining inverted artificial neural networks of adequately chosen topology and learning routines for short access times with the concept of genetic algorithms, enabling a multi-dimensional global optimum to be achieved subject to a properly constructed and problem-oriented target function, ensemble selection rules, etc. Finally, the paper discusses how the power of realistic simulation of the structures of the interior of a cometary nucleus can be improved by applying Benoit Mandelbrot's concept of fractal structures. It is shown how the fractal volumetric modelling of the nucleus of a comet can be accomplished by finite 3D elements of flexibility (serving topography and morphology as well), such as elements of tetrahedron shape with specific scaling factors of self-similarity and a Maxwellian type of distribution function. By applying the widely accepted fBm concept of fractional Brownian motion, the corresponding Hurst exponents 0 (rough) < H < 1 (smooth) can be derived for the multi-fractal depth (and terrain) profiles of the equivalent dielectric constant per tomographic angular orbital segment of intersection of transmissive radar ray paths with the nucleus of the comet. Cooperative efforts and work are in progress to achieve numerical results of depth profiles for the nucleus of comet 67P/Churyumov-Gerasimenko.

  18. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.

  19. An ambiguity of information content and error in an ill-posed satellite inversion

    NASA Astrophysics Data System (ADS)

    Koner, Prabhat

    According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content in a stochastic inversion. In the deterministic approach, this is referred to as the model resolution matrix (MRM; Menke, 1989). The analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degrees of freedom for signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES-13. The stochastic information content calculation is based on a linear assumption. The validity of such mathematics in satellite inversion will be questioned, because satellite inversion rests on nonlinear radiative transfer and ill-conditioned inverse problems. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C. D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
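
    For the linear-Gaussian retrieval case in Rodgers (2000), the averaging kernel and its trace (the DFS questioned above) follow directly from the Jacobian and the error covariances. The sketch below applies that textbook formula to toy matrices; it is only a numerical illustration of the quantity under discussion.

        import numpy as np

        def averaging_kernel(K, Sa, Se):
            """Linear-Gaussian averaging kernel A = G K with gain matrix
            G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (Rodgers, 2000)."""
            Se_inv = np.linalg.inv(Se)
            Sa_inv = np.linalg.inv(Sa)
            G = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv) @ K.T @ Se_inv
            return G @ K

        # toy retrieval: 5 state elements observed through 3 noisy channels
        rng = np.random.default_rng(4)
        K = rng.normal(size=(3, 5))            # Jacobian of the forward model
        Sa = np.eye(5)                         # prior (a priori) covariance
        Se = 0.1 * np.eye(3)                   # measurement-noise covariance
        A = averaging_kernel(K, Sa, Se)
        dfs = np.trace(A)                      # degrees of freedom for signal
        print(round(dfs, 2))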

  20. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  1. IPDO-2007: Inverse Problems, Design and Optimization Symposium

    DTIC Science & Technology

    2007-08-01

    Proceedings entry from the International Symposium on Inverse Problems, Design and Optimization (IPDO-2007), with contributions by Gligor Kanevce, Ljubica Kanevce, Vangelce Mitrevski, Igor Andreevski, and George Dulikravich.

  2. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm.

    PubMed

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim

    2017-06-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.

  3. Global Discontinuity Structure of the Mantle Transition Zone from Finite-Frequency Tomography of SS Precursors

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Zhou, Y.

    2017-12-01

    We report global structure of the 410-km and 660-km discontinuities from finite-frequency tomography using frequency-dependent traveltime measurements of SS precursors recorded at the Global Seismological Network (GSN). Finite-frequency sensitivity kernels for discontinuity depth perturbations are calculated in the framework of traveling-wave mode coupling. We parametrize the global discontinuities using a set of spherical triangular grid points and solve the tomographic inverse problem based on singular value decomposition. Our global 410-km and 660-km discontinuity models reveal distinctly different characteristics beneath the oceans and subduction zones. In general, oceanic regions are associated with a thinner mantle transition zone and depth perturbations of the 410-km and 660-km discontinuities are anti-correlated, in agreement with a thermal origin and an overall warm and dry mantle beneath the oceans. The perturbations are not uniform throughout the oceans but show strong small-scale variations, indicating complex processes in the mantle transition zone. In major subduction zones (except for South America where data coverage is sparse), depth perturbations of the 410-km and 660-km discontinuities are correlated, with both the 410-km and the 660-km discontinuities occurring at greater depths. The distributions of the anomalies are consistent with cold stagnant slabs just above the 660-km discontinuity and ascending return flows in a superadiabatic upper mantle.

  4. Dependence of image quality on image operator and noise for optical diffusion tomography

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Graber, Harry L.; Barbour, Randall L.

    1998-04-01

    By applying linear perturbation theory to the radiation transport equation, the inverse problem of optical diffusion tomography can be reduced to a set of linear equations, Wμ = R, where W is the weight function, μ is the vector of cross-section perturbations to be imaged, and R is the vector of perturbations of the detector readings. We have studied the dependence of image quality on added systematic error and/or random noise in W and R. Tomographic data were collected from cylindrical phantoms, with and without added inclusions, using Monte Carlo methods. Image reconstruction was accomplished using a constrained conjugate gradient descent method. Results show that accurate images containing few artifacts are obtained when W is derived from a reference state whose optical thickness matches that of the unknown test medium. Comparable image quality was also obtained for an unmatched W, but the location of the target becomes more inaccurate as the mismatch increases. Results of the noise study show that image quality is much more sensitive to noise in W than in R, and that the impact of noise increases with the number of iterations. Images reconstructed after pure noise was substituted for R consistently contain large peaks clustered about the cylinder axis, an initially unexpected structure. In other words, random input produces a non-random output. This finding suggests that algorithms sensitive to the evolution of this feature could be developed to suppress noise effects.
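
    The reconstruction step above solves Wμ = R with a constrained conjugate gradient method. The unconstrained core of such a scheme is conjugate gradients on the normal equations (CGLS), sketched here on synthetic W and R; the constraint handling of the original study is omitted.

        import numpy as np

        def cgls(W, R, n_iter=100):
            """Conjugate gradients on the normal equations W^T W mu = W^T R (CGLS).
            Stopping the iteration early acts as a form of regularization."""
            mu = np.zeros(W.shape[1])
            r = R - W @ mu
            s = W.T @ r
            p, gamma = s.copy(), s @ s
            for _ in range(n_iter):
                q = W @ p
                alpha = gamma / (q @ q)
                mu += alpha * p
                r -= alpha * q
                s = W.T @ r
                gamma_new = s @ s
                p = s + (gamma_new / gamma) * p
                gamma = gamma_new
            return mu

        # toy usage: synthetic weight matrix and detector-reading perturbations
        rng = np.random.default_rng(5)
        W = rng.normal(size=(200, 90))
        mu_true = rng.normal(size=90)
        R = W @ mu_true + 0.01 * rng.normal(size=200)
        mu_est = cgls(W, R, n_iter=30)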

  5. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm

    PubMed Central

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A.; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S.; Subramanian, Hariharan; Dravid, Vinayak P.; Backman, Vadim

    2018-01-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass–density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass–density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass–density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass–density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass–density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes. PMID:28416035

  6. A preprocessing strategy for helioseismic inversions

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.; Thompson, M. J.

    1993-05-01

    Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
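
    The preprocessing idea is to compress the large mode set into a much smaller set of effective combined kernels and data via a singular value decomposition before the expensive OLA step. A minimal numpy illustration of that reduction on toy kernels (not solar data) follows.

        import numpy as np

        def reduce_mode_set(K, d, tol=1e-8):
            """Compress M mode kernels (rows of K) and data d into r << M effective
            combinations by keeping singular values above tol * s_max."""
            U, s, Vt = np.linalg.svd(K, full_matrices=False)
            r = int(np.sum(s > tol * s[0]))
            K_red = np.diag(s[:r]) @ Vt[:r, :]     # reduced kernel matrix
            d_red = U[:, :r].T @ d                 # correspondingly combined data
            return K_red, d_red

        # toy problem: 2000 highly redundant kernels on a 100-point depth grid
        rng = np.random.default_rng(6)
        basis = rng.normal(size=(12, 100))         # only ~12 independent pieces of information
        K = rng.normal(size=(2000, 12)) @ basis
        d = rng.normal(size=2000)
        K_red, d_red = reduce_mode_set(K, d)
        print(K.shape, "->", K_red.shape)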

  7. Weak unique continuation property and a related inverse source problem for time-fractional diffusion-advection equations

    NASA Astrophysics Data System (ADS)

    Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro

    2017-05-01

    In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.

  8. Tomographic imaging of the shallow crustal structure of the East Pacific Rise at 9 deg 30 min N

    NASA Astrophysics Data System (ADS)

    Toomey, Douglas R.; Solomon, Sean C.; Purdy, G. M.

    1994-12-01

    Compressional wave travel times from a seismic tomography experiment at 9 deg 30 min N on the East Pacific Rise are analyzed by a new tomographic method to determine the three-dimensional seismic velocity structure of the upper 2.5 km of oceanic crust within a 20 x 18 km area centered on the rise axis. The data comprise the travel times and associated uncertainties of 1459 compressional waves that have propagated above the axial magma chamber. A careful analysis of source and receiver parameters, in conjunction with an automated method of picking P wave onsets and assigning uncertainties, constrains the prior uncertainty in the data to 5 to 20 ms. The new tomographic method employs graph theory to estimate ray paths and travel times through strongly heterogeneous and densely parameterized seismic velocity models. The nonlinear inverse method uses a jumping strategy to minimize a functional that includes the penalty function, horizontal and vertical smoothing constraints, and prior model assumptions; all constraints applied to model perturbations are normalized to remove bias. We use the tomographic method to reject the null hypothesis that the axial seismic structure is two-dimensional. Three-dimensional models reveal a seismic structure that correlates well with cross- and along-axis variations in seafloor morphology, the location of the axial summit caldera, and the distribution of seafloor hydrothermal activity. The along-axis segmentation of the seismic structure above the axial magma chamber is consistent with the hypothesis that mantle-derived melt is preferentially injected midway along a locally linear segment of the rise and that the architecture of the crustal section is characterized by an en echelon series of elongate axial volcanoes approximately 10 km in length. The seismic data are compatible with a 300- to 500-m-thick thermal anomaly above a midcrustal melt lens; such an interpretation suggests that hydrothermal fluids may not have penetrated this region in the last 10^3 years. Asymmetries in the seismic structure across the rise support the inferences that the thickness of seismic layer 2 and the average midcrustal temperature increase to the west of the rise axis. These anomalies may be the result of off-axis magmatism; alternatively, the asymmetric thermal anomaly may be the consequence of differences in the depth extent of hydrothermal cooling.

  9. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE PAGES

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    2017-10-29

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.

  10. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.

  11. Final Report: Resolving and Discriminating Overlapping Anomalies from Multiple Objects in Cluttered Environments

    DTIC Science & Technology

    2015-12-15

    Distinguishing an object of interest from innocuous items is the main problem that the UXO community is currently facing. This inverse problem demands fast and accurate representation of ...

  12. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistic formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle both advanced prior information, such as based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that in practice limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error, that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
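
    The speed-up described above comes from calling a fast surrogate inside the Monte Carlo loop instead of the full forward solver. The sketch below shows that pattern with a plain random-walk Metropolis sampler and a stand-in linear surrogate; a real application would use the trained network and fold the quantified modeling error into the likelihood.

        import numpy as np

        def metropolis(surrogate, d_obs, sigma, m0, n_steps=5000, step=0.1, seed=0):
            """Random-walk Metropolis sampling of p(m | d) where the forward
            response is evaluated with a fast surrogate model."""
            rng = np.random.default_rng(seed)

            def log_like(m):
                resid = surrogate(m) - d_obs
                return -0.5 * np.sum((resid / sigma) ** 2)

            m, ll = m0.copy(), log_like(m0)
            samples = []
            for _ in range(n_steps):
                m_prop = m + step * rng.normal(size=m.size)
                ll_prop = log_like(m_prop)
                if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject step
                    m, ll = m_prop, ll_prop
                samples.append(m.copy())
            return np.array(samples)

        # stand-in "surrogate": a trained neural network would replace this lambda
        G = np.random.default_rng(7).normal(size=(20, 5))
        surrogate = lambda m: G @ m
        d_obs = surrogate(np.ones(5)) + 0.05 * np.random.default_rng(8).normal(size=20)
        chain = metropolis(surrogate, d_obs, sigma=0.05, m0=np.zeros(5))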

  13. Equation for wave processes in inhomogeneous moving media and functional solution of the acoustic tomography problem based on it

    NASA Astrophysics Data System (ADS)

    Rumyantseva, O. D.; Shurup, A. S.

    2017-01-01

    The paper considers the derivation of the wave equation and the Helmholtz equation for solving the tomographic problem of reconstructing combined scalar-vector inhomogeneities describing perturbations of the sound velocity and absorption, the vector field of flows, and perturbations of the density of the medium. Restrictive conditions under which the obtained equations are meaningful are analyzed. Results of numerical simulation of the two-dimensional functional-analytical Novikov-Agaltsov algorithm for reconstructing the flow velocity using the obtained Helmholtz equation are presented.

  14. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented with respect to the frequency, from low to high. We also discuss the initial guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  15. Effective one-dimensional approach to the source reconstruction problem of three-dimensional inverse optoacoustics

    NASA Astrophysics Data System (ADS)

    Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.

    2017-09-01

    The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more challenging inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind, for which we explore efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting methodical progress on the computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
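
    As background to the numerical schemes mentioned above, a second-kind Volterra equation p(t) = g(t) + int_0^t K(t, s) p(s) ds can be solved by forward marching with the trapezoidal rule. The kernel K below is a generic placeholder; the specific optoacoustic diffraction kernel of the paper is not reproduced here.

      # Generic trapezoidal marching solver for a Volterra integral equation of the second kind.
      import numpy as np

      def solve_volterra_2nd_kind(K, g, t):
          """Solve p(t_i) = g_i + int_0^{t_i} K(t_i, s) p(s) ds; g holds samples of g(t)
          on the uniformly spaced grid t."""
          n, h = len(t), t[1] - t[0]
          p = np.empty(n)
          p[0] = g[0]                                   # the integral vanishes at t[0]
          for i in range(1, n):
              # trapezoidal rule: weight h/2 at both endpoints, h at interior nodes
              acc = 0.5 * K(t[i], t[0]) * p[0]
              acc += sum(K(t[i], t[j]) * p[j] for j in range(1, i))
              # the unknown p[i] enters with weight h/2; solve the resulting scalar equation
              p[i] = (g[i] + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
          return p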

  16. The inverse problem of sensing the mass and force induced by an adsorbate on a beam nanomechanical resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yun; Zhang, Yin

    2016-06-08

    The mass sensing superiority of a micro/nanomechanical resonator sensor over conventional mass spectrometry has been, or at least is being, firmly established. Because the sensing mechanism of a mechanical resonator sensor is the shift of resonant frequencies, how to link the shifts of resonant frequencies with the material properties of an analyte formulates an inverse problem. Besides the analyte/adsorbate mass, many other factors such as position and axial force can also cause shifts of the resonant frequencies. The in-situ measurement of the adsorbate position and axial force is extremely difficult if not impossible, especially when an adsorbate is as small as a molecule or an atom, and extra instruments are also required. In this study, an inverse problem of using three resonant frequencies to determine the mass, position and axial force is formulated and solved. The accuracy of the inverse problem solving method is demonstrated, and how the method can be used in a real application of a nanomechanical resonator is also discussed. Solving the inverse problem benefits the development and application of mechanical resonator sensors in two ways: reducing the need for extra experimental equipment and achieving better mass sensing by accounting for more factors.
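
    The three-frequencies-to-three-unknowns formulation above amounts to a small nonlinear system. A generic Newton iteration with a finite-difference Jacobian is sketched below; freq_model, which maps (mass, position, axial force) to the first three resonant frequencies, is a hypothetical placeholder for the beam model used in the paper.

      # Generic Newton solve of freq_model(x) = f_meas for x = (mass, position, axial force).
      import numpy as np

      def invert_three_frequencies(freq_model, f_meas, x0, tol=1e-10, max_iter=50):
          x = np.asarray(x0, float)
          f_meas = np.asarray(f_meas, float)
          for _ in range(max_iter):
              fx = freq_model(x)
              r = fx - f_meas                           # mismatch of the three frequencies
              if np.linalg.norm(r) < tol:
                  break
              J = np.empty((3, 3))
              for j in range(3):                        # forward-difference Jacobian columns
                  h = 1e-6 * max(1.0, abs(x[j]))
                  xp = x.copy()
                  xp[j] += h
                  J[:, j] = (freq_model(xp) - fx) / h
              x = x - np.linalg.solve(J, r)             # Newton update
          return x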

  17. X-ray tomography as a powerful method for zinc-air battery research

    NASA Astrophysics Data System (ADS)

    Franke-Lang, Robert; Arlt, Tobias; Manke, Ingo; Kowal, Julia

    2017-12-01

    X-ray tomography is used to investigate material redistribution and the effects of electrochemical reactions in a zinc-air battery in situ. For this, a special battery set-up is developed which meets tomographic and electrochemical requirements. The prepared batteries are discharged and some of them have partially been charged. To analyse the three-dimensional structure of the zinc and air electrodes, tomographic measurements are made in the charged and discharged condition without disassembling the battery. X-ray tomography gives the opportunity to detect and analyse three different effects within cell operation: tracking the morphology and transformation of the zinc and air electrodes, monitoring electrolyte decomposition and movement, and finding electrical misbehaviour due to parasitic reactions. It is therefore possible to identify the loss of capacity and the major problems of cyclability. The electrolyte strongly reacts with the pure zinc, which leads to gassing and a loss of electrolyte. This loss prevents charge-carrier exchange between the anode and the cathode and reduces the theoretical capacity. One of the chemical reactions produces hydroxylated zinc, namely zincate. The most crucial cyclability problems are caused by zincate movement into the catalyst layer. This assumption is confirmed by finding pure zinc areas within the catalyst layer.

  18. Linear single-step image reconstruction in the presence of nonscattering regions.

    PubMed

    Dehghani, H; Delpy, D T

    2002-06-01

    There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.

  19. Linear single-step image reconstruction in the presence of nonscattering regions

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Delpy, D. T.

    2002-06-01

    There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.

  20. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  1. Viscoelastic material inversion using Sierra-SD and ROL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis

    2014-11-01

    In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian-vector products through matrix-free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.

  2. An optimization method for the problems of thermal cloaking of material bodies

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.; Levin, V. A.

    2016-11-01

    Inverse heat-transfer problems related to constructing special thermal devices such as cloaking shells, thermal-illusion or thermal-camouflage devices, and heat-flux concentrators are studied. The heat-diffusion equation with a variable heat-conductivity coefficient is used as the initial heat-transfer model. An optimization method is used to reduce the above inverse problems to a respective control problem. The solvability of this control problem is proved, an optimality system that describes necessary extremum conditions is derived, and a numerical algorithm for solving the control problem is proposed.

  3. Complete Sets of Radiating and Nonradiating Parts of a Source and Their Fields with Applications in Inverse Scattering Limited-Angle Problems

    PubMed Central

    Louis, A. K.

    2006-01-01

    Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source vanish entirely outside of the reconstruction area. For the two-dimensional case, this paper provides special sets of functions which include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle, supported in a part of a disc, from data known on a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060

  4. Reconstruction of local perturbations in periodic surfaces

    NASA Astrophysics Data System (ADS)

    Lechleiter, Armin; Zhang, Ruming

    2018-03-01

    This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike purely periodic problems, the periodicity of the scattered field no longer holds, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, i.e., locating the support of the perturbation by a simple method; this step reduces the inverse problem in an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem. The perturbation is then approximated by a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.

  5. New Insights on Mt. Etna's Crust and Relationship with the Regional Tectonic Framework from Joint Active and Passive P-Wave Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Díaz-Moreno, A.; Barberi, G.; Cocina, O.; Koulakov, I.; Scarfì, L.; Zuccarello, L.; Prudencio, J.; García-Yeguas, A.; Álvarez, I.; García, L.; Ibáñez, J. M.

    2018-01-01

    In the Central Mediterranean region, the production of chemically diverse volcanic products (e.g., those from Mt. Etna and the Aeolian Islands archipelago) testifies to the complexity of the tectonic and geodynamic setting. Despite the large number of studies that have focused on this area, the relationships among volcanism, tectonics, magma ascent, and geodynamic processes remain poorly understood. We present a tomographic inversion of P-wave velocity using active and passive sources. Seismic signals were recorded using both temporary on-land and ocean bottom seismometers and data from a permanent local seismic network consisting of 267 seismic stations. Active seismic signals were generated using air gun shots mounted on the Spanish Oceanographic Vessel `Sarmiento de Gamboa'. Passive seismic sources were obtained from 452 local earthquakes recorded over a 4-month period. In total, 184,797 active P-phase and 11,802 passive P-phase first arrivals were inverted to provide three different velocity models. Our results include the first crustal seismic active tomography for the northern Sicily area, including the Peloritan-southern Calabria region and both the Mt. Etna and Aeolian volcanic environments. The tomographic images provide a detailed and complete regional seismotectonic framework and highlight a spatially heterogeneous tectonic regime, which is consistent with and extends the findings of previous models. One of our most significant results was a tomographic map extending to 14 km depth showing a discontinuity striking roughly NW-SE, extending from the Gulf of Patti to the Ionian Sea, south-east of Capo Taormina, corresponding to the Aeolian-Tindari-Letojanni fault system, a regional deformation belt. Moreover, for the first time, we observed a high-velocity anomaly located in the south-eastern sector of the Mt. Etna region, offshore of the Timpe area, which is compatible with the plumbing system of an ancient shield volcano located offshore of Mt. Etna.

  6. High-Resolution Imaging of Axial Volcano, Juan de Fuca ridge.

    NASA Astrophysics Data System (ADS)

    Arnulf, A. F.; Harding, A. J.; Kent, G. M.

    2014-12-01

    To date, seismic experiments have been key to our understanding of the internal structure of volcanic systems. However, most experiments, especially subaerial ones, are often restricted to refraction geometries with limited numbers of sources and receivers, and employ the smoothing constraints required by tomographic inversions, which produce smoothed and blurry images with spatial resolutions well below the length scale of important features that define these magmatic systems. Taking advantage of the high density of sources and receivers from multichannel seismic (MCS) data should, in principle, allow detailed images of velocity and reflectivity to be recovered. Unfortunately, the depth of mid-ocean ridges has the detrimental effect of concealing critical velocity information behind the seafloor reflection, preventing first-arrival travel-time tomographic approaches from imaging the shallowest and most heterogeneous part of the crust. To overcome the limitations of the acquisition geometry, here we use an innovative multistep approach. We combine a synthetic ocean bottom experiment (SOBE), 3-D traveltime tomography, 2D elastic full-waveform inversion and a reverse time migration (RTM) formalism, and present some of the most detailed imagery to date of a massive and complex magmatic system beneath Axial seamount, an active submarine volcano that lies at the intersection of the Juan de Fuca ridge and the Cobb-Eickelberg seamount chain. We present high-resolution images along 12 seismic lines that span the volcano. We refine the extent/volume of the main crustal magma reservoir that lies beneath the central caldera. We investigate the extent, volume and physical state of a secondary magma body present to the southwest and study its connections with the main magma reservoir. Additionally, we present a 3D tomographic model of the entire volcano that reveals a subsiding caldera floor that provides a near-perfect trap for the ponding of lava flows, supporting a "trapdoor" mechanism for caldera formation. Finally, we show that crustal aging (the increase in layer 2A velocity with age) is controlled by a pipe-like pattern of focused hydrothermal mineralization.

  7. Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel

    ERIC Educational Resources Information Center

    El-Gebeily, M.; Yushau, B.

    2008-01-01

    In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…

  8. Frechet derivatives for shallow water ocean acoustic inverse problems

    NASA Astrophysics Data System (ADS)

    Odom, Robert I.

    2003-04-01

    For any inverse problem, finding a model fitting the data is only half the problem. Most inverse problems of interest in ocean acoustics yield nonunique model solutions and involve inevitable trade-offs between model and data resolution and variance. Problems of uniqueness and of resolution-variance trade-offs can be addressed by examining the Frechet derivatives of the model-data functional with respect to the model variables. Tarantola [Inverse Problem Theory (Elsevier, Amsterdam, 1987), p. 613] published analytical formulas for the basic derivatives, e.g., derivatives of pressure with respect to elastic moduli and density. Other derivatives of interest, such as the derivative of transmission loss with respect to attenuation, can be easily constructed using the chain rule. For a range-independent medium the analytical formulas involve only the Green's function and the vertical derivative of the Green's function for the medium. A crucial advantage of the analytical formulas for the Frechet derivatives over numerical differencing is that they can be computed with a single pass of any program which supplies the Green's function. Various derivatives of interest in shallow water ocean acoustics are presented and illustrated by an application to the sensitivity of measured pressure to shallow water sediment properties. [Work supported by ONR.]
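
    As a worked instance of the chain-rule construction mentioned above (with the convention TL = -20 log10|p|), the derivative of transmission loss with respect to an attenuation parameter follows directly from the analytic derivative of the complex pressure. The snippet below is an illustrative sketch, not part of the original abstract.

      # Chain rule: TL = -20*log10(|p|)  =>  dTL/da = -(20/ln 10) * Re(conj(p)*dp/da) / |p|^2
      import numpy as np

      def dTL_dalpha(p, dp_dalpha):
          """p: complex pressure; dp_dalpha: Frechet derivative of p w.r.t. attenuation."""
          return -(20.0 / np.log(10.0)) * np.real(np.conj(p) * dp_dalpha) / np.abs(p) ** 2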

  9. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
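
    A minimal sketch of the projection step described above: the K nearest dictionary entries (in parameter space) supply model-error vectors whose span is removed from the residual before the likelihood is evaluated. The array names and the choice of a QR factorization are illustrative assumptions, not the authors' implementation.

      # Remove the locally spanned model-error component from the residual (illustration only).
      import numpy as np

      def corrected_residual(residual, m_prop, dict_params, dict_errors, k=10):
          """residual    : d_obs - approximate_forward(m_prop)
          dict_params : (N, n_param) parameter vectors stored in the dictionary
          dict_errors : (N, n_data) detailed-minus-approximate forward differences"""
          dist = np.linalg.norm(dict_params - m_prop, axis=1)   # distances in parameter space
          idx = np.argsort(dist)[:k]                            # K nearest neighbours
          Q, _ = np.linalg.qr(dict_errors[idx].T)               # orthonormal local error basis
          return residual - Q @ (Q.T @ residual)                # project out the model error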

  10. Identification of groundwater flow parameters using reciprocal data from hydraulic interference tests

    NASA Astrophysics Data System (ADS)

    Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto

    2016-08-01

    We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that the drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves in the database of an inverse problem is, to a certain extent, equivalent to duplicating some information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems with increased complexity levels. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. The uncertainty on optimal parameter estimates is influenced by the strength of measurement errors and is not significantly diminished or increased by adding noisy reciprocal information.

  11. Combining 3D Hydraulic Tomography with Tracer Tests for Improved Transport Characterization.

    PubMed

    Sanchez-León, E; Leven, C; Haslauer, C P; Cirpka, O A

    2016-07-01

    Hydraulic tomography (HT) is a method for resolving the spatial distribution of hydraulic parameters to some extent, but many details important for solute transport usually remain unresolved. We present a methodology to improve solute transport predictions by combining data from HT with the breakthrough curve (BTC) of a single forced-gradient tracer test. We estimated the three dimensional (3D) hydraulic-conductivity field in an alluvial aquifer by inverting tomographic pumping tests performed at the Hydrogeological Research Site Lauswiesen close to Tübingen, Germany, using a regularized pilot-point method. We compared the estimated parameter field to available profiles of hydraulic-conductivity variations from direct-push injection logging (DPIL), and validated the hydraulic-conductivity field with hydraulic-head measurements of tests not used in the inversion. After validation, spatially uniform parameters for dual-domain transport were estimated by fitting tracer data collected during a forced-gradient tracer test. The dual-domain assumption was used to parameterize effects of the unresolved heterogeneity of the aquifer and deemed necessary to fit the shape of the BTC using reasonable parameter values. The estimated hydraulic-conductivity field and transport parameters were subsequently used to successfully predict a second independent tracer test. Our work provides an efficient and practical approach to predict solute transport in heterogeneous aquifers without performing elaborate field tracer tests with a tomographic layout. © 2015, National Ground Water Association.

  12. Three-dimensional crustal structure of Long Valley caldera, California, and evidence for the migration of CO2 under Mammoth Mountain

    USGS Publications Warehouse

    Foulger, G.R.; Julian, B.R.; Pitt, A.M.; Hill, D.P.; Malin, P.E.; Shalev, E.

    2003-01-01

    A temporary network of 69 three-component seismic stations captured a major seismic sequence in Long Valley caldera in 1997. We performed a tomographic inversion for crustal structure beneath a 28 km × 16 km area encompassing part of the resurgent dome, the south moat, and Mammoth Mountain. Resolution of crustal structure beneath the center of the study volume was good down to ~3 km below sea level (~5 km below the surface). Relatively high wave speeds are associated with the Bishop Tuff and lower wave speeds characterize debris in the surrounding moat. A low-Vp/Vs anomaly extending from near the surface to ~1 km below sea level beneath Mammoth Mountain may represent a CO2 reservoir that is supplying CO2-rich springs, venting at the surface, and killing trees. We investigated temporal variations in structure beneath Mammoth Mountain by differencing our results with tomographic images obtained using data from 1989/1990. Significant changes in both Vp and Vs were consistent with the migration of CO2 into the upper 2 km or so beneath Mammoth Mountain and its depletion in peripheral volumes that correlate with surface venting areas. Repeat tomography is capable of detecting the migration of gas beneath active silicic volcanoes and may thus provide a useful volcano monitoring tool.

  13. Confirmation of a change in the global shear velocity pattern at around 1,000 km depth

    NASA Astrophysics Data System (ADS)

    Debayle, E.; Durand, S.; Ricard, Y. R.; Zaroli, C.; Lambotte, S.

    2017-12-01

    In this study, we confirm the existence of a change in the shear velocity spectrum around 1,000 km depth based on a new shear velocity tomographic model of the Earth's mantle, SEISGLOB2. This model is based on Rayleigh surface wave phase velocities, self- and cross-coupling structure coefficients of spheroidal normal modes, and body wave travel times which are, for the first time, combined in a tomographic inversion. SEISGLOB2 is developed up to spherical harmonic degree 40 and in 21 radial spline functions. The spectrum of SEISGLOB2 is the flattest (i.e., richest in "short" wavelengths corresponding to spherical harmonic degrees greater than 10) around 1,000 km depth, and this flattening occurs between 670 and 1,500 km depth. We also confirm various changes in the continuity of slabs and mantle plumes around 1,000 km depth, where we also observe the upper boundary of the LLSVPs. The existence of a flatter spectrum, richer in short-wavelength heterogeneities, in a region of the mid-mantle can have great impact on our understanding of mantle dynamics and should thus be better understood in the future. Although a viscosity increase, a phase change or a compositional change can all concur to induce this change of pattern, its precise origin is still very uncertain.

  14. Crustal seismic structure beneath the Deccan Traps area (Gujarat, India), from local travel-time tomography

    NASA Astrophysics Data System (ADS)

    Prajapati, Srichand; Kukarina, Ekaterina; Mishra, Santosh

    2016-03-01

    The Gujarat region in western India is known for its intra-plate seismic activity, including the Mw 7.7 Bhuj earthquake, a reverse-faulting event that reactivated normal faults of the Mesozoic Kachchh rift zone. The Late Cretaceous Deccan Traps, one of the largest igneous provinces on the Earth, cover the southern part of Gujarat. This study is aimed at shedding light on the crustal rift zone structure and the likely origin of the Traps, based on the velocity structure of the crust beneath Gujarat. Tomographic inversion of the Gujarat region was done using the non-linear, passive-source tomographic algorithm LOTOS. We use high-quality arrival times of 22,280 P and 22,040 S waves from 3555 events recorded from August 2006 to May 2011 at 83 permanent and temporary stations installed in Gujarat state by the Institute of Seismological Research (ISR). We conclude that the resulting high-velocity anomalies, which reach down to the Moho, are most likely related to intrusives associated with the Deccan Traps. Low-velocity anomalies are found in sediment-filled Mesozoic rift basins and are related to weakened zones of faults and fracturing. A low-velocity anomaly in the north of the region coincides with the seismogenic zone of the reactivated Kachchh rift system, which is apparently associated with the channel of the outpouring of Deccan basalt.

  15. Contrasts in lithospheric structure within the Australian craton—insights from surface wave tomography

    NASA Astrophysics Data System (ADS)

    Fishwick, S.; Kennett, B. L. N.; Reading, A. M.

    2005-03-01

    Contrasts in the seismic structure of the lithosphere within and between elements of the Australian Craton are imaged using surface wave tomography. New data from the WACRATON and TIGGER experiments are integrated with re-processed data from previous temporary deployments of broad-band seismometers and permanent seismic stations. The much improved path coverage in critical regions allows an interpretation of structures in the west of Australia, and a detailed comparison between different cratonic regions. Improvements to the waveform inversion procedure and a new multi-scale tomographic method increase the reliability of the tomographic images. In the shallowest part of the model (75 km) a region of lowered velocity is imaged beneath central Australia, and confirmed by the delayed arrival times of body waves for short paths. Within the cratonic lithosphere there is clearly structure at scale lengths of a few hundred kilometres; resolution tests indicate that path coverage within the continent is sufficient to reveal features of this size in the upper part of our model. In Western Australia, differences are seen beneath and within the Archaean cratons: at depths greater than 150 km faster velocities are imaged beneath the Yilgarn Craton than beneath the Pilbara Craton. In the complex North Australian Craton a fast wavespeed anomaly continuing to at least 250 km is observed below parts of the craton, suggesting the possibility of Archaean lithosphere underlying areas of dominantly Proterozoic surface geology.

  16. An approach to quantum-computational hydrologic inverse analysis

    DOE PAGES

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
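
    For orientation, a quantum annealer accepts problems in QUBO (quadratic unconstrained binary optimization) form. The toy sketch below shows how a linear inverse problem with binary parameters can be cast as a QUBO and, for a tiny problem, checked by classical brute force; it is an illustrative assumption about the formulation, not the D-Wave workflow used in the paper.

      # Toy QUBO cast of a linear-forward, binary-parameter inverse problem (illustration only).
      import itertools
      import numpy as np

      def qubo_from_least_squares(G, d):
          """||G m - d||^2 with m_i in {0,1} equals m^T Q m + const, since m_i^2 = m_i."""
          Q = G.T @ G                                # quadratic couplings
          Q = Q + np.diag(-2.0 * (G.T @ d))          # fold the linear term onto the diagonal
          return Q

      def brute_force_qubo(Q):
          n = Q.shape[0]
          best_energy, best_m = np.inf, None
          for bits in itertools.product((0, 1), repeat=n):   # feasible only for small n
              m = np.array(bits, float)
              energy = m @ Q @ m
              if energy < best_energy:
                  best_energy, best_m = energy, m
          return best_m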

  17. An approach to quantum-computational hydrologic inverse analysis.

    PubMed

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  18. Coupling of Large Amplitude Inversion with Other States

    NASA Astrophysics Data System (ADS)

    Pearson, John; Yu, Shanshan

    2016-06-01

    The coupling of a large amplitude motion with a small amplitude vibration remains one of the least well characterized problems in molecular physics. Molecular inversion poses a few unique and not intuitively obvious challenges to the large amplitude motion problem. In spite of several decades of theoretical work, numerous challenges persist in the calculation of transition frequencies and, more importantly, intensities. The most challenging aspect of this problem is that the inversion coordinate is a unique function of the overall vibrational state, including both the large and small amplitude modes. As a result, the r-axis system and the meaning of the K quantum number in the rotational basis set are unique to each vibrational state of large or small amplitude motion. This unfortunate reality has profound consequences for the calculation of intensities and the coupling of nearly degenerate vibrational states. The cases of NH3 inversion and inversion through a plane of symmetry in alcohols will be examined to find a general path forward.

  19. An approach to quantum-computational hydrologic inverse analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Daniel

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  20. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
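
    A stripped-down sketch of the mixed linear/non-linear idea above: non-linear parameters are explored with Metropolis steps and, for each proposal, the linearly entering parameters are obtained by an analytical least-squares solve. This profile-likelihood simplification, with a single known data weight sigma and placeholder names, is an illustrative assumption rather than the full hierarchical scheme of the paper.

      # Metropolis over non-linear parameters theta; analytic least squares for the linear b
      # in the model d = A(theta) b + noise (simplified illustration).
      import numpy as np

      def sample_mixed(A_of_theta, d, sigma, theta0, step, n_iter, seed=0):
          rng = np.random.default_rng(seed)

          def profile_loglike(theta):
              A = A_of_theta(theta)                        # design matrix for the linear part
              b, *_ = np.linalg.lstsq(A, d, rcond=None)    # analytical least-squares solve
              r = d - A @ b
              return -0.5 * np.sum(r ** 2) / sigma ** 2, b

          theta = np.asarray(theta0, float)
          ll, b = profile_loglike(theta)
          chain = []
          for _ in range(n_iter):
              prop = theta + step * rng.standard_normal(theta.size)
              ll_prop, b_prop = profile_loglike(prop)
              if np.log(rng.uniform()) < ll_prop - ll:      # Metropolis acceptance test
                  theta, ll, b = prop, ll_prop, b_prop
              chain.append((theta.copy(), b.copy()))
          return chain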

  1. Direct and inverse problems of studying the properties of multilayer nanostructures based on a two-dimensional model of X-ray reflection and scattering

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2014-06-01

    A mathematical model of X-ray reflection and scattering by multilayered nanostructures in the quasi-optical approximation is proposed. X-ray propagation and the electric field distribution inside the multilayered structure are considered with allowance for refraction, which is taken into account via the second derivative with respect to the depth of the structure. This model is used to demonstrate the possibility of solving inverse problems in order to determine the characteristics of irregularities not only over the depth (as in the one-dimensional problem) but also over the length of the structure. An approximate combinatorial method for system decomposition and composition is proposed for solving the inverse problems.

  2. a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.

  3. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
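
    The limited-memory BFGS update credited above with computational savings computes its search direction from recently stored model-update/gradient-change pairs via the standard two-loop recursion; a generic sketch follows, not tied to any particular waveform-inversion code.

      # Standard L-BFGS two-loop recursion for the quasi-Newton search direction -H*grad.
      import numpy as np

      def lbfgs_direction(grad, s_list, y_list):
          """s_list, y_list: stored pairs s_k = m_{k+1}-m_k, y_k = g_{k+1}-g_k (newest last)."""
          q = grad.copy()
          stack = []
          for s, y in zip(reversed(s_list), reversed(y_list)):   # first loop: newest to oldest
              rho = 1.0 / (y @ s)
              alpha = rho * (s @ q)
              q -= alpha * y
              stack.append((alpha, rho, s, y))
          if s_list:                                             # scale by an initial Hessian guess
              s, y = s_list[-1], y_list[-1]
              q *= (s @ y) / (y @ y)
          for alpha, rho, s, y in reversed(stack):               # second loop: oldest to newest
              beta = rho * (y @ q)
              q += (alpha - beta) * s
          return -q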

  4. Quantifying phosphoric acid in high-temperature polymer electrolyte fuel cell components by X-ray tomographic microscopy.

    PubMed

    Eberhardt, S H; Marone, F; Stampanoni, M; Büchi, F N; Schmidt, T J

    2014-11-01

    Synchrotron-based X-ray tomographic microscopy is investigated for imaging the local distribution and concentration of phosphoric acid in high-temperature polymer electrolyte fuel cells. Phosphoric acid fills the pores of the macro- and microporous fuel cell components. Its concentration in the fuel cell varies over a wide range (40-100 wt% H3PO4). This renders the quantification and concentration determination challenging. The problem is solved by using propagation-based phase contrast imaging and a referencing method. Fuel cell components with known acid concentrations were used to correlate greyscale values and acid concentrations. Thus calibration curves were established for the gas diffusion layer, catalyst layer and membrane in a non-operating fuel cell. The non-destructive imaging methodology was verified by comparing image-based values for acid content and concentration in the gas diffusion layer with those from chemical analysis.

  5. Gadolinium-enhanced computed tomographic angiography: current status.

    PubMed

    Rosioreanu, Alex; Alberico, Ronald A; Litwin, Alan; Hon, Man; Grossman, Zachary D; Katz, Douglas S

    2005-01-01

    This article reviews the research to date, as well as our clinical experience from two institutions, on gadolinium-enhanced computed tomographic angiography (gCTA) for imaging the body. gCTA may be an appropriate examination for the small percentage of patients who would benefit from noninvasive vascular imaging, but who have contraindications to both iodinated contrast and magnetic resonance imaging. gCTA is more expensive than CTA with iodinated contrast, due to the dose of gadolinium administered, and gCTA has limitations compared with CTA with iodinated contrast, in that parenchymal organs are not optimally enhanced at doses of 0.5 mmol/kg or lower. However, in our experience, gCTA has been a very useful problem-solving examination in carefully selected patients. With the advent of 16-64 detector CT, in combination with bolus tracking, we believe that the overall dose of gadolinium needed for diagnostic CTA examinations, while relatively high, can be safely administered.

  6. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.

  7. Control and System Theory, Optimization, Inverse and Ill-Posed Problems

    DTIC Science & Technology

    1988-09-14

    AFOSR-87-0350, 1987-1988. Control and system theory, optimization, inverse and ill-posed problems: ... a considerable variety of research investigations within the grant areas (Control and system theory, Optimization, and Ill-posed problems).

  8. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite-element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  9. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
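
    The matrix-inversion view of 1D deconvolution referenced above can be sketched concretely: build the (Toeplitz) convolution matrix for a known kernel and recover the input with a pseudo-inverse. The kernel and signal below are made-up examples, and this is the pseudo-inverse baseline rather than the report's neural-network code.

      # 1D deconvolution as matrix inversion: Toeplitz convolution matrix plus pseudo-inverse.
      import numpy as np

      def convolution_matrix(kernel, n):
          """Matrix C such that C @ x equals np.convolve(kernel, x) (full convolution)."""
          C = np.zeros((len(kernel) + n - 1, n))
          for j in range(n):
              C[j:j + len(kernel), j] = kernel
          return C

      kernel = np.array([0.25, 0.5, 0.25])           # example blurring kernel
      x = np.zeros(32)
      x[10], x[20] = 1.0, -0.5                       # example sparse input signal
      C = convolution_matrix(kernel, len(x))
      y = C @ x                                      # "measured" blurred data
      x_rec = np.linalg.pinv(C) @ y                  # least-squares / pseudo-inverse estimate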

  10. Genetics Home Reference: Koolen-de Vries syndrome

    MedlinePlus

    ... of Koolen-de Vries syndrome, has undergone an inversion. An inversion involves two breaks in a chromosome; the resulting ... lineage have no health problems related to the inversion. However, genetic material can be lost or duplicated ...

  11. A Multi-scale Finite-frequency Approach to the Inversion of Reciprocal Travel Times for 3-D Velocity Structure beneath Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Hung, S.; Kuo, B.; Kuochen, H.

    2012-12-01

    Taiwan is one of the archetypical places in the world for studying active orogenic processes, where the Luzon arc has obliquely collided with the southwest China continental margin since 5 Ma ago. Because of the lack of convincing evidence for the structure in the lithospheric mantle and at even greater depths, several competing models have been proposed for the Taiwan mountain-building process. With the deployment of ocean-bottom seismometers (OBSs) on the seafloor around Taiwan from the TAIGER (TAiwan Integrated GEodynamic Research) and IES seismic experiments, the aperture of the seismic network is greatly extended to improve the depth resolution of tomographic imaging, which is critical to illuminate the nature of the arc-continent collision and accretion in Taiwan. In this study, we use relative travel-time residuals between a collection of teleseismic body wave arrivals to tomographically image the velocity structure beneath Taiwan. In addition to those from common distant earthquakes observed across an array of stations, we take advantage of dense seismicity in the vicinity of Taiwan and source-receiver reciprocity to augment the data coverage from clustered earthquakes recorded by global stations. As waveforms depend on source mechanisms, we carry out a cluster analysis to group the phase arrivals with similar waveforms into clusters and simultaneously determine relative travel-time anomalies in the same cluster accurately by a cross-correlation method. The combination of these two datasets particularly enhances the resolvability of the tomographic models offshore of eastern Taiwan, where two subduction systems of opposite polarity are active and have primarily shaped the present tectonic framework of Taiwan. On the other hand, our inversion adopts an innovative wavelet-based, multi-scale parameterization together with finite-frequency theory. Not only does this approach make full use of frequency-dependent travel-time data that provide different but complementary sensitivity to velocity heterogeneity, it also objectively addresses the intrinsically multi-scale character of unevenly distributed data, yielding a model with spatially varying, data-adaptive resolution. Besides, we employ a parallelized singular value decomposition algorithm to directly solve for the resolution matrix and point spread functions (PSFs). Treating the spatial distribution of a PSF as the probability density function of a multivariate normal distribution, we employ principal component analysis (PCA) to estimate the lengths and directions of the principal axes of the PSF distribution, which are used for quantitative assessment of the resolvable scale length and degree of smearing of the model, and as guidance for interpreting the robust and trustworthy features in the resolved models.

  12. Three-dimensional inversion of multisource array electromagnetic data

    NASA Astrophysics Data System (ADS)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this end, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximations. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least-squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
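
    The first of the three linear sub-problems mentioned above is solved with a weighted regularized linear conjugate gradient method. The sketch below is a minimal, generic version of such a regularized conjugate-gradient step (data weighting is omitted for brevity; the matrix, data, and regularization parameter are illustrative assumptions, not the dissertation's code).

```python
# Generic regularized conjugate-gradient solve for a linear inverse step:
# minimize ||A m - d||^2 + lam * ||m - m0||^2 over the model update m.
import numpy as np

def reg_cg(A, d, lam, m0, n_iter=200, tol=1e-8):
    """Conjugate gradients on the regularized normal equations
    (A^T A + lam I) m = A^T d + lam m0."""
    AtA = A.T @ A + lam * np.eye(A.shape[1])
    rhs = A.T @ d + lam * m0
    m = m0.copy()
    r = rhs - AtA @ m
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = AtA @ p
        alpha = rs / (p @ Ap)
        m += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m

# Toy usage: a small linear system standing in for the sensitivity matrix
# of one linearized EM sub-problem (sizes and values are assumptions).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
m_true = rng.standard_normal(20)
d = A @ m_true + 0.01 * rng.standard_normal(50)
m_est = reg_cg(A, d, lam=0.1, m0=np.zeros(20))
print("relative model error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```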

  13. Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

    The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
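
    For context, the fixed-dictionary setting that the sensitivity analysis starts from can be sketched with a standard solver. The snippet below uses ISTA (iterative soft thresholding), a common algorithm for the ℓ1-regularized problem with a given dictionary; it is not the paper's method, the dictionary-design step is not reproduced, and all sizes and values are assumptions.

```python
# Sparse recovery with a *given* dictionary D via ISTA for
#   min_x 0.5 * ||D x - y||^2 + lam * ||x||_1
import numpy as np

def ista(D, y, lam, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))              # assumed over-complete dictionary
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
y = D @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(D, y, lam=0.05)
print("nonzero coefficients recovered:", np.count_nonzero(np.abs(x_hat) > 0.05))
```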

  14. Inverse scattering transform and soliton classification of the coupled modified Korteweg-de Vries equation

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Geng, Xianguo

    2017-12-01

    The inverse scattering transform of the coupled modified Korteweg-de Vries equation is studied by the Riemann-Hilbert approach. In the direct scattering process, the spectral analysis of the Lax pair is performed, from which a Riemann-Hilbert problem is established for the equation. In the inverse scattering process, by solving Riemann-Hilbert problems corresponding to the reflectionless cases, three types of multi-soliton solutions are obtained. The multi-soliton classification is based on the zero structures of the Riemann-Hilbert problem. In addition, some figures are given to illustrate the soliton characteristics of the coupled modified Korteweg-de Vries equation.
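
    Schematically, the Riemann-Hilbert approach seeks a sectionally analytic matrix function with a prescribed jump across a contour; the generic structure shown below is for orientation only (the specific jump matrix and pole conditions for the coupled modified Korteweg-de Vries equation are derived in the paper from the spectral analysis of its Lax pair).

```latex
% Generic structure of a Riemann-Hilbert problem in the inverse scattering
% transform (schematic; not the paper's specific jump matrix J).
\begin{aligned}
& M(x,t;k)\ \text{is analytic for } k \in \mathbb{C} \setminus \Sigma, \\
& M_{+}(x,t;k) = M_{-}(x,t;k)\, J(x,t;k), \qquad k \in \Sigma, \\
& M(x,t;k) \to I \quad \text{as } k \to \infty .
\end{aligned}
```

    In the reflectionless case the jump matrix reduces to the identity and the problem is determined by simple poles with residue conditions encoding the discrete scattering data; solving the resulting algebraic system yields the multi-soliton solutions, whose classification follows from the zero structure of the Riemann-Hilbert problem, as stated in the abstract above.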

  15. Individual differences in children's understanding of inversion and arithmetical skill.

    PubMed

    Gilmore, Camilla K; Bryant, Peter

    2006-06-01

    In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between their conceptual understanding and arithmetical skills. A group of 127 children from primary schools took part in the study, drawn from two age groups (6-7 and 8-9 years). Children's accuracy on inverse and control problems, presented in a variety of formats and in canonical and non-canonical forms, was measured. Tests of general arithmetic ability were also administered. Children consistently performed better on inverse than on control problems, indicating that they could make use of the inverse principle. Presentation format affected performance: picture presentation allowed children to apply their conceptual understanding flexibly regardless of problem type, while word problems restricted their ability to use their conceptual knowledge. Cluster analyses revealed three subgroups with different profiles of conceptual understanding and arithmetical skill. Children in the 'high ability' and 'low ability' groups showed conceptual understanding in line with their arithmetical skill, while a third group had more advanced conceptual understanding than arithmetical skill. The three subgroups may represent different points along a single developmental path or distinct developmental paths. The existence of these three groups has important consequences for education: it demonstrates the importance of considering the pattern of each child's conceptual understanding and problem-solving skills.
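
    A minimal illustration of the two item types (an assumption about the conventional structure of such tasks, not the study's materials): inverse problems take the form a + b - b, which can be answered via the inversion principle without calculating, while control problems take the form a + b - c, which must actually be computed.

```python
# Illustrative generator for inverse (a + b - b) versus control (a + b - c)
# arithmetic items; the item structure is an assumption for illustration.
import random

def make_item(inverse: bool):
    a, b = random.randint(2, 20), random.randint(2, 20)
    if inverse:
        return f"{a} + {b} - {b} = ?", a           # answer follows from inversion
    c = random.randint(2, b)                        # keep the result non-negative
    return f"{a} + {b} - {c} = ?", a + b - c        # must be computed

for kind in (True, False):
    problem, answer = make_item(kind)
    print(("inverse:" if kind else "control:"), problem, "->", answer)
```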

  16. On stability of the solutions of inverse problem for determining the right-hand side of a degenerate parabolic equation with two independent variables

    NASA Astrophysics Data System (ADS)

    Kamynin, V. L.; Bukharova, T. I.

    2017-01-01

    We prove estimates of stability, with respect to perturbations of the input data, for solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that their constants are written out explicitly in terms of the input data of the problem.
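
    As a loose schematic of this class of problems (an assumption about the general setting of right-hand-side identification from an integral overdetermination condition; the paper's exact formulation, coefficients, and norms may differ), the unknown time-dependent factor f(t) is recovered from extra data, and the stability estimate bounds its perturbation by perturbations of the input data with an explicitly computable constant:

```latex
% Assumed generic setting for right-hand-side identification in a
% degenerate parabolic equation with integral overdetermination
% (schematic only; not the paper's precise formulation).
\begin{aligned}
& u_t - a(x,t)\,u_{xx} + b(x,t)\,u_x + c(x,t)\,u = f(t)\,g(x,t), \\
& \int_{\Omega} u(x,t)\,\omega(x)\,dx = \varphi(t), \qquad t \in [0,T], \\
& \|f_1 - f_2\| \;\le\; C\big(\|\varphi_1 - \varphi_2\| + \|g_1 - g_2\| + \cdots\big),
\end{aligned}
```

    where the constant C is written out explicitly in terms of the input data, as stated in the abstract above.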

  17. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  18. ON THE GEOSTATISTICAL APPROACH TO THE INVERSE PROBLEM. (R825689C037)

    EPA Science Inventory

    Abstract

    The geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis. Although the geostatistical approach is occasionally misconstrued as mere cokriging, in fact it consists of two steps: estimation of statist...

  19. On a local solvability and stability of the inverse transmission eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Bondarenko, Natalia; Buterin, Sergey

    2017-11-01

    We prove a local solvability and stability of the inverse transmission eigenvalue problem posed by McLaughlin and Polyakov (1994 J. Diff. Equ. 107 351-82). In particular, this result establishes the minimality of the data used therein. The proof is constructive.

  20. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework

    PubMed Central

    Dunkerley, David A. P.; Tomkowiak, Michael T.; Slagowski, Jordan M.; McCabe, Bradley P.; Funk, Tobias; Speidel, Michael A.

    2015-01-01

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). The detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), the measured x-ray scatter fraction was 2.8–6.4% (18.6–31.5 cm acrylic, 100 kV), versus 2.1–4.5% in MC simulation. The experimental trends in scatter versus detector size and phantom thickness were reproduced in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems. PMID:26113765
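
    Two of the summary metrics quoted above, the detected scatter fraction and the kerma-area-product (KAP) reduction, can be computed from simple tallies. The sketch below uses invented numbers purely for illustration; it does not reproduce MC-GPU output or the paper's data.

```python
# Illustrative computation of scatter fraction and KAP reduction from
# hypothetical Monte Carlo tallies (all values invented for this sketch).
import numpy as np

# Per-detector-pixel photon tallies separated into primary and scattered.
primary = np.full((32, 64), 1000.0)            # assumed primary counts
scatter = np.full((32, 64), 40.0)              # assumed scattered counts

# Scatter fraction: scattered / (primary + scattered) over the detector.
scatter_fraction = scatter.sum() / (primary.sum() + scatter.sum())
print(f"scatter fraction: {100 * scatter_fraction:.1f}%")

# Kerma-area-product reduction from regional adaptive exposure (RAE):
# relative drop versus a conventional (uniform-exposure) scan.
kap_regular = 1.00                             # assumed, arbitrary units
kap_rae = 0.55                                 # assumed, arbitrary units
print(f"KAP reduction with RAE: {100 * (1 - kap_rae / kap_regular):.0f}%")
```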
