Sample records for tomographic inversion algorithms

  1. Tomography and the Herglotz-Wiechert inverse formulation

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.

    1990-04-01

    In this paper, linearized tomography and the Herglotz-Wiechert inverse formulation are compared. Tomographic inversions for 2-D or 3-D velocity structure use line integrals along rays and can be written in terms of Radon transforms. For radially concentric structures, Radon transforms are shown to reduce to Abel transforms. Therefore, for straight ray paths, the Abel transform of travel-time is a tomographic algorithm specialized to a one-dimensional radially concentric medium. The Herglotz-Wiechert formulation uses seismic travel-time data to invert for one-dimensional earth structure and is derived using exact ray trajectories by applying an Abel transform. This is of historical interest since it would imply that a specialized tomographic-like algorithm has been used in seismology since the early part of the century (see Herglotz, 1907; Wiechert, 1910). Numerical examples are performed comparing the Herglotz-Wiechert algorithm and linearized tomography along straight rays. Since the Herglotz-Wiechert algorithm is applicable under specific conditions (the absence of low-velocity zones) to non-straight ray paths, the association with tomography may prove useful in assessing the uniqueness of tomographic results generalized to curved ray geometries.
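
    As a concrete illustration (not taken from the paper), here is a minimal numerical sketch of the Herglotz-Wiechert inversion of a travel-time curve, assuming a monotonically decreasing ray parameter p(Delta), i.e. no low-velocity zone; grid names and units are illustrative:

    ```python
    import numpy as np

    def herglotz_wiechert(delta, p, R=6371.0):
        """Herglotz-Wiechert inversion of a travel-time curve (an Abel-transform
        result): ln(R/r1) = (1/pi) * integral_0^Delta1 arccosh(p/p1) dDelta,
        with v(r1) = r1/p1 at the ray's turning radius.

        delta : epicentral distances in radians, increasing
        p     : ray parameter dT/dDelta (s/rad), monotonically decreasing
        R     : planetary radius in km
        """
        r = np.empty_like(p)
        v = np.empty_like(p)
        r[0], v[0] = R, R / p[0]
        for i in range(1, len(delta)):
            # integrand arccosh(p/p1) on [0, delta_i]; it vanishes at the endpoint
            f = np.arccosh(np.maximum(p[: i + 1] / p[i], 1.0))
            integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(delta[: i + 1]))
            r[i] = R * np.exp(-integral / np.pi)   # turning radius of this ray
            v[i] = r[i] / p[i]                     # velocity at the turning radius
        return r, v
    ```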

  2. A multiresolution inversion for imaging the ionosphere

    NASA Astrophysics Data System (ADS)

    Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo

    2017-06-01

    Ionospheric tomography has been widely employed in imaging large-scale ionospheric structures at both quiet and storm times. However, the tomographic algorithms developed to date have not been very effective in imaging medium- and small-scale ionospheric structures, owing to uneven ground-based data distributions and to limitations of the algorithms themselves. Further, the effect of the density and quantity of Global Navigation Satellite Systems data on the tomographic results of a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using intensive ground GPS observation data and is demonstrated over the U.S. West Coast during the period of 16-18 March 2015, which includes an ionospheric storm. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different data distributions are grouped from the same data source in order to investigate the impact of ground station density on ionospheric tomography results. Finally, it is concluded that the multipass inversion approach offers an improvement. Ground data density can affect tomographic results, but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked, there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.

  3. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.

  4. Hybrid-dual-Fourier tomographic algorithm for a fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden in 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  5. The analysis of a rocket tomography measurement of the N2+ 3914 Å emission and N2 ionization rates in an auroral arc

    NASA Technical Reports Server (NTRS)

    Mcdade, Ian C.

    1991-01-01

    Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914 Å rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914 Å volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three preprints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
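
    For reference, a minimal sketch of the ART (Kaczmarz) iteration with a relaxation factor, the building block on which least-squares and maximum-probability variants such as those in the abstract are built; the geometry matrix and relaxation value are illustrative, and the paper's specific relaxation schemes are not reproduced:

    ```python
    import numpy as np

    def art(A, b, n_sweeps=10, relax=0.5, x0=None):
        """Algebraic reconstruction technique (Kaczmarz) with a relaxation
        factor (0 < relax <= 1) that damps the influence of noisy ray sums.

        A : (n_rays, n_pixels) matrix of path lengths through each pixel
        b : measured line integrals (e.g., column emission rates)
        """
        x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
        row_norm2 = (A ** 2).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norm2[i] == 0.0:
                    continue
                resid = b[i] - A[i] @ x
                # project onto the hyperplane of ray i, scaled by relax
                x += relax * (resid / row_norm2[i]) * A[i]
        return x
    ```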

  6. Creating realistic models and resolution assessment in tomographic inversion of wide-angle active seismic profiling data

    NASA Astrophysics Data System (ADS)

    Stupina, T.; Koulakov, I.; Kopp, H.

    2009-04-01

    We consider questions of creating structural models and of resolution assessment in tomographic inversion of wide-angle active seismic profiling data. For our investigations, we use the PROFIT (Profile Forward and Inverse Tomographic modeling) algorithm, which was tested earlier with different datasets. Here we consider offshore seismic profiling data from three areas (Chile, Java and the Central Pacific). Two of the study areas are characterized by subduction zones, whereas the third data set covers a seamount province. We have explored different algorithmic issues concerning the quality of the solution, such as (1) resolution assessment using different sizes and complexity of synthetic anomalies; (2) grid spacing effects; (3) amplitude damping and smoothing; (4) criteria for rejection of outliers; (5) quantitative criteria for comparing models. Having determined optimal algorithmic parameters for the observed seismic profiling data, we have created structural synthetic models which reproduce the results of the observed data inversion. For the Chilean and Java subduction zones our results show similar patterns: a relatively thin sediment layer on the oceanic plate, thicker inhomogeneous sediments in the overriding plate and a large area of very strong low velocity anomalies in the accretionary wedge. For two seamounts in the Pacific we observe high velocity anomalies in the crust which can be interpreted as frozen channels inside the dormant volcano cones. Along both profiles we obtain considerable crustal thickening beneath the seamounts.

  7. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically more computationally efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  8. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distribution in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to find the optimal smoothness criterion for applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography.
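
    A minimal sketch of one common form of such a regularized nonlinear inversion step, linearized Gauss-Newton with Tikhonov smoothing; the `forward` and `jacobian` callables and the first-difference regularizer are illustrative stand-ins, not the paper's operators:

    ```python
    import numpy as np

    def gauss_newton_tikhonov(forward, jacobian, m0, d_obs, alpha, n_iter=8):
        """At each step solve the Tikhonov-regularized normal equations
            (J^T J + alpha L^T L) dm = J^T (d_obs - g(m)) - alpha L^T L m
        where g is the forward model and J its Jacobian at the current model.
        """
        m = m0.copy()
        n = len(m0)
        L = np.eye(n) - np.eye(n, k=1)   # first-difference smoothness operator
        for _ in range(n_iter):
            r = d_obs - forward(m)       # data residual at the current model
            J = jacobian(m)
            lhs = J.T @ J + alpha * (L.T @ L)
            rhs = J.T @ r - alpha * (L.T @ L @ m)
            m += np.linalg.solve(lhs, rhs)
        return m
    ```

    Larger alpha favors smoother velocity/attenuation models at the cost of data fit, which is exactly the trade-off the abstract's closing sentence refers to.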

  9. A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy

    PubMed Central

    Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.

    2017-01-01

    One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997

  10. Direct integration of the inverse Radon equation for X-ray computed tomography.

    PubMed

    Libin, E E; Chakhlov, S V; Trinca, D

    2016-11-22

    A new mathematical approach using the inverse Radon equation for image restoration in problems of linear two-dimensional X-ray tomography is formulated. The approach does not use the Fourier transform, which makes it possible to create practical computing algorithms with a more reliable mathematical foundation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.

  11. Estimating crustal heterogeneity from double-difference tomography

    USGS Publications Warehouse

    Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.

    2006-01-01

    Seismic velocity parameters in limited, but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. Monteiller et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using Monteiller et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf allows improvement of the double-difference tomographic result when using picking difference data. We completed our study by investigating the use of spatially discontinuous time-delay data.
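
    A minimal sketch of how a hyperbolic-secant error pdf changes the inversion in practice: the corresponding robust least-squares problem can be solved by iteratively reweighted least squares (IRLS), with weights that taper for large residuals so outlying time-delays pull the model less; names and the scale handling are illustrative, not the authors' code:

    ```python
    import numpy as np

    def sech_irls_weights(residuals, sigma):
        """IRLS weights for a sech error pdf: minimizing sum log cosh(r/sigma)
        gives w = tanh(r/sigma)/(sigma*r); a Gaussian pdf would give the
        constant weight 1/sigma**2 (ordinary least squares)."""
        r = residuals / sigma
        t = np.ones_like(r)                      # limit tanh(r)/r -> 1 as r -> 0
        nz = np.abs(r) > 1e-12
        t[nz] = np.tanh(r[nz]) / r[nz]
        return t / sigma**2

    def irls_solve(A, b, sigma, n_iter=10):
        """Robust linear inversion by iteratively reweighted least squares."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(n_iter):
            w = sech_irls_weights(b - A @ x, sigma)
            sw = np.sqrt(w)
            x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
        return x
    ```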

  12. SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring

    NASA Astrophysics Data System (ADS)

    Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.

    2015-12-01

    We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion-based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
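
    A minimal sketch of the Richardson-Lucy iteration for Poisson-distributed counts, with an optional smoothing step between iterations as a crude stand-in for the pseudo-diffusion regularization described above; the projection matrix and parameters are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def richardson_lucy(A, counts, n_iter=50, smooth_sigma=None, shape=None):
        """Richardson-Lucy emission-tomography iteration:
            x <- x * A^T(counts / (A x)) / A^T 1
        which preserves non-negativity and is the MLE iteration for Poisson data.

        A      : (n_los, n_vox) projection matrix of path lengths
        counts : measured counts along each line of sight
        shape  : voxel grid shape, needed only if smoothing between iterations
        """
        x = np.ones(A.shape[1])
        norm = A.sum(axis=0)                 # per-voxel sensitivity A^T 1
        norm[norm == 0] = 1.0
        for _ in range(n_iter):
            model = A @ x
            model[model == 0] = 1e-12        # guard against division by zero
            x *= (A.T @ (counts / model)) / norm
            if smooth_sigma is not None:     # crude inter-iteration regularization
                x = gaussian_filter(x.reshape(shape), smooth_sigma).ravel()
        return x
    ```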

  13. Blind test of methods for obtaining 2-D near-surface seismic velocity models from first-arrival traveltimes

    USGS Publications Warehouse

    Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.

    2013-01-01

    Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, the shot-receiver geometry and the data's frequency content and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and of the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.

  14. Tomographic Neutron Imaging using SIRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregor, Jens; FINNEY, Charles E A; Toops, Todd J

    2013-01-01

    Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while material such as metal is nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone beam x-ray CT and other inverse problems.
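
    A minimal sketch of SIRT with optional Tikhonov damping; the system-augmentation trick below is a common way to add the regularization, and is not necessarily the PSIRT variant used in the paper:

    ```python
    import numpy as np

    def sirt(A, b, n_iter=100, relax=1.0, tikhonov=0.0):
        """Simultaneous iterative reconstruction technique:
            x <- x + relax * C A^T R (b - A x)
        with R, C the inverse row and column sums of the (non-negative)
        system matrix. Tikhonov damping is added by augmenting the system
        with sqrt(alpha)*I rows and zero data.
        """
        if tikhonov > 0.0:
            A = np.vstack([A, np.sqrt(tikhonov) * np.eye(A.shape[1])])
            b = np.concatenate([b, np.zeros(A.shape[1])])
        R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
        C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += relax * C * (A.T @ (R * (b - A @ x)))
        return x
    ```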

  15. Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined with later arrival times

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao

    2016-04-01

    Traditional traveltime inversion for anisotropic media is, in general, based on a "weak" assumption about the anisotropic property, which simplifies both the forward part (ray tracing is performed only once) and the inversion part (a linear inversion solver is possible). But for some real applications, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm that can handle the general (including "strong") anisotropic medium and also design a non-linear inversion solver for the subsequent tomography. Meanwhile, it is instructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. With this motivation, we combined our newly developed ray tracing algorithm (multistage irregular shortest-path method) for general anisotropic media with a non-linear inversion solver (a damped minimum-norm, constrained least squares problem with a conjugate gradient approach) to formulate a non-linear traveltime inversion procedure for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover high-contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, though not as much as in the isotropic case, because the sensitivities (or derivatives) of the different velocities (qP, qSV and qSH) with respect to the different elastic parameters are not the same and also depend on the inclination angle.
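
    A minimal sketch of a damped minimum-norm update step of the kind described, using a conjugate-gradient-type solver (LSQR); the sensitivity matrix and residual vector are random stand-in data, not derivatives of qP/qSV/qSH traveltimes:

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import lsqr

    # Solve min ||G dm - dt||^2 + damp^2 ||dm||^2 for one model update.
    rng = np.random.default_rng(0)
    G = csr_matrix(rng.random((500, 200)))   # hypothetical sensitivity matrix
    dt = rng.standard_normal(500)            # first + later-arrival residuals
    dm = lsqr(G, dt, damp=0.1)[0]            # damped least-squares model update
    ```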

  16. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Z; Terry, N; Hubbard, S S

    2013-02-12

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability distribution functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
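
    A minimal sketch of the sampling side of such a framework: quasi-Monte Carlo (Sobol) draws from a prior over pilot-point permittivities, weighted by a Gaussian kernel on the forward-model misfit. The forward model, bounds, and data below are illustrative stand-ins, not the paper's curved-ray model:

    ```python
    import numpy as np
    from scipy.stats import qmc

    n_pilot, n_samples = 12, 1024                      # power of 2 for Sobol
    sampler = qmc.Sobol(d=n_pilot, scramble=True)
    u = sampler.random(n_samples)                      # uniform in [0, 1)^d
    eps = qmc.scale(u, l_bounds=[4.0] * n_pilot,       # prior permittivity range
                    u_bounds=[25.0] * n_pilot)

    def forward(model):
        # hypothetical forward model stand-in (one scalar datum per sample)
        return model.sum(axis=-1, keepdims=True)

    d_obs = np.array([150.0])
    misfit = np.linalg.norm(forward(eps) - d_obs, axis=1)
    h = float(misfit.std()) or 1.0                     # kernel bandwidth
    weights = np.exp(-0.5 * (misfit / h) ** 2)         # Gaussian kernel likelihood
    weights /= weights.sum()
    posterior_mean = weights @ eps                     # weighted posterior estimate
    ```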

  17. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.

    2013-02-22

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  18. Joint body and surface wave tomography applied to the Toba caldera complex (Indonesia)

    NASA Astrophysics Data System (ADS)

    Jaxybulatov, Kairly; Koulakov, Ivan; Shapiro, Nikolai

    2016-04-01

    We developed a new algorithm for joint body and surface wave tomography. The algorithm is a modification of the existing LOTOS code (Koulakov, 2009) developed for local earthquake tomography. The input data for the new method are travel times of P and S waves and dispersion curves of Rayleigh and Love waves. The main idea is that the two data types have complementary sensitivities. The body-wave data have good resolution at depth, where we have enough crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution. The surface wave dispersion curves can be retrieved from correlations of the ambient seismic noise, in which case the sampled path distribution does not depend on the earthquake sources. The contributions of the two data types to the inversion are controlled by the weighting of the respective equations. One of the clearest cases where such an approach may be useful is volcanic systems in subduction zones, with their complex magmatic feeding systems that have deep roots in the mantle and intermediate magma chambers in the crust. In these areas, the joint inversion of different types of data helps us to build a comprehensive understanding of the entire system. We apply our algorithm to data collected in the region surrounding the Toba caldera complex (north Sumatra, Indonesia) during two temporary seismic experiments (IRIS, PASSCAL, 1995; GFZ, LAKE TOBA, 2008). We invert 6644 P and 5240 S wave arrivals and ~500 group velocity dispersion curves of Rayleigh and Love waves. We present a series of synthetic tests and real data inversions which show that the joint inversion approach gives more reliable results than the separate inversion of the two data types. Koulakov, I., LOTOS code for local earthquake tomographic inversion: Benchmarks for testing tomographic algorithms, Bull. seism. Soc. Am., 99(1), 194-214, 2009, doi:10.1785/0120080013
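
    A minimal sketch of how the two data types can be weighted and stacked into one linear system, as the abstract describes; matrix names and weights are illustrative, and both data sets are assumed to be expressed on a common model grid:

    ```python
    import numpy as np

    def joint_inversion(G_body, t_body, G_surf, c_surf, w_body=1.0, w_surf=1.0):
        """Stack body-wave traveltime equations and surface-wave dispersion
        equations into one least-squares system; the relative weights control
        each data type's contribution to the solution."""
        A = np.vstack([w_body * G_body, w_surf * G_surf])
        d = np.concatenate([w_body * t_body, w_surf * c_surf])
        m, *_ = np.linalg.lstsq(A, d, rcond=None)
        return m
    ```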

  19. Tomographic phase microscopy: principles and applications in bioimaging [Invited]

    PubMed Central

    Jin, Di; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Tomographic phase microscopy (TPM) is an emerging optical microscopic technique for bioimaging. TPM uses digital holographic measurements of complex scattered fields to reconstruct three-dimensional refractive index (RI) maps of cells with diffraction-limited resolution by solving inverse scattering problems. In this paper, we review the developments of TPM from the fundamental physics to its applications in bioimaging. We first provide a comprehensive description of the tomographic reconstruction physical models used in TPM. The RI map reconstruction algorithms and various regularization methods are discussed. Selected TPM applications for cellular imaging, particularly in hematology, are reviewed. Finally, we examine the limitations of current TPM systems, propose future solutions, and envision promising directions in biomedical research. PMID:29386746

  20. Validation of Special Sensor Ultraviolet Limb Imager (SSULI) Ionospheric Tomography using ALTAIR Incoherent Scatter Radar Measurements

    NASA Astrophysics Data System (ADS)

    Dymond, K.; Nicholas, A. C.; Budzien, S. A.; Stephan, A. W.; Coker, C.; Hei, M. A.; Groves, K. M.

    2015-12-01

    The Special Sensor Ultraviolet Limb Imager (SSULI) instruments are ultraviolet limb scanning sensors flying on the Defense Meteorological Satellite Program (DMSP) satellites. The SSULIs observe the 80-170 nanometer wavelength range, covering emissions at 91 and 136 nm that are produced by radiative recombination in the ionosphere. We invert these emissions tomographically using newly developed algorithms that include optical depth effects due to pure absorption and resonant scattering. We present the details of our approach, including how the optimal altitude and along-track sampling were determined and the newly developed approach we are using for regularizing the SSULI tomographic inversions. Finally, we conclude with validations of the SSULI inversions against ALTAIR incoherent scatter radar measurements and demonstrate excellent agreement between the measurements.

  21. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
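
    A minimal numerical sketch in the same spirit: discretize a finite Hilbert transform on staggered grids and inspect its singular value decomposition. This is a crude quadrature for illustration, not the operators analyzed in the paper:

    ```python
    import numpy as np

    # Finite Hilbert transform on [-1, 1]: (Hf)(x) = (1/pi) p.v. int f(y)/(y-x) dy.
    # Source and target grids are staggered so the kernel is never evaluated
    # at its singularity.
    n = 200
    y = np.linspace(-1, 1, n + 1)[:-1] + 1.0 / n       # source grid (midpoints)
    x = np.linspace(-1, 1, n)                          # target grid
    H = (1.0 / np.pi) * (2.0 / n) / (y[None, :] - x[:, None])
    U, s, Vt = np.linalg.svd(H)
    print(s[:5], s[-5:])   # smallest singular values approach zero: ill-posedness
    ```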

  22. Seismic tomography of the southern California crust based on spectral-element and adjoint methods

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Maggi, Alessia; Tromp, Jeroen

    2010-01-01

    We iteratively improve a 3-D tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3-D model is provided by the Southern California Earthquake Center. The data set comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations. The new crustal model, m16, is described in terms of independent shear (VS) and bulk-sound (VB) wave speed variations. It exhibits strong heterogeneity, including local changes of +/-30 per cent with respect to the initial 3-D model. The model reveals several features that relate to geological observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  23. Adjoint Tomography of the Southern California Crust (Invited)

    NASA Astrophysics Data System (ADS)

    Tape, C.; Liu, Q.; Maggi, A.; Tromp, J.

    2009-12-01

    We iteratively improve a three-dimensional tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3D model is provided by the Southern California Earthquake Center. The dataset comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations and a total of 0.8 million CPU hours. The new crustal model, m16, is described in terms of independent shear (Vs) and bulk-sound (Vb) wavespeed variations. It exhibits strong heterogeneity, including local changes of ±30% with respect to the initial 3D model. The model reveals several features that relate to geologic observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  24. Teleseismic tomography for imaging Earth's upper mantle

    NASA Astrophysics Data System (ADS)

    Aktas, Kadircan

    Teleseismic tomography is an important imaging tool in earthquake seismology, used to characterize lithospheric structure beneath a region of interest. In this study I investigate three different tomographic techniques applied to real and synthetic teleseismic data, with the aim of imaging the velocity structure of the upper mantle. First, by applying well established traveltime tomographic techniques to teleseismic data from southern Ontario, I obtained high-resolution images of the upper mantle beneath the lower Great Lakes. Two salient features of the 3D models are: (1) a patchy, NNW-trending low-velocity region, and (2) a linear, NE-striking high-velocity anomaly. I interpret the high-velocity anomaly as a possible relict slab associated with ca. 1.25 Ga subduction, whereas the low-velocity anomaly is interpreted as a zone of alteration and metasomatism associated with the ascent of magmas that produced the Late Cretaceous Monteregian plutons. The next part of the thesis is concerned with adaptation of existing full-waveform tomographic techniques for application to teleseismic body-wave observations. The method used here is intended to be complementary to traveltime tomography, and to take advantage of efficient frequency-domain methodologies that have been developed for inverting large controlled-source datasets. Existing full-waveform acoustic modelling and inversion codes have been modified to handle plane waves impinging from the base of the lithospheric model at a known incidence angle. A processing protocol has been developed to prepare teleseismic observations for the inversion algorithm. To assess the validity of the acoustic approximation, the processing procedure and modelling-inversion algorithm were tested using synthetic seismograms computed using an elastic Kirchhoff integral method. These tests were performed to evaluate the ability of the frequency-domain full-waveform inversion algorithm to recover topographic variations of the Moho under a variety of realistic scenarios. Results show that frequency-domain full-waveform tomography is generally successful in recovering both sharp and discontinuous features. Thirdly, I developed a new method for creating an initial background velocity model for the inversion algorithm, which is sufficiently close to the true model so that convergence is likely to be achieved. I adapted a method named Deformable Layer Tomography (DLT), which adjusts interfaces between layers rather than velocities within cells. I applied this method to a simple model comprising a single uniform crustal layer and a constant-velocity mantle, separated by an irregular Moho interface. A series of tests was performed to evaluate the sensitivity of the DLT algorithm; the results show that my algorithm produces useful results within a realistic range of incident-wave obliquity, incidence angle and signal-to-noise level. Keywords. Teleseismic tomography, full waveform tomography, deformable layer tomography, lower Great Lakes, crust and upper mantle.

  25. Frequency-domain optical tomographic image reconstruction algorithm with the simplified spherical harmonics (SP3) light propagation model.

    PubMed

    Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H

    2017-06-01

    We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficients (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and size that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.
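
    A minimal sketch of the two numerical workhorses named above, a restarted GMRES forward solve wrapped in an L-BFGS parameter search; the tridiagonal toy operator below is a stand-in for the discretized FD-SP3 system, and all names and values are illustrative:

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres
    from scipy.optimize import minimize

    n = 64
    d_obs = np.full(n, 0.5)                      # synthetic "measured" field

    def forward(mu):
        # toy operator: a tridiagonal system whose solution depends on mu[0]
        A = diags([-1.0, 2.0 + mu[0], -1.0], [-1, 0, 1],
                  shape=(n, n), format="csr")
        u, info = gmres(A, np.ones(n), restart=30, atol=1e-10)
        return u

    def objective(mu):
        r = forward(mu) - d_obs                  # data misfit
        return 0.5 * float(r @ r)

    res = minimize(objective, x0=np.array([0.2]), method="L-BFGS-B",
                   bounds=[(1e-3, 10.0)])        # bounded parameter retrieval
    print(res.x)
    ```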

  26. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
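
    A minimal sketch of the alternating structure of such a framework: reconstruct with an algebraic method, re-project, and update per-projection shifts by cross-correlation. `project` and `backproject_sirt` are user-supplied stand-ins for a chosen geometry, and only integer detector shifts are handled here:

    ```python
    import numpy as np

    def align_and_reconstruct(projections, project, backproject_sirt, n_outer=5):
        """Alternate reconstruction and alignment: shift-correct the data,
        reconstruct, re-project, and re-estimate shifts from the residual
        misregistration between measured and modeled projections."""
        n_proj, n_det = projections.shape
        shifts = np.zeros(n_proj, dtype=int)
        for _ in range(n_outer):
            aligned = np.array([np.roll(p, -s)
                                for p, s in zip(projections, shifts)])
            image = backproject_sirt(aligned)     # algebraic reconstruction step
            model = project(image)                # re-projection step
            for i in range(n_proj):
                xc = np.correlate(projections[i], model[i], mode="same")
                shifts[i] = int(np.argmax(xc)) - n_det // 2
        return image, shifts
    ```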

  27. Fast tomographic methods for the tokamak ISTTOK

    NASA Astrophysics Data System (ADS)

    Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.

    2008-04-01

    The achievement of long duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. The plasma position determination based on the magnetic probes system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms to fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.

  28. Three-dimensional optical tomographic imaging of supersonic jets through inversion of phase data obtained through the transport-of-intensity equation.

    PubMed

    Hemanth, Thayyullathil; Rajesh, Langoju; Padmaram, Renganathan; Vasu, R Mohan; Rajan, Kanjirodan; Patnaik, Lalit M

    2004-07-20

    We report experimental results of quantitative imaging in supersonic circular jets using a monochromatic light probe. An expanding cone of light interrogates a three-dimensional volume of a supersonic steady-state flow from a circular jet. The distortion caused to the spherical wave by the presence of the jet is determined by measuring the normal transport of intensity. A cone-beam tomographic algorithm is used to invert the wave-front distortion to changes in refractive index introduced by the flow. The refractive index is converted into density, whose cross sections reveal shock and other characteristics of the flow.

  29. Anisotropic S-wave velocity structure from joint inversion of surface wave group velocity dispersion: A case study from India

    NASA Astrophysics Data System (ADS)

    Mitra, S.; Dey, S.; Siddartha, G.; Bhattacharya, S.

    2016-12-01

    We estimate 1-dimensional path average fundamental mode group velocity dispersion curves from regional Rayleigh and Love waves sampling the Indian subcontinent. The path average measurements are combined through a tomographic inversion to obtain 2-dimensional group velocity variation maps between periods of 10 and 80 s. The region of study is parametrised as triangular grids with 1° sides for the tomographic inversion. Rayleigh and Love wave dispersion curves from each node point are subsequently extracted and jointly inverted to obtain a radially anisotropic shear wave velocity model through global optimisation using a Genetic Algorithm. The parametrisation of the model space is done using three crustal layers and four mantle layers over a half-space with varying VpH, VsV and VsH. The anisotropic parameter (η) is calculated from empirical relations and the density of the layers is taken from PREM. Misfit for the model is calculated as an error-weighted sum over the dispersion curves. The 1-dimensional anisotropic shear wave velocity at each node point is combined using linear interpolation to obtain 3-dimensional structure beneath the region. Synthetic tests are performed to estimate the resolution of the tomographic maps, which will be presented with our results. We plan to extend this to a larger dataset in the near future to obtain high-resolution anisotropic shear wave velocity structure beneath India, the Himalaya and Tibet.

  30. Finite frequency shear wave splitting tomography: a model space search approach

    NASA Astrophysics Data System (ADS)

    Mondal, P.; Long, M. D.

    2017-12-01

    Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for constraining upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
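
    A minimal sketch of a Metropolis-Hastings search over the three-parameter anisotropy description (strength, dip, azimuth of the fast axis) for a single model block; `predict` is a user-supplied stand-in for the Born-kernel forward calculation, and all step sizes and noise levels are illustrative:

    ```python
    import numpy as np

    def metropolis_splitting(splitting_intensity, predict, n_steps=20000,
                             step=np.array([0.05, 5.0, 5.0]), sigma=0.1):
        """Sample the posterior over (strength, dip, azimuth) given observed
        splitting intensities, assuming Gaussian data errors of width sigma."""
        rng = np.random.default_rng(1)
        m = np.array([0.05, 45.0, 0.0])          # initial strength, dip, azimuth

        def loglike(model):
            r = splitting_intensity - predict(model)
            return -0.5 * np.sum((r / sigma) ** 2)

        ll = loglike(m)
        samples = []
        for _ in range(n_steps):
            prop = m + step * rng.standard_normal(3)   # random-walk proposal
            ll_prop = loglike(prop)
            if np.log(rng.random()) < ll_prop - ll:    # accept/reject
                m, ll = prop, ll_prop
            samples.append(m.copy())
        return np.array(samples)
    ```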

  31. Canopy Height and Vertical Structure from Multibaseline Polarimetric InSAR: First Results of the 2016 NASA/ESA AfriSAR Campaign

    NASA Astrophysics Data System (ADS)

    Lavalle, M.; Hensley, S.; Lou, Y.; Saatchi, S. S.; Pinto, N.; Simard, M.; Fatoyinbo, T. E.; Duncanson, L.; Dubayah, R.; Hofton, M. A.; Blair, J. B.; Armston, J.

    2016-12-01

    In this paper we explore the derivation of canopy height and vertical structure from polarimetric-interferometric SAR (PolInSAR) data collected during the 2016 AfriSAR campaign in Gabon. AfriSAR is a joint effort between NASA and ESA to acquire multi-baseline L- and P-band radar data, lidar data and field data over tropical forest and savannah sites to support calibration, validation and algorithm development in preparation for the NISAR, GEDI and BIOMASS missions. Here we focus on the L-band UAVSAR dataset acquired over the Lope National Park in Central Gabon to demonstrate mapping of canopy height and vertical structure using PolInSAR and tomographic techniques. The Lope site features a natural gradient of forest biomass from the forest-savanna boundary (< 100 Mg/ha) to dense undisturbed humid tropical forests (> 400 Mg/ha). Our dataset includes 9 long-baseline, full-polarimetric UAVSAR acquisitions along with field and lidar data from the Laser Vegetation Ice Sensor (LVIS). We first present a brief theoretical background of the PolInSAR and tomographic techniques. We then show the results of our PolInSAR algorithms to create maps of canopy height generated via inversion of the random-volume-over-ground (RVoG) and random-motion-over-ground (RMoG) models. In our approach multiple interferometric baselines are merged incoherently to maximize the interferometric sensitivity over a broad range of tree heights. Finally we show how traditional tomographic algorithms are used for the retrieval of the full vertical canopy profile. We compare our results from the different PolInSAR/tomographic algorithms to validation data derived from lidar and field data.

  32. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
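
    A schematic sketch of two steps described above, summing event kernels into the misfit kernel and projecting onto basis functions to form the gradient, plus one conjugate-gradient direction update; array layouts are illustrative assumptions, not the paper's SEM implementation:

    ```python
    import numpy as np

    def misfit_gradient(event_kernels, basis):
        """Sum per-earthquake event kernels into the misfit kernel, then
        project onto the model basis functions to obtain the Frechet
        derivative (gradient) used by the minimization.

        event_kernels : (n_events, ny, nx) precomputed sensitivity kernels
        basis         : (n_basis, ny, nx) orthonormal basis functions
        """
        misfit_kernel = np.sum(event_kernels, axis=0)
        return basis.reshape(len(basis), -1) @ misfit_kernel.ravel()

    def nonlinear_cg_direction(grad, grad_prev, dirn_prev):
        """One Polak-Ribiere nonlinear conjugate-gradient direction update."""
        beta = max(0.0, grad @ (grad - grad_prev) / (grad_prev @ grad_prev))
        return -grad + beta * dirn_prev
    ```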

  33. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be found reliably. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
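
    A minimal sketch of a constrained fit in the same spirit, using SciPy's bounded trust-region least-squares solver as a stand-in for the paper's modified Levenberg-Marquardt scheme (SciPy's plain Levenberg-Marquardt method does not accept bounds); the geometry and data are synthetic and all names are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    n_nodes, n_chords = 10, 40
    rng = np.random.default_rng(2)
    G = rng.random((n_chords, n_nodes))           # hypothetical chord -> node weights
    signal = G @ np.linspace(1.0, 0.0, n_nodes)   # synthetic line-integrated data

    def residual(nodes):
        # misfit between modeled and measured line-integrated emission
        return G @ nodes - signal

    fit = least_squares(residual, x0=np.full(n_nodes, 0.5),
                        bounds=(0.0, np.inf))     # enforce non-negative emissivity
    print(fit.x)
    ```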

  34. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be found reliably. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  15. Seismic structure of the upper crust in the Albertine Rift from travel-time and ambient-noise tomography - a comparison

    NASA Astrophysics Data System (ADS)

    Jakovlev, Andrey; Kaviani, Ayoub; Ruempker, Georg

    2017-04-01

    Here we present results of an investigation of the upper crust in the Albertine Rift around the Rwenzori Mountains. We use a data set collected from a temporary network of 33 broadband stations operated by the RiftLink research group between September 2009 and August 2011. During this period, 82639 P-wave and 73408 S-wave travel times from 12419 local and regional earthquakes were registered. This presents a very rare opportunity to apply both local travel-time and ambient-noise tomography to analyze data from the same network. For the local travel-time tomographic inversion the LOTOS algorithm (Koulakov, 2009) was used. The algorithm performs iterative simultaneous inversions for 3D models of P- and S-velocity anomalies in combination with earthquake locations and origin times. 28955 P- and S-wave picks from 2769 local earthquakes were used. To estimate the resolution and stability of the results, a number of synthetic and real data tests were performed. To perform the ambient-noise tomography we use the following procedure. First, we follow the standard procedure described by Bensen et al. (2007), as modified by Boué et al. (2014), to compute the vertical-component cross-correlation functions between all pairs of stations. We also adapt the algorithm introduced by Boué et al. (2014) and use the WHISPER software package (Briand et al., 2013) to preprocess individual daily vertical-component waveforms. In the next step, for each period, we use the method of Barmin et al. (2001) to invert the dispersion measurements along each path for group velocity tomographic maps. Finally, we adapt a modified version of the algorithm suggested by Macquet et al. (2014) to invert the group velocity maps for shear velocity structure. We apply several tests, which show that the best resolution is obtained at a period of 8 seconds, which corresponds to a depth of approximately 6 km. Models of the seismic structure obtained by the two methods agree well at shallow depths of about 5 km. Low velocities surround the mountain range on the western and southern sides and coincide with the location of the rift valley. The Rwenzori Mountains themselves and the eastern rift shoulder are represented by increased velocities. At greater depths of 10-15 km, some differences between the models are observed. Thus, beneath the Rwenzoris the travel-time tomography shows low S-velocities, whereas the ambient-noise tomography exhibits high S-velocities. This can possibly be explained by the fact that the ambient-noise tomography is characterized by higher vertical resolution. Also, the number of rays used for the tomographic inversion in the ambient-noise tomography is significantly smaller. This study was partly supported by Russian Science Foundation grant #14-17-00430. References: Barmin, M.P., Ritzwoller, M.H. & Levshin, A.L., 2001. A fast and reliable method for surface wave tomography, Pure Appl. Geophys., 158, 1351-1375. Bensen, G.D., Ritzwoller, M.H., Barmin, M.P., Levshin, A.L., Lin, F., Moschetti, M.P., Shapiro, N.M. & Yang, Y., 2007. Processing seismic ambient noise data to obtain reliable broad-band surface wave dispersion measurements, Geophys. J. Int., 169, 1239-1260, doi:10.1111/j.1365-246X.2007.03374.x. Boué, P., Poli, P., Campillo, M. & Roux, P., 2014. Reverberations, coda waves and ambient noise: correlations at the global scale and retrieval of the deep phases, Earth Planet. Sci. Lett., 391, 137-145. Briand, X., Campillo, M., Brenguier, F., Boué, P., Poli, P., Roux, P. & Takeda, T., 2013. Processing of terabytes of data for seismic noise analysis with the Python codes of the Whisper Suite, AGU Fall Meeting, San Francisco, CA, 9-13 December, Abstract IN51B-1544. Koulakov, I., 2009. LOTOS code for local earthquake tomographic inversion: benchmarks for testing tomographic algorithms, Bull. Seismol. Soc. Am., 99, 194-214, doi:10.1785/0120080013.
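
    As a toy illustration of the noise cross-correlation step (independent of the WHISPER package), the sketch below one-bit normalizes daily records from two stations and stacks their daily cross-correlations; the synthetic "common signal with delay" stands in for a traveling surface wave, and all parameters are illustrative.

        # Sketch: daily one-bit noise cross-correlation and stacking (toy
        # version of the Bensen et al. (2007)-style workflow cited above).
        import numpy as np

        rng = np.random.default_rng(1)
        fs, nsec, ndays, delay = 10, 3600, 30, 25   # Hz, seconds, days, samples

        npts = fs * nsec
        stack = np.zeros(npts)
        for _ in range(ndays):
            common = rng.normal(size=npts)          # wavefield seen by both stations
            tr1 = common + 0.5 * rng.normal(size=npts)
            tr2 = np.roll(common, delay) + 0.5 * rng.normal(size=npts)
            tr1, tr2 = np.sign(tr1), np.sign(tr2)   # one-bit normalization
            # Cross-correlation via FFT; the peak lag estimates the travel time.
            stack += np.real(np.fft.ifft(np.fft.fft(tr1).conj() * np.fft.fft(tr2)))

        lag = int(np.argmax(stack))
        print("recovered delay (samples):", lag if lag < npts // 2 else lag - npts)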

  16. Tomographic inversion of satellite photometry. II

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1985-01-01

    A method for combining nadir observations of emission features in the upper atmosphere with the result of a tomographic inversion of limb brightness measurements is presented. Simulated and actual results are provided, and error sensitivity is investigated.

  17. TomoPhantom, a software package to generate 2D-4D analytical phantoms for CT image reconstruction algorithm benchmarks

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.

    2018-01-01

    In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, and rectangles, as well as volumetric extensions of them. The newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing, free from the "inverse crime". All core modules of the package are written in the C-OpenMP language, and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
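
    To make the "additive analytical objects" idea concrete, here is a small NumPy sketch that builds a 2D phantom from a Gaussian plus an ellipse and computes a parallel-beam sinogram numerically by rotating and summing. This is a generic illustration, not the TomoPhantom API, and grid sizes and object parameters are invented.

        # Sketch: additive 2D phantom and a numerical parallel-beam sinogram.
        import numpy as np
        from scipy.ndimage import rotate

        n = 128
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]

        phantom = np.exp(-((x - 0.2) ** 2 + (y + 0.1) ** 2) / 0.02)      # Gaussian
        phantom += 0.5 * ((((x + 0.3) / 0.4) ** 2 + (y / 0.25) ** 2) < 1)  # ellipse

        angles = np.linspace(0.0, 180.0, 90, endpoint=False)
        sinogram = np.stack(
            [rotate(phantom, a, reshape=False, order=1).sum(axis=0) for a in angles]
        )
        print("sinogram shape (angles x detector bins):", sinogram.shape)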

  18. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    NASA Astrophysics Data System (ADS)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

    Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by a Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on the tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In synthetic tests with a contrived model that has anomalies at different scales but an uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show features similar to the results of the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests, we demonstrate that our new approach is useful for achieving spatially varying resolution when used in regions with heterogeneous ray path distribution.
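
    The spatially varying smoothing can be illustrated with a nonstationary Gaussian covariance whose correlation length at each node shrinks where ray density is high. The closed form below is the 1D Paciorek-Schervish construction, used as a stand-in for the authors' exact prior; the grid, density field, and length bounds are assumptions.

        # Sketch: Gaussian prior model covariance with spatially varying
        # correlation length tied to ray-path density (Paciorek-Schervish form).
        import numpy as np

        x = np.linspace(0.0, 100.0, 101)               # model nodes (km)
        density = np.exp(-((x - 30.0) / 15.0) ** 2)    # hypothetical ray density, 0..1

        Lmin, Lmax, sigma = 5.0, 25.0, 0.1             # km, km, prior std
        Lcorr = Lmax - (Lmax - Lmin) * density         # short lengths where rays are dense

        Li, Lj = np.meshgrid(Lcorr, Lcorr, indexing="ij")
        d2 = (x[:, None] - x[None, :]) ** 2
        # Nonstationary squared-exponential covariance (positive definite by construction).
        C = sigma**2 * np.sqrt(2 * Li * Lj / (Li**2 + Lj**2)) \
            * np.exp(-d2 / ((Li**2 + Lj**2) / 2))
        print("prior covariance:", C.shape, "diagonal value:", C[0, 0])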

  19. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. Using the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
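
    The soft-thresholding iteration the abstract refers to can be sketched as ISTA with an orthonormal wavelet transform. The measurement matrix below is a random stand-in for the X-ray projection operator, and the threshold is fixed by hand rather than by the controlled-sparsity rule of the paper.

        # Sketch: iterative soft-thresholding (ISTA) with wavelet-domain sparsity.
        import numpy as np
        import pywt

        rng = np.random.default_rng(2)
        n = 32                                         # image is n x n
        A = rng.normal(size=(600, n * n)) / np.sqrt(600)  # stand-in projection matrix

        img = np.zeros((n, n)); img[8:20, 10:22] = 1.0    # piecewise-constant target
        y = A @ img.ravel()

        Lip = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of grad 0.5||Ax-y||^2
        mu = 0.01                          # soft-threshold level (hand-tuned here)

        x = np.zeros(n * n)
        for _ in range(100):
            x = x + (A.T @ (y - A @ x)) / Lip           # gradient step
            coeffs = pywt.wavedec2(x.reshape(n, n), "haar", level=3)
            arr, slices = pywt.coeffs_to_array(coeffs)
            # Shrink wavelet coefficients (approximation included, for brevity).
            arr = pywt.threshold(arr, mu / Lip, mode="soft")
            x = pywt.waverec2(
                pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "haar"
            ).ravel()
        print("relative error:", np.linalg.norm(x - img.ravel()) / np.linalg.norm(img))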

  20. Ionospheric-thermospheric UV tomography: 2. Comparison with incoherent scatter radar measurements

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Nicholas, A. C.; Budzien, S. A.; Stephan, A. W.; Coker, C.; Hei, M. A.; Groves, K. M.

    2017-03-01

    The Special Sensor Ultraviolet Limb Imager (SSULI) instruments are ultraviolet limb-scanning sensors that fly on the Defense Meteorological Satellite Program F16-F19 satellites. The SSULIs cover the 80-170 nm wavelength range, which contains emissions at 91 and 136 nm produced by radiative recombination in the ionosphere. We invert the 91.1 nm emission tomographically using a newly developed algorithm that includes optical depth effects due to pure absorption and resonant scattering. We present the details of our approach, including how the optimal altitude and along-track sampling were determined and the newly developed approach we are using for regularizing the SSULI tomographic inversions. Finally, we conclude with validations of the SSULI inversions against Advanced Research Project Agency Long-range Tracking and Identification Radar (ALTAIR) incoherent scatter radar measurements and demonstrate excellent agreement between the measurements. As part of this study, we include the effects of pure absorption by O2, N2, and O in the inversions and find that the best agreement between the ALTAIR and SSULI measurements is obtained when only O2 and O are included, but the agreement degrades when N2 absorption is included. This suggests that the absorption cross section of N2 needs to be reinvestigated near 91.1 nm.

  1. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of the event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies that a much greater number of phases per seismogram can be used compared to classical tomography, in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.
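
    A toy NumPy sketch of the kernel bookkeeping described above: per-event sensitivity kernels are summed into a misfit kernel, projected onto basis functions to form the gradient, and the model is updated by a descent step (plain steepest descent here; the paper uses conjugate gradients). All arrays are illustrative stand-ins for SEM-computed kernels.

        # Sketch: assembling a misfit kernel from event kernels and taking a
        # gradient step on basis-function coefficients.
        import numpy as np

        rng = np.random.default_rng(3)
        nx, nevents, nbasis = 200, 12, 10

        event_kernels = rng.normal(size=(nevents, nx)) * 1e-3  # one kernel per event
        misfit_kernel = event_kernels.sum(axis=0)              # summed sensitivity

        # Orthonormal basis functions embedded in the solver (here: cosines).
        xgrid = np.linspace(0, 1, nx)
        B = np.array([np.cos(np.pi * k * xgrid) for k in range(nbasis)])
        B /= np.linalg.norm(B, axis=1, keepdims=True)

        grad = B @ misfit_kernel    # gradient w.r.t. basis coefficients
        coeffs = np.zeros(nbasis)
        coeffs -= 0.5 * grad        # steepest-descent update (CG in practice)
        model_update = coeffs @ B
        print("model update norm:", np.linalg.norm(model_update))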

  2. Seismic tomographic imaging of P- and S-waves velocity perturbations in the upper mantle beneath Iran

    NASA Astrophysics Data System (ADS)

    Alinaghi, Alireza; Koulakov, Ivan; Thybo, Hans

    2007-06-01

    The inverse tomography method has been used to study the P- and S-wave velocity structure of the crust and upper mantle underneath Iran. The method, based on the principle of source-receiver reciprocity, allows for tomographic studies of regions with a sparse distribution of seismic stations if the region has sufficient seismicity. The arrival times of body waves from earthquakes in the study area, as reported in the ISC catalogue (1964-1996) at all available epicentral distances, are used for calculation of residual arrival times. Prior to inversion we relocated hypocentres based on a 1-D spherical earth model, taking into account variable crustal thickness and surface topography. During the inversion, seismic sources are further relocated simultaneously with the calculation of velocity perturbations. With a series of synthetic tests we demonstrate the power of the algorithm and the data to reconstruct introduced anomalies using the ray paths of the real data set and taking into account the measurement errors and outliers. The velocity anomalies show that the crust and upper mantle beneath the Iranian Plateau comprise a low velocity domain between the Arabian Plate and the Caspian Block. This is in agreement with global tomographic models, and also tectonic models, in which the active Iranian Plateau is trapped between the stable Turan Plate in the north and the Arabian Shield in the south. Our results show clear evidence of the mainly aseismic subduction of the oceanic crust of the Oman Sea underneath the Iranian Plateau. However, along the Zagros suture zone, the subduction pattern is more complex than at Makran, where the collision of the two plates is highly seismic.

  3. Tracking tracer breakthrough in the hyporheic zone using time‐lapse DC resistivity, Crabby Creek, Pennsylvania

    USGS Publications Warehouse

    Nyquist, Jonathan E.; Toran, Laura; Fang, Allison C.; Ryan, Robert J.; Rosenberry, Donald O.

    2010-01-01

    Characterization of the hyporheic zone is of critical importance for understanding stream ecology, contaminant transport, and groundwater‐surface water interaction. A salt water tracer test was used to probe the hyporheic zone of a recently re‐engineered portion of Crabby Creek, a stream located near Philadelphia, PA. The tracer solution was tracked through a 13.5 meter segment of the stream using both a network of 25 wells sampled every 5–15 minutes and time‐lapse electrical resistivity tomographs collected every 11 minutes for six hours, with additional tomographs collected every 100 minutes for an additional 16 hours. The comparison of tracer monitoring methods is of keen interest because tracer tests are one of the few techniques available for characterizing this dynamic zone, and logistically it is far easier to collect resistivity tomographs than to install and monitor a dense network of wells. Our results show that resistivity monitoring captured the essential shape of the breakthrough curve and may indicate portions of the stream where the tracer lingered in the hyporheic zone. Time‐lapse resistivity measurements, however, represent time averages over the period required to collect a tomographic data set, and spatial averages over a volume larger than captured by a well sample. Smoothing by the resistivity data inversion algorithm further blurs the resulting tomograph; consequently resistivity monitoring underestimates the degree of fine‐scale heterogeneity in the hyporheic zone.

  4. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research effort due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide unprecedented opportunities for the diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. The extreme learning machine is cited here as an example for demonstrative purposes simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to be useful also for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.
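
    An extreme learning machine is simple enough to sketch in full: a fixed random hidden layer followed by a least-squares solve for the output weights. Below it learns a toy mapping from "projection data" to "reconstructions"; the data shapes and training pairs are illustrative stand-ins for reconstructions produced off-line by an iterative solver.

        # Sketch: extreme learning machine as a surrogate for iterative
        # tomographic reconstruction (random hidden layer + linear readout).
        import numpy as np

        rng = np.random.default_rng(4)
        n_proj, n_pix, n_hidden, n_train = 64, 256, 500, 2000

        # Toy linear imaging system, used only to fabricate training pairs; in
        # practice the targets would come from an iterative reconstruction code.
        A = rng.normal(size=(n_proj, n_pix))
        X_img = np.maximum(0, rng.normal(size=(n_train, n_pix)))  # training images
        Y_proj = X_img @ A.T                                      # their projections

        W = rng.normal(size=(n_proj, n_hidden))   # fixed random input weights
        b = rng.normal(size=n_hidden)
        H = np.tanh(Y_proj @ W + b)               # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, X_img, rcond=None)  # readout weights

        test = np.maximum(0, rng.normal(size=n_pix))
        recon = np.tanh((A @ test) @ W + b) @ beta        # fast surrogate inversion
        print("test correlation:", np.corrcoef(recon, test)[0, 1])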

  5. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics.

    PubMed

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research effort due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide unprecedented opportunities for the diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. The extreme learning machine is cited here as an example for demonstrative purposes simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to be useful also for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.

  6. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
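
    The regularization tradeoff described here can be sketched as a Tikhonov problem with a prior (background) model, scanning the regularization weight and recording data misfit versus model norm (an L-curve). The sensitivity matrix, prior model, and smoothing operator below are illustrative.

        # Sketch: regularized inversion with a prior model and a scan over the
        # regularization weight (data misfit vs. deviation-from-prior norm).
        import numpy as np

        rng = np.random.default_rng(5)
        nobs, npar = 80, 60
        G = rng.normal(size=(nobs, npar))          # stand-in sensitivity matrix
        m_true = np.cumsum(rng.normal(size=npar)) * 0.1
        d = G @ m_true + rng.normal(0, 0.5, nobs)  # noisy tomographic data
        m0 = np.zeros(npar)                        # prior / background model

        R = np.eye(npar) - np.eye(npar, k=1)       # first-difference smoother
        for alpha in [0.01, 0.1, 1.0, 10.0]:
            lhs = G.T @ G + alpha**2 * (R.T @ R)
            m = m0 + np.linalg.solve(lhs, G.T @ (d - G @ m0))
            misfit = np.linalg.norm(d - G @ m)
            mnorm = np.linalg.norm(R @ (m - m0))
            print(f"alpha={alpha:5.2f}  misfit={misfit:7.2f}  model norm={mnorm:7.2f}")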

  7. DIRECT OBSERVATION OF SOLAR CORONAL MAGNETIC FIELDS BY VECTOR TOMOGRAPHY OF THE CORONAL EMISSION LINE POLARIZATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramar, M.; Lin, H.; Tomczyk, S., E-mail: kramar@cua.edu, E-mail: lin@ifa.hawaii.edu, E-mail: tomczyk@ucar.edu

    We present the first direct “observation” of the global-scale, 3D coronal magnetic fields of Carrington Rotation (CR) Cycle 2112 using vector tomographic inversion techniques. The vector tomographic inversion uses measurements of the Fe xiii 10747 Å Hanle effect polarization signals by the Coronal Multichannel Polarimeter (CoMP) and 3D coronal density and temperature derived from scalar tomographic inversion of Solar Terrestrial Relations Observatory (STEREO)/Extreme Ultraviolet Imager (EUVI) coronal emission line (CEL) intensity images as inputs to derive a coronal magnetic field model that best reproduces the observed polarization signals. While independent verifications of the vector tomography results cannot be performed, we compared the tomography-inverted coronal magnetic fields with those constructed by magnetohydrodynamic (MHD) simulations based on observed photospheric magnetic fields of CR 2112 and 2113. We found that the MHD model for CR 2112 is qualitatively consistent with the tomography-inverted result for most of the reconstruction domain except for several regions. In particular, for one of the most noticeable regions, we found that the MHD simulation for CR 2113 predicted a model that more closely resembles the vector tomography-inverted magnetic fields. In another case, our tomographic reconstruction predicted an open magnetic field at a region where a coronal hole can be seen directly in a STEREO-B/EUVI image. We discuss the utilities and limitations of the tomographic inversion technique, and present ideas for future developments.

  8. Including Short Period Constraints In the Construction of Full Waveform Tomographic Models

    NASA Astrophysics Data System (ADS)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2015-12-01

    Thanks to the introduction of the Spectral Element Method (SEM) in seismology, which allows accurate computation of the seismic wavefield in complex media, the resolution of regional and global tomographic models has improved in recent years. However, due to computational costs, only long-period waveforms are considered, and only long-wavelength structure can be constrained. Thus, the resulting 3D models are smooth and only represent a small volumetric perturbation around a smooth reference model that does not include upper-mantle discontinuities (e.g. MLD, LAB). Extending the computations to shorter periods, necessary for the resolution of smaller-scale features, is computationally challenging. In order to overcome these limitations and to account for layered structure in the upper mantle in our full waveform tomography, we include information provided by short-period seismic observables (receiver functions and surface wave dispersion), sensitive to sharp boundaries and anisotropic structure, respectively. In a first step, receiver functions and dispersion curves are used to generate a number of 1D radially anisotropic shear velocity profiles using a trans-dimensional Markov-chain Monte Carlo (MCMC) algorithm. These 1D profiles include both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth) beneath selected stations and are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built after 1) interpolation between the available 1D profiles, and 2) homogenization of the layered 1D models to obtain an equivalent smooth 3D starting model in the period range of interest for waveform inversion. The waveforms used in the inversion are collected for paths contained in the region of study and filtered at periods longer than 40 s. We use the spectral element code "RegSEM" (Cupillard et al., 2012) for forward computations and a quasi-Newton inversion approach in which kernels are computed using normal mode perturbation theory. We present here the first results of such an approach after successive iterations of a full waveform tomography of the North American continent.

  9. Radial reflection diffraction tomography

    DOEpatents

    Lehman, Sean K.

    2012-12-18

    A wave-based tomographic imaging method and apparatus based upon one or more rotating, radially outward oriented transmitting and receiving elements have been developed for non-destructive evaluation. At successive angular locations at a fixed radius, a predetermined transmitting element can launch a primary field and one or more predetermined receiving elements can collect the backscattered field in a "pitch/catch" operation. A Hilbert space inverse wave (HSIW) algorithm can construct images of the received scattered energy waves using operating modes chosen for a particular application. Applications include improved intravascular imaging, borehole tomography, and non-destructive evaluation (NDE) of parts having existing access holes.

  10. Radial Reflection diffraction tomography

    DOEpatents

    Lehman, Sean K

    2013-11-19

    A wave-based tomographic imaging method and apparatus based upon one or more rotating, radially outward oriented transmitting and receiving elements have been developed for non-destructive evaluation. At successive angular locations at a fixed radius, a predetermined transmitting element can launch a primary field and one or more predetermined receiving elements can collect the backscattered field in a "pitch/catch" operation. A Hilbert space inverse wave (HSIW) algorithm can construct images of the received scattered energy waves using operating modes chosen for a particular application. Applications include improved intravascular imaging, borehole tomography, and non-destructive evaluation (NDE) of parts having existing access holes.

  11. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  12. Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor

    DTIC Science & Technology

    2010-01-31

    propagation in three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to...self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to...determine the moment tensors within minutes after a seismic event, making it possible for real-time monitoring using 3D models.

  13. P and S velocity structure of the crust and the upper mantle beneath central Java from local tomography inversion

    NASA Astrophysics Data System (ADS)

    Koulakov, I.; Bohm, M.; Asch, G.; Lühr, B.-G.; Manzanares, A.; Brotopuspito, K. S.; Fauzi, Pak; Purbawinata, M. A.; Puspito, N. T.; Ratdomopurbo, A.; Kopp, H.; Rabbel, W.; Shevkunova, E.

    2007-08-01

    Here we present the results of a local source tomographic inversion beneath central Java. The data set was collected by a temporary seismic network. More than 100 stations were operated for almost half a year. About 13,000 P and S arrival times from 292 events were used to obtain three-dimensional (3-D) Vp, Vs, and Vp/Vs models of the crust and the mantle wedge beneath central Java. Source location and determination of the 3-D velocity models were performed simultaneously based on a new iterative tomographic algorithm, LOTOS-06. Final event locations clearly image the shape of the subduction zone beneath central Java. The dipping angle of the slab increases gradually from almost horizontal to about 70°. A double seismic zone is observed in the slab between 80 and 150 km depth. The most striking feature of the resulting P and S models is a pronounced low-velocity anomaly in the crust, just north of the volcanic arc (the Merapi-Lawu anomaly, MLA). An algorithm for estimation of the amplitude value, which is presented in the paper, shows that the difference between the fore-arc and MLA velocities at a depth of 10 km reaches 30% and 36% in the P and S models, respectively. The value of the Vp/Vs ratio inside the MLA is more than 1.9. This suggests a probable high content of fluids and partial melts within the crust. In the upper mantle we observe an inclined low-velocity anomaly which links the cluster of seismicity at 100 km depth with the MLA. This anomaly might reflect ascending paths of fluids released from the slab. The reliability of all these patterns was tested thoroughly.

  14. An efficient algorithm for double-difference tomography and location in heterogeneous media, with an application to the Kilauea volcano

    USGS Publications Warehouse

    Monteiller, V.; Got, J.-L.; Virieux, J.; Okubo, P.

    2005-01-01

    Improving our understanding of crustal processes requires a better knowledge of the geometry and position of geological bodies. In this study we have designed a method based upon double-difference relocation and tomography to image, as accurately as possible, a heterogeneous medium containing seismogenic objects. Our approach consisted not only in incorporating double differences into tomography but also in partly revisiting tomographic schemes, choosing accurate and stable numerical strategies adapted to the use of cross-spectral time delays. We used a finite-difference solution to the eikonal equation for travel time computation and a Tarantola-Valette approach for both the classical and double-difference three-dimensional tomographic inversions to find accurate earthquake locations and seismic velocity estimates. We efficiently estimated the square root of the inverse model covariance matrix in the case of a Gaussian correlation function. This allows the use of correlation length and a priori model variance criteria to determine the optimal solution. Double-difference relocation of similar earthquakes is performed in the optimal velocity model, making absolute and relative locations less biased by the velocity model. Double-difference tomography is achieved by using high-accuracy time delay measurements. These algorithms have been applied to earthquake data recorded in the vicinity of the Kilauea and Mauna Loa volcanoes for imaging the volcanic structures. Stable and detailed velocity models are obtained: the regional tomography unambiguously highlights the structure of the island of Hawaii, and the double-difference tomography shows a detailed image of the southern Kilauea caldera-upper east rift zone magmatic complex. Copyright 2005 by the American Geophysical Union.

  15. Intensity-enhanced MART for tomographic PIV

    NASA Astrophysics Data System (ADS)

    Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun

    2016-05-01

    A novel technique to shrink the elongated particles and suppress the ghost particles in particle reconstruction for tomographic particle image velocimetry is presented. This method, named intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity suppressing factor to improve the quality of the particle reconstruction and consequently the precision of the velocimetry. A numerical assessment of vortex ring motion with and without image noise is performed to evaluate the new algorithm in terms of reconstruction, particle elongation, and velocimetry. The simulation is performed at seven different seeding densities. The comparison of spatial-filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate the ghosts from actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the application of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.
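
    For reference, the core multiplicative update that IntE-MART modifies is standard MART; a minimal NumPy version is below, with a random nonnegative system standing in for the camera weighting matrix. The inverse-diffusion and intensity-suppression steps of IntE-MART are not reproduced.

        # Sketch: plain MART update (the baseline that IntE-MART builds on).
        import numpy as np

        rng = np.random.default_rng(6)
        nray, nvox = 120, 100
        A = rng.random((nray, nvox)) * (rng.random((nray, nvox)) < 0.1)  # weights
        f_true = rng.random(nvox) * (rng.random(nvox) < 0.05)            # particles
        b = A @ f_true                                                   # intensities

        f = np.ones(nvox)     # strictly positive initial field
        mu = 1.0              # relaxation exponent
        for _ in range(20):
            for i in range(nray):
                proj = A[i] @ f
                if proj > 0:
                    # Multiplicative correction, weighted by the ray weights.
                    f *= (b[i] / proj) ** (mu * A[i])
        print("residual:", np.linalg.norm(A @ f - b))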

  16. Eikonal-Based Inversion of GPR Data from the Vaucluse Karst Aquifer

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; van Vorst, D.; Guglielmi, Y.; Cappa, F.; Gaffet, S.

    2009-12-01

    In this paper, we present an easy-to-implement eikonal-based travel time inversion algorithm and apply it to borehole GPR measurement data obtained from a karst aquifer located in the Vaucluse in Provence. The boreholes are situated within a fault zone deep inside the aquifer, in the Laboratoire Souterrain à Bas Bruit (LSBB). The measurements were made using 250 MHz MALA RAMAC borehole GPR antennas. The inversion formulation is unique in its application of a fast-sweeping eikonal solver (Zhao [1]) to the minimization of an objective functional that is composed of a travel time misfit and a model-based regularization [2]. The solver is robust in the presence of large velocity contrasts, efficient, easy to implement, and does not require the use of a sorting algorithm. The computation of sensitivities, which are required for the inversion process, is achieved by tracing rays backward from receiver to source following the gradient of the travel time field [2]. A user wishing to implement this algorithm can opt to avoid the ray tracing step and simply perturb the model to obtain the required sensitivities. Despite the obvious computational inefficiency of such an approach, it is acceptable for 2D problems. The relationship between travel time and the velocity profile is non-linear, requiring an iterative approach. At each iteration, a set of matrix equations is solved to determine the model update. As the inversion continues, the weighting of the regularization parameter is adjusted until an appropriate data misfit is obtained. The inversion results (recovered permittivity profiles) are consistent with previously obtained geological structure. Future work will look at improving inversion resolution and incorporating other measurement methodologies, with the goal of providing useful data for groundwater analysis. References: [1] H. Zhao, "A fast sweeping method for Eikonal equations," Mathematics of Computation, vol. 74, no. 250, pp. 603-627, 2005. [2] D. Aldridge and D. Oldenburg, "Two-dimensional tomographic inversion with finite-difference traveltimes," Journal of Seismic Exploration, vol. 2, pp. 257-274, 1993.
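
    The fast-sweeping eikonal solver at the heart of the inversion is compact enough to sketch: Gauss-Seidel updates of the Godunov upwind discretization in four alternating sweep orders. The grid, slowness model, and source location below are illustrative.

        # Sketch: Zhao-style fast sweeping solution of |grad T| = s on a 2D grid.
        import numpy as np

        n, h = 101, 1.0
        s = np.full((n, n), 0.5)       # slowness field with two media
        s[:, 60:] = 1.5
        T = np.full((n, n), 1e10)      # large value stands in for "infinity"
        T[50, 20] = 0.0                # point source

        for _ in range(4):             # a few passes over the 4 sweep orders
            for idir in (1, -1):
                for jdir in (1, -1):
                    for i in range(n)[::idir]:
                        for j in range(n)[::jdir]:
                            a = min(T[max(i - 1, 0), j], T[min(i + 1, n - 1), j])
                            b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, n - 1)])
                            sh = s[i, j] * h
                            if abs(a - b) >= sh:       # causality: one-sided update
                                t = min(a, b) + sh
                            else:                      # two-sided quadratic update
                                t = 0.5 * (a + b + np.sqrt(2 * sh**2 - (a - b) ** 2))
                            T[i, j] = min(T[i, j], t)
        print("travel time at far corner:", T[-1, -1])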

  17. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation, and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter as the time step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms with not only the Euler method but also lower-order Runge-Kutta methods applied to discretize the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.

  18. Simultaneous, Joint Inversion of Seismic Body Wave Travel Times and Satellite Gravity Data for Three-Dimensional Tomographic Imaging of Western Colombia

    NASA Astrophysics Data System (ADS)

    Dionicio, V.; Rowe, C. A.; Maceira, M.; Zhang, H.; Londoño, J.

    2009-12-01

    We report on the three-dimensional seismic structure of western Colombia determined through the use of a new simultaneous, joint inversion tomography algorithm. Using data recorded by the National Seismological Network of Colombia (RSNC), we have selected 3,609 earthquakes recorded at 33 sensors distributed throughout the country, with additional data from stations in neighboring countries. 20,338 P-wave arrivals and 17,041 S-wave arrivals are used to invert for structure within a region extending approximately 72.5 to 77.5 degrees West and 2 to 7.5 degrees North. Our algorithm is a modification of the Maceira and Ammon joint inversion code, in combination with the Zhang and Thurber TomoDD (double-difference tomography) program, with a fast LSQR solver operating jointly on the gridded values. The inversion uses gravity anomalies obtained during the GRACE2 satellite mission and solves for these values together with the seismic travel times through application of an empirical relationship first proposed by Harkrider, mapping densities to Vp and Vs within earth materials. In previous work, Maceira and Ammon demonstrated that incorporation of gravity data predicts shear wave velocities more accurately than the inversion of surface waves alone, particularly in regions where the crust exhibits abrupt and significant lateral variations in lithology, such as the Tarim Basin. The significant complexity of crustal structure in Colombia, due to its active tectonic environment, makes it a good candidate for the application with gravity and body waves. We present the results of this joint inversion and compare them to results obtained using travel times alone.
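
    The stacking of the two data types can be sketched with a sparse system handed to SciPy's LSQR: travel-time rows and gravity rows, coupled by an assumed linearized density-to-velocity mapping, are weighted and solved jointly. The matrices, coupling constant, and weights are illustrative, not the Maceira-Ammon/TomoDD implementation.

        # Sketch: joint inversion by stacking weighted travel-time and gravity
        # systems into one sparse least-squares problem solved with LSQR.
        import numpy as np
        from scipy.sparse import csr_matrix, vstack
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(7)
        npar, nt, ng = 300, 500, 120
        G_tt = csr_matrix(rng.normal(size=(nt, npar)) * (rng.random((nt, npar)) < 0.05))
        G_gr = csr_matrix(rng.normal(size=(ng, npar)) * (rng.random((ng, npar)) < 0.20))

        k = 0.3                        # assumed linear density-velocity coupling
        m_true = rng.normal(size=npar)
        d_tt = G_tt @ m_true
        d_gr = G_gr @ (k * m_true)     # gravity senses density ~ k * velocity parameter

        w = 2.0                        # relative weight on the gravity block
        A = vstack([G_tt, w * k * G_gr])
        d = np.concatenate([d_tt, w * d_gr])
        m_est = lsqr(A, d, damp=0.1)[0]
        print("recovery correlation:", np.corrcoef(m_est, m_true)[0, 1])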

  19. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  20. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  1. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.

  2. Derivation of site-specific relationships between hydraulic parameters and p-wave velocities based on hydraulic and seismic tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brauchler, R.; Doetsch, J.; Dietrich, P.

    2012-01-10

    In this study, hydraulic and seismic tomographic measurements were used to derive a site-specific relationship between the geophysical parameter p-wave velocity and the hydraulic parameters diffusivity and specific storage. Our field study includes diffusivity tomograms derived from hydraulic travel time tomography, specific storage tomograms derived from hydraulic attenuation tomography, and p-wave velocity tomograms derived from seismic tomography. The tomographic inversion was performed in all three cases with the SIRT (Simultaneous Iterative Reconstruction Technique) algorithm, using a ray tracing technique with curved trajectories. The experimental set-up was designed such that the p-wave velocity tomogram overlaps the hydraulic tomograms by half. The experiments were performed at a well-characterized sand and gravel aquifer located in the Leine River valley near Göttingen, Germany. Access to the shallow subsurface was provided by direct-push technology. The high spatial resolution of hydraulic and seismic tomography was exploited to derive representative site-specific relationships between the hydraulic and geophysical parameters, based on the area where geophysical and hydraulic tests were performed. The transformation of the p-wave velocities into hydraulic properties was undertaken using a k-means cluster analysis. Results demonstrate that the combination of hydraulic and geophysical tomographic data is a promising approach to improve hydrogeophysical site characterization.

  3. An observational study on the Strength and Movement of EIA in the Indian zone - Results from the Indian Tomography Experiment (CRABEX)

    NASA Astrophysics Data System (ADS)

    Thampi, S. V.; Devasia, C. V.; Ravindran, S.; Pant, T. K.; Sridharan, R.

    To investigate equatorial ionospheric processes such as the Equatorial Ionization Anomaly (EIA) and Equatorial Spread F (ESF) and their interrelationships, a network of five stations receiving the 150 and 400 MHz transmissions from Low Earth Orbiting Satellites (LEOs), covering the region from Trivandrum (8.5°N, Dip ~0.3°N) to New Delhi (28°N, Dip ~20°N), has been set up along the 77-78°E longitude. The receivers measure the relative phase of 150 MHz with respect to 400 MHz, which is proportional to the slant relative Total Electron Content (TEC) along the line of sight. These simultaneous TEC measurements are inverted to obtain the tomographic image of the latitudinal distribution of electron densities in the meridional plane. The inversion is done using the Algebraic Reconstruction Technique (ART). In this paper, the tomographic images of the equatorial ionosphere along the 77-78°E meridians are presented. The images indicate the movement of the anomaly crest, as well as the strength of the EIA at various local times, which in turn control the overall electrodynamics of the evening-time ionosphere, favoring the occurrence of ESF irregularities. These features are discussed in detail under varying geophysical conditions. The results of a sensitivity analysis of the inversion algorithm using model ionospheres are also presented.
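
    The ART step used for the tomographic images can be sketched as the classical Kaczmarz row-action update applied to slant-TEC equations; the ray matrix below is a random nonnegative stand-in for actual satellite-to-receiver path geometry.

        # Sketch: Algebraic Reconstruction Technique (Kaczmarz row action)
        # applied to slant-TEC-like line integrals.
        import numpy as np

        rng = np.random.default_rng(8)
        nray, nvox = 400, 225                    # e.g. a 15 x 15 density grid
        A = rng.random((nray, nvox)) * (rng.random((nray, nvox)) < 0.08)
        ne_true = rng.random(nvox)               # "electron density" voxels
        tec = A @ ne_true                        # relative slant TEC data

        x = np.full(nvox, ne_true.mean())        # background initialization
        lam = 0.2                                # relaxation factor
        for _ in range(30):
            for i in range(nray):
                row = A[i]
                nrm = row @ row
                if nrm > 0:
                    x += lam * (tec[i] - row @ x) / nrm * row
            x = np.maximum(x, 0.0)               # keep densities non-negative
        print("relative residual:", np.linalg.norm(A @ x - tec) / np.linalg.norm(tec))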

  4. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Liu, Feng (Inventor); Lax, Melvin (Inventor); Das, Bidyut B. (Inventor)

    1999-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into an inverse reconstruction algorithm to form an image of the medium, wherein W is a matrix relating the output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise to the fluctuations in the absorption (or diffusion) X_j that we are trying to determine, Λ_ij = λ_j δ_ij with λ_j = ⟨noise²⟩ / ⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion is used to obtain images of three-dimensional hidden objects in turbid scattering media.
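
    The regularized normal-equations form implied by the quantities named above (W, Λ, Y) can be sketched as follows; this is not the patent's exact iterate, and the matrix sizes and noise-to-fluctuation ratio are assumptions.

        # Sketch: one Tikhonov-style regularized step consistent with the
        # quantities W, Lambda, and Y described above.
        import numpy as np

        rng = np.random.default_rng(9)
        nmeas, nvox = 150, 100
        W = rng.normal(size=(nmeas, nvox))   # source/detector/time weight matrix
        dX_true = np.zeros(nvox)
        dX_true[40:45] = 0.2                 # absorption fluctuation (hidden object)
        noise_var = 1e-3
        Y = W @ dX_true + rng.normal(0, np.sqrt(noise_var), nmeas)

        # Diagonal regularizer: lambda_j = <noise^2> / <dXj dXj>, with an
        # assumed prior variance for the fluctuations.
        prior_var = 0.01
        Lam = np.eye(nvox) * (noise_var / prior_var)
        X = np.linalg.solve(W.T @ W + Lam, W.T @ Y)   # regularized reconstruction
        print("peak found near voxel:", int(np.argmax(X)))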

  5. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Gayen, Swapan K. (Inventor)

    2000-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into an inverse reconstruction algorithm to form an image of the medium, wherein W is a matrix relating the output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise to the fluctuations in the absorption (or diffusion) X_j that we are trying to determine, Λ_ij = λ_j δ_ij with λ_j = ⟨noise²⟩ / ⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion is used to obtain images of three-dimensional hidden objects in turbid scattering media.

  6. Downscaling Smooth Tomographic Models: Separating Intrinsic and Apparent Anisotropy

    NASA Astrophysics Data System (ADS)

    Bodin, Thomas; Capdeville, Yann; Romanowicz, Barbara

    2016-04-01

    In recent years, a number of tomographic models based on full waveform inversion have been published. Due to computational constraints, the fitted waveforms are low-pass filtered, which results in an inability to map features smaller than half the shortest wavelength. However, these tomographic images are not a simple spatial average of the true model, but rather an effective, apparent, or equivalent model that provides a similar 'long-wave' data fit. For example, it can be shown that a series of horizontal isotropic layers will be seen by a 'long wave' as a smooth anisotropic medium. In this way, the observed anisotropy in tomographic models is a combination of intrinsic anisotropy produced by lattice-preferred orientation (LPO) of minerals and apparent anisotropy resulting from the incapacity to map discontinuities. Interpretations of observed anisotropy (e.g. in terms of mantle flow) therefore require the separation of its intrinsic and apparent components. The "up-scaling" relations that link elastic properties of a rapidly varying medium to elastic properties of the effective medium as seen by long waves are strongly non-linear, and their inverse is highly non-unique. That is, a smooth homogenized effective model is equivalent to a large number of models with discontinuities. In the 1D case, Capdeville et al. (GJI, 2013) recently showed that a tomographic model which results from the inversion of low-pass filtered waveforms is a homogenized model, i.e. the same as the model computed by upscaling the true model. Here we propose a stochastic method to sample the ensemble of layered models equivalent to a given tomographic profile. We use a transdimensional formulation where the number of layers is variable. Furthermore, each layer may be either isotropic (1 parameter) or intrinsically anisotropic (2 parameters). The parsimonious character of the Bayesian inversion gives preference to models with the fewest parameters (i.e. the fewest layers and the maximum number of isotropic layers). The non-uniqueness of the problem can be addressed by adding high-frequency data, such as receiver functions, able to map first-order discontinuities. We show with synthetic tests that this method enables us to distinguish between intrinsic and apparent anisotropy in tomographic models, as layers with intrinsic anisotropy are only present when required by the data. A real data example is presented based on the latest global model produced at Berkeley.
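
    The "a stack of thin isotropic layers looks anisotropic to long waves" statement has a classical closed form, the Backus (1962) average; the sketch below upscales random isotropic layers to the five effective moduli of a vertically transverse isotropic (VTI) medium. The layer properties are illustrative.

        # Sketch: Backus averaging of thin isotropic layers into an effective
        # VTI medium, then Thomsen-style measures of the apparent anisotropy.
        import numpy as np

        rng = np.random.default_rng(10)
        nlay = 50
        vp = rng.uniform(3.0, 5.0, nlay)       # km/s
        vs = vp / rng.uniform(1.6, 1.9, nlay)
        rho = 2.3 + 0.2 * vp                   # crude density scaling
        lam = rho * (vp**2 - 2 * vs**2)        # Lame parameters per layer
        mu = rho * vs**2

        avg = lambda q: q.mean()               # equal-thickness layer average
        A = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) \
            + avg(lam / (lam + 2 * mu)) ** 2 / avg(1 / (lam + 2 * mu))
        C = 1 / avg(1 / (lam + 2 * mu))
        F = avg(lam / (lam + 2 * mu)) / avg(1 / (lam + 2 * mu))
        L = 1 / avg(1 / mu)
        M = avg(mu)

        eps = (A - C) / (2 * C)                # apparent Thomsen epsilon
        gam = (M - L) / (2 * L)                # apparent Thomsen gamma
        print(f"apparent epsilon={eps:.3f}, gamma={gam:.3f}")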

  7. The role of simulated small-scale ocean variability in inverse computations for ocean acoustic tomography.

    PubMed

    Dushaw, Brian D; Sagen, Hanne

    2017-12-01

    Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow-zone arrivals, such as those that have been observed in the North Pacific and the Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than being a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that match the observations. The approach is not ad hoc but universal, and it has allowed inverse estimates of ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.

  8. Active and Passive Hydrologic Tomographic Surveys:A Revolution in Hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    Yeh, T. J.

    2013-12-01

    Mathematical forward or inverse problems of flow through geological media always have unique solutions if the necessary conditions are given. Unique mathematical solutions to forward or inverse modeling of field problems are, however, always uncertain (an infinite number of possibilities) for many reasons. These include non-representativeness of the governing equations, inaccurate necessary conditions, multi-scale heterogeneity, scale discrepancies between observation and model, noise, and others. Conditional stochastic approaches, which derive the unbiased solution and quantify the solution uncertainty, are therefore most appropriate for forward and inverse modeling of hydrological processes. Conditioning using non-redundant data sets reduces uncertainty. In this presentation, we explain non-redundant data sets in cross-hole aquifer tests, and demonstrate that an active hydraulic tomographic survey (using man-made excitations) is a cost-effective approach to collecting non-redundant data sets of the same type for reducing uncertainty in the inverse modeling. We subsequently show that including flux measurements (a piece of non-redundant data) collected in the same well setup as in hydraulic tomography improves the estimated hydraulic conductivity field. We finally conclude with examples and propositions regarding how to collect and analyze data intelligently by exploiting natural recurrent events (river stage fluctuations, earthquakes, lightning, etc.) as energy sources for basin-scale passive tomographic surveys. The development of information fusion technologies that integrate traditional point measurements and active/passive hydrogeophysical tomographic surveys, as well as advances in sensor, computing, and information technologies, may ultimately advance our capability of characterizing groundwater basins to achieve resolution far beyond the reach of current science and technology.

  9. Wavefield simulations of earthquakes in Alaska for tomographic inversion

    NASA Astrophysics Data System (ADS)

    Silwal, V.; Tape, C.; Casarotti, E.

    2017-12-01

    We assemble a catalog of moment tensors and a three-dimensional seismic velocity model for mainland Alaska, in preparation for an iterative tomographic inversion using spectral-element and adjoint methods. The catalog contains approximately 200 earthquakes with Mw ≥ 4.0 that generate good long-period (≥6 s) signals for stations at distances up to approximately 500 km. To maximize the fraction of usable stations per earthquake, we divide our model into three subregions for simulations: south-central Alaska, central Alaska, and eastern Alaska. The primary geometrical interfaces in the model are the Moho surface, the basement surface of major sedimentary basins, and the topographic surface. The crustal and upper mantle tomographic model is from Eberhart-Phillips et al. (2006), modified by removing the uppermost slow layer and then embedding sedimentary basin models for Cook Inlet basin, Susitna basin, and Nenana basin. We compute 3D synthetic seismograms using the spectral-element method. We demonstrate the accuracy of the initial three-dimensional reference model in each subregion by comparing 3D synthetics with observed data for several earthquakes originating in the crust and in the underlying subducting slab. Full waveform similarity between data and synthetics over the period range 6 s to 30 s provides a basis for an iterative inversion. The target resolution of the crustal structure is 4 km vertically and 20 km laterally. We use surface wave and body wave measurements from local earthquakes to obtain moment tensors that will be used within our tomographic inversion. Local slab events down to 180 km depth, in addition to pervasive crustal seismicity, should enhance resolution.

  10. Tomographic diffractive microscopy with agile illuminations for imaging targets in a noisy background.

    PubMed

    Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K

    2015-02-15

    Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and conventional tomographic reconstruction procedures. The targets can be further characterized with a selective quantitative inversion.

  11. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm, which solves the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the imaged object and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed between the proposed nested algorithm and the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
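
    A minimal sketch of the interleaving idea on a toy 1-D problem, under stated assumptions: a random nonnegative matrix stands in for the projector and a Gaussian kernel for the resolution model (both invented for illustration; this is not the authors' code). Each tomographic EM update on the resolution-blurred image is followed by several cheap image-space EM (Richardson-Lucy) deconvolution steps.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      rng = np.random.default_rng(1)

      n, m = 64, 96
      A = rng.uniform(0.0, 1.0, size=(m, n))           # stand-in projector (assumption)
      blur = lambda v: gaussian_filter1d(v, sigma=2.0, mode="reflect")  # resolution model R

      x_true = np.zeros(n)
      x_true[20] = 50.0
      x_true[40:45] = 30.0
      y = rng.poisson(A @ blur(x_true)).astype(float)  # Poisson data, model y ~ A R x

      x = np.ones(n)                                   # current image estimate
      sens_A = A.T @ np.ones(m)                        # tomographic sensitivity, A^T 1
      sens_R = blur(np.ones(n))                        # deconvolution sensitivity, R^T 1

      for outer in range(50):                          # tomographic EM iterations
          h = blur(x)                                  # resolution-blurred image
          h *= (A.T @ (y / np.maximum(A @ h, 1e-12))) / sens_A   # one MLEM step on h
          for inner in range(10):                      # nested image-space EM steps
              x *= blur(h / np.maximum(blur(x), 1e-12)) / sens_R

      print("peak recovered at voxel", int(x.argmax()), "(true peak at 20)")

    Because the Gaussian kernel is symmetric, its adjoint is approximated by the same blur, which is what keeps the inner deconvolution loop so cheap relative to the tomographic step.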

  12. Comment on 'Imaging of prompt gamma rays emitted during delivery of clinical proton beams with a Compton camera: feasibility studies for range verification'.

    PubMed

    Sitek, Arkadiusz

    2016-12-21

    The origin ensemble (OE) algorithm is a new method used for image reconstruction from nuclear tomographic data. The main advantage of this algorithm is the ease of implementation for complex tomographic models and the sound statistical theory. In this comment, the author provides the basics of the statistical interpretation of OE and gives suggestions for the improvement of the algorithm in the application to prompt gamma imaging as described in Polf et al (2015 Phys. Med. Biol. 60 7085).

  13. Comment on ‘Imaging of prompt gamma rays emitted during delivery of clinical proton beams with a Compton camera: feasibility studies for range verification’

    NASA Astrophysics Data System (ADS)

    Sitek, Arkadiusz

    2016-12-01

    The origin ensemble (OE) algorithm is a new method used for image reconstruction from nuclear tomographic data. The main advantage of this algorithm is the ease of implementation for complex tomographic models and the sound statistical theory. In this comment, the author provides the basics of the statistical interpretation of OE and gives suggestions for the improvement of the algorithm in the application to prompt gamma imaging as described in Polf et al (2015 Phys. Med. Biol. 60 7085).

  14. Research in Image Understanding as Applied to 3-D Microwave Tomographic Imaging with Near Optical Resolution.

    DTIC Science & Technology

    1986-03-10

    and P. Frangos, "Inverse Scattering for Dielectric Media", Annual OSA Meeting, Wash. D.C., Oct. 1985. Invited Presentations: 1. N. Farhat, "Tomographic... Optical Computing", DARPA Briefing, April 1985. Theses: P.V. Frangos, "The Electromagnetic

  15. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in the medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography, since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity-seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (by at least one order of magnitude). The presented reconstruction method can be directly applied to various big-data 4D (x, y, z + time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real, high-resolution dynamic X-ray microtomography (XMT) data. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  16. Fast projection/backprojection and incremental methods applied to synchrotron light tomographic reconstruction.

    PubMed

    de Lima, Camila; Salomão Helou, Elias

    2018-01-01

    Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, thereby making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.

  17. Optical tomographic memories: algorithms for the efficient information readout

    NASA Astrophysics Data System (ADS)

    Pantelic, Dejan V.

    1990-07-01

    Tomographic algorithms are modified in order to reconstruct information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the positions of the bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES. Tomographic principles can be used to store and reconstruct information artificially stored in the bulk of a photosensitive medium. The information is stored by changing some characteristic of the memory material (e.g. refractive index). Radiation from two independent light sources (e.g. lasers) is focused inside the memory material. In this way the intensity of the light is above threshold only at the localized point where the light rays intersect. By scanning the material, the information can be stored in binary or n-ary format. Once the information is stored, it can be read by tomographic methods. However, the situation is quite different from the classical tomographic problem: a great deal of a priori information is available regarding the positions of the bits of information, the profile representing a single bit, and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF TOMOGRAPHIC MEMORIES. A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for information readout, together with simulation results, are presented. Special attention is given to noise considerations. Two different

  18. Noniterative MAP reconstruction using sparse matrix representations.

    PubMed

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, compared to linear iterative reconstruction methods.
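
    A toy illustration of the precompute-and-encode idea, with stated assumptions: the system matrix is random, a regularized inverse stands in for the MAP inverse, and an orthonormal DCT stands in for the transforms that matrix source coding designs (the actual SMT is found numerically). Hard thresholding plays the role of quantization and coding.

      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(3)

      # Toy linear system and its precomputed MAP-style inverse
      # H = (A^T A + lam I)^{-1} A^T (sizes and lam are illustrative).
      n, m, lam = 128, 192, 1.0
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      H = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)

      # Orthonormal transforms on both sides, then hard thresholding,
      # which leaves a sparse coded representation of H.
      Ht = dct(dct(H, norm="ortho", axis=0), norm="ortho", axis=1)
      thr = 0.02 * np.abs(Ht).max()
      Ht[np.abs(Ht) < thr] = 0.0
      print("kept entries: %.1f%%" % (100 * np.count_nonzero(Ht) / Ht.size))

      # Noniterative reconstruction: transform the data, multiply by the
      # sparse coded matrix, inverse-transform the image.
      x_true = rng.normal(size=n)
      y = A @ x_true
      x_exact = H @ y
      x_coded = idct(Ht @ dct(y, norm="ortho"), norm="ortho")
      print("relative error vs exact MAP: %.3f"
            % (np.linalg.norm(x_coded - x_exact) / np.linalg.norm(x_exact)))

    Once the coded matrix is stored, each reconstruction is a sparse matrix-vector product sandwiched between two fast transforms, which is where the reported storage and computation savings come from.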

  19. Tomographic Imaging of the Seismic Structure Beneath the East Anatolian Plateau, Eastern Turkey

    NASA Astrophysics Data System (ADS)

    Gökalp, Hüseyin

    2012-10-01

    The high level of seismic activity in eastern Turkey is thought to be mainly associated with the continuing collision of the Arabian and Eurasian tectonic plates. The determination of a detailed three-dimensional (3D) structure is crucial for a better understanding of this on-going collision or subduction process; therefore, a body wave tomographic inversion technique was applied to the region. The tomographic inversion used high-quality arrival times from earthquakes occurring in the region from 1999 to 2001, recorded by a temporary 29-station broadband IRIS-PASSCAL array operated by research groups from the Universities of Boğaziçi (Turkey) and Cornell (USA). The inverted data consisted of 3,114 P- and 2,298 S-wave arrival times from 252 local events with magnitudes (MD) ranging from 2.5 to 4.8. The stability and resolution of the results were qualitatively assessed by two synthetic tests, a spike test and a checkerboard resolution test, and it was found that the models were well resolved for most parts of the imaged domain. The tomographic inversion results reveal significant lateral heterogeneities in the study area to a depth of ~20 km. The P- and S-wave velocity models are consistent with each other and provide evidence for marked heterogeneities in the upper crustal structure beneath eastern Turkey. One of the most important features in the acquired tomographic images is the high velocity anomalies, generally parallel to the main tectonic units in the region, existing at shallow depths. This may relate to the existence of ophiolitic units at shallow depths. The other feature is that low velocities are widely dispersed through the 3D structure beneath the region at deeper crustal depths. This feature can be an indicator of mantle upwelling, or support the hypothesis that the Anatolian Plateau is underlain by a partially molten uppermost mantle.

  20. Total variation iterative constraint algorithm for limited-angle tomographic reconstruction of non-piecewise-constant structures

    NASA Astrophysics Data System (ADS)

    Krauze, W.; Makowski, P.; Kujawińska, M.

    2015-06-01

    Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms have been developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which extends the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted to a 3D binary mask that is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic the basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (the SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object. Comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on the image quality.
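
    A small sketch of the two-part strategy on a smooth synthetic object, under stated assumptions: a generic underdetermined random matrix stands in for the limited-angle projector (the paper uses SIRT3D from the ASTRA Toolbox), plain Landweber iterations stand in for SIRT, and the TV step uses scikit-image's denoise_tv_chambolle. Thresholds and weights are invented for illustration.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(4)

      # Smooth, non-piecewise-constant object (mimicking a cell) and a
      # generic underdetermined measurement operator (an assumption made
      # to keep the sketch small).
      n = 32
      yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      obj = np.exp(-((xx / 0.6) ** 2 + (yy / 0.4) ** 2)) * (1.0 + 0.3 * xx)
      A = rng.normal(size=(n * n // 2, n * n)) / np.sqrt(n * n)
      y = A @ obj.ravel()

      step = 1.0 / np.linalg.norm(A, 2) ** 2        # Landweber step size

      def reconstruct(mask, n_iter=200):
          """Landweber iterations with the object-domain mask constraint."""
          x = np.zeros(n * n)
          for _ in range(n_iter):
              x = x + step * (A.T @ (y - A @ x))    # gradient step on data fit
              x = np.clip(x, 0.0, None) * mask.ravel()  # TVIC constraint step
          return x.reshape(n, n)

      # Part 1: a heavily TV-regularized first pass gives a sharp-edged
      # image, thresholded into a binary support mask.
      first_pass = reconstruct(np.ones((n, n)))
      mask = denoise_tv_chambolle(first_pass, weight=0.2) > 0.1 * first_pass.max()

      # Part 2: the mask, applied iteratively, replaces TV as the
      # regularizer, so smooth internal structure is preserved.
      final = reconstruct(mask.astype(float))
      print("masked-recon error: %.3f"
            % (np.linalg.norm(final - obj) / np.linalg.norm(obj)))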

  1. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

    Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of common image-point gathers in the offset domain, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime in Kirchhoff pre-stack depth migration (KPSDM), since the dynamic programming method for computing traveltimes in TI media does not suffer from the limitations of shadow zones and traveltime interpolation. We also present a specific implementation strategy for ADCIG extraction via KPSDM, comprising three major steps: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples demonstrate the effectiveness of our approach and show its potential for subsequent tomographic velocity inversion and AVA analysis.
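
    A minimal numeric illustration of step (2), assuming a homogeneous isotropic medium so the traveltime tables are analytic (in the paper they come from dynamic programming in TI media); the geometry and grid values are invented. The slowness vectors at an image point are the gradients of the source- and receiver-side tables, and their angle gives the bin for the angle gather.

      import numpy as np

      # Traveltime tables for a homogeneous medium (v = 2 km/s): with
      # straight rays, the slowness vector at an image point is grad(T),
      # which is exactly what step (2) extracts from the table.
      v = 2.0
      x = np.linspace(0.0, 4.0, 401)
      z = np.linspace(0.0, 4.0, 401)
      X, Z = np.meshgrid(x, z, indexing="ij")
      xs, xr = 1.0, 3.0                        # surface source and receiver, km
      Ts = np.hypot(X - xs, Z) / v             # source-side traveltime table
      Tr = np.hypot(X - xr, Z) / v             # receiver-side traveltime table

      # Step (2): slowness vectors from the gradients of the tables.
      dTs = np.gradient(Ts, x, z)
      dTr = np.gradient(Tr, x, z)

      # Step (3) in miniature: the opening angle at one image point,
      # used to bin the migrated amplitude into its angle gather.
      i, j = 200, 150                          # image point at x = 2 km, z = 1.5 km
      ps = np.array([dTs[0][i, j], dTs[1][i, j]])
      pr = np.array([dTr[0][i, j], dTr[1][i, j]])
      cosang = ps @ pr / (np.linalg.norm(ps) * np.linalg.norm(pr))
      print("opening angle: %.1f deg" % np.degrees(np.arccos(cosang)))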

  2. Validation of Ionosonde Electron Density Reconstruction Algorithms with IONOLAB-RAY in Central Europe

    NASA Astrophysics Data System (ADS)

    Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra

    2016-07-01

    Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosonde, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to the transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from the ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing, and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of electron density profiles from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any existing analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used for various purposes, including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulations at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results for both quiet and disturbed ionospheric states in Central Europe, using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better than the standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in the determination of the reflection height (true height) of signals and the critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.

  3. Crustal Structure Beneath Taiwan Using Frequency-band Inversion of Receiver Function Waveforms

    NASA Astrophysics Data System (ADS)

    Tomfohrde, D. A.; Nowack, R. L.

    Receiver function analysis is used to determine local crustal structure beneath Taiwan. We have performed preliminary data processing and polarization analysis for the selection of stations and events and to increase overall data quality. Receiver function analysis is then applied to data from the Taiwan Seismic Network to obtain radial and transverse receiver functions. Due to the limited azimuthal coverage, only the radial receiver functions are analyzed in terms of horizontally layered crustal structure for each station. In order to improve convergence of the receiver function inversion, frequency-band inversion (FBI) is implemented, in which an iterative inversion procedure with sequentially higher low-pass corner frequencies is used to stabilize the waveform inversion. Frequency-band inversion is applied to receiver functions at six stations of the Taiwan Seismic Network. Initial 20-layer crustal models are inverted for, using prior tomographic results as the initial models. The resulting 20-layer models are then simplified to 4- to 5-layer models and input into an alternating depth and velocity frequency-band inversion. For the six stations investigated, the resulting simplified models provide an average estimate of 38 km for the Moho thickness surrounding the Central Range of Taiwan. The individual station estimates also compare well with the recent tomographic model of Rau and Wu (1995) and the refraction results of Ma and Song (1997).

  4. A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.

    PubMed

    van Dongen, Koen W A; Wright, William M D

    2006-10-01

    Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low-frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model, which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields, were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotic limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.

  5. Direct ambient noise tomography for 3-D near surface shear velocity structure: methodology and applications

    NASA Astrophysics Data System (ADS)

    Yao, H.; Fang, H.; Li, C.; Liu, Y.; Zhang, H.; van der Hilst, R. D.; Huang, Y. C.

    2014-12-01

    Ambient noise tomography has provided essential constraints on crustal and uppermost mantle shear velocity structure in global seismology. Recent studies demonstrate that high-frequency (e.g., ~1 Hz) surface waves between receivers at short distances can be successfully retrieved from ambient noise cross-correlation and then be used for imaging near-surface or shallow crustal shear velocity structures. This approach provides important information for strong ground motion prediction in seismically active areas and overburden structure characterization in oil and gas fields. Here we propose a new tomographic method to invert all surface wave dispersion data for 3-D variations of shear wavespeed without the intermediate step of phase or group velocity maps. The method uses frequency-dependent propagation paths and a wavelet-based sparsity-constrained tomographic inversion. A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers. This avoids the assumption of great-circle propagation that is used in most surface wave tomographic studies, but which is not appropriate in complex media. The wavelet coefficients of the velocity model are estimated with an iteratively reweighted least squares (IRLS) algorithm, and upon iterations the surface wave ray paths and the data sensitivity matrix are updated from the newly obtained velocity model. We apply this new method to determine the 3-D near-surface wavespeed variations in the Taipei basin of Taiwan, the Hefei urban area, and a shale and gas production field in China, using the high-frequency interstation Rayleigh wave dispersion data extracted from ambient noise cross-correlation. The results reveal strong effects of off-great-circle propagation of high-frequency surface waves in these regions, with over 30% shear wavespeed variations. The proposed approach is more efficient and robust than the traditional two-step surface wave tomography for imaging complex structures. In the future, approximate 3-D sensitivity kernels for dispersion data will be incorporated to account for the finite-frequency effects of surface wave propagation. In addition, our approach provides a consistent framework for joint inversion of surface wave dispersion and body wave traveltime data for 3-D Vp and Vs structures.
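
    A compact sketch of the sparsity-constrained step, under stated assumptions: a random matrix stands in for the surface-wave sensitivity matrix, an orthonormal DCT stands in for the wavelet transform, and the ray path and kernel updates between iterations are omitted. The IRLS loop re-solves a weighted least-squares system whose weights approximate an L1 penalty on the transform coefficients.

      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(6)

      # Toy linear problem d = G m, with a model that is sparse in an
      # orthonormal transform (all sizes and values are illustrative).
      n, m_obs, lam, eps = 256, 96, 0.05, 1e-4
      coeffs = np.zeros(n)
      coeffs[[3, 17, 40]] = [1.0, -0.7, 0.5]
      m_true = idct(coeffs, norm="ortho")
      G = rng.normal(size=(m_obs, n)) / np.sqrt(m_obs)
      d = G @ m_true + 0.01 * rng.normal(size=m_obs)

      C = dct(np.eye(n), norm="ortho", axis=0)   # explicit transform matrix, for clarity
      model = G.T @ d                            # starting model
      for it in range(20):                       # IRLS iterations
          w = 1.0 / (np.abs(C @ model) + eps)    # reweighting approximates the L1 norm
          lhs = G.T @ G + lam * (C.T * w) @ C    # normal equations, weighted penalty
          model = np.linalg.solve(lhs, G.T @ d)

      print("relative model error: %.3f"
            % (np.linalg.norm(model - m_true) / np.linalg.norm(m_true)))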

  6. Developing a cosmic ray muon sampling capability for muon tomography and monitoring applications

    NASA Astrophysics Data System (ADS)

    Chatzidakis, S.; Chrysikopoulou, S.; Tsoukalas, L. H.

    2015-12-01

    In this study, a cosmic ray muon sampling capability using a phenomenological model that captures the main characteristics of the experimentally measured spectrum coupled with a set of statistical algorithms is developed. The "muon generator" produces muons with zenith angles in the range 0-90° and energies in the range 1-100 GeV and is suitable for Monte Carlo simulations with emphasis on muon tomographic and monitoring applications. The muon energy distribution is described by the Smith and Duller (1959) [35] phenomenological model. Statistical algorithms are then employed for generating random samples. The inverse transform provides a means to generate samples from the muon angular distribution, whereas the Acceptance-Rejection and Metropolis-Hastings algorithms are employed to provide the energy component. The predictions for muon energies 1-60 GeV and zenith angles 0-90° are validated with a series of actual spectrum measurements and with estimates from the software library CRY. The results confirm the validity of the phenomenological model and the applicability of the statistical algorithms to generate polyenergetic-polydirectional muons. The response of the algorithms and the impact of critical parameters on computation time and computed results were investigated. Final output from the proposed "muon generator" is a look-up table that contains the sampled muon angles and energies and can be easily integrated into Monte Carlo particle simulation codes such as Geant4 and MCNP.
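
    A minimal sketch of the two sampling ingredients, with loudly labeled assumptions: a textbook cos²θ zenith law stands in for the Smith and Duller (1959) spectrum (which couples energy and angle and is more detailed), the energy target is a simplified E^-2.7 power law on 1-100 GeV, and the Metropolis-Hastings variant is omitted. Inverse-transform sampling handles the angle; acceptance-rejection with a log-uniform proposal handles the energy.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100_000

      # Zenith angle by inverse transform: an intensity law ~ cos^2(t)
      # gives pdf ~ cos^2(t) sin(t) on [0, 90 deg], whose CDF
      # F(t) = 1 - cos^3(t) inverts in closed form.
      u = rng.uniform(size=n)
      theta = np.arccos((1.0 - u) ** (1.0 / 3.0))        # radians

      # Energy by acceptance-rejection against g(E) ~ E^-2.7 on [1, 100]
      # GeV. Proposal h(E) ~ 1/E (log-uniform, sampled by inverse
      # transform); the acceptance probability g/(M h) reduces to E^-1.7.
      def sample_energy(size):
          out = np.empty(0)
          while out.size < size:
              e = np.exp(rng.uniform(0.0, np.log(100.0), size=size))
              keep = rng.uniform(size=size) < e ** (-1.7)
              out = np.concatenate([out, e[keep]])
          return out[:size]

      energy = sample_energy(n)

      # Look-up table of sampled (angle, energy) pairs, in the spirit of
      # the generator's output for codes such as Geant4 or MCNP.
      table = np.column_stack([np.degrees(theta), energy])
      print(table[:5])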

  7. Tomographic imaging of fluorescence resonance energy transfer in highly light scattering media

    NASA Astrophysics Data System (ADS)

    Soloviev, Vadim Y.; McGinty, James; Tahir, Khadija B.; Laine, Romain; Stuckey, Daniel W.; Mohan, P. Surya; Hajnal, Joseph V.; Sardini, Alessandro; French, Paul M. W.; Arridge, Simon R.

    2010-02-01

    Three-dimensional localization of protein conformation changes in turbid media using Förster Resonance Energy Transfer (FRET) was investigated by tomographic fluorescence lifetime imaging (FLIM). FRET occurs when a donor fluorophore, initially in its electronic excited state, transfers energy to an acceptor fluorophore in close proximity through non-radiative dipole-dipole coupling. An acceptor effectively behaves as a quencher of the donor's fluorescence. The quenching process is accompanied by a reduction in the quantum yield and lifetime of the donor fluorophore. Therefore, FRET can be localized by imaging changes in the quantum yield and the fluorescence lifetime of the donor fluorophore. Extending FRET to diffuse optical tomography has potentially important applications such as in vivo studies in small animals. We show that FRET can be localized by reconstructing the quantum yield and lifetime distribution from time-resolved non-invasive boundary measurements of fluorescence and transmitted excitation radiation. Image reconstruction was obtained by an inverse scattering algorithm. Thus we report, to the best of our knowledge, the first tomographic FLIM-FRET imaging in turbid media. The approach is demonstrated by imaging a highly scattering cylindrical phantom concealing two thin wells containing cytosol preparations of HEK293 cells expressing TN-L15, a cytosolic genetically-encoded calcium FRET sensor. A 10 mM calcium chloride solution was added to one of the wells to induce a protein conformation change upon binding to TN-L15, resulting in FRET and a corresponding decrease in the donor fluorescence lifetime. The resulting fluorescence lifetime distribution, the quantum efficiency, and the absorption and scattering coefficients were reconstructed.

  8. The shifting zoom: new possibilities for inverse scattering on electrically large domains

    NASA Astrophysics Data System (ADS)

    Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien

    2017-04-01

    Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applications such as the investigation of cultural heritage, the characterization of foundations or buried utilities, the identification of unexploded ordnance, and so on [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and require neither the inversion of a matrix nor the calculation of the elements of a matrix. In fact, they are essentially based on the adjoint of the linearised scattering operator, which in the end allows the inversion formula to be written as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This allows a microwave tomography algorithm to be applied even to large investigation domains. However, the joining side by side of sequential investigation domains introduces a problem of limited (and asymmetric) maximum view angle with regard to targets occurring close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than adjacent investigation domains do, but the extra time actually required is minimal, because the matrix to be inverted is calculated once and for all, as is its singular value decomposition: what is repeated is only a fast matrix-vector multiplication. References: [1] M. Pieraccini, L. Noferini, D. Mecatti, C. Atzeni, R. Persico, F. Soldovieri, "Advanced Processing Techniques for Step-frequency Continuous-Wave Penetrating Radar: the Case Study of 'Palazzo Vecchio' Walls (Firenze, Italy)", Research on Nondestructive Evaluation, vol. 17, pp. 71-83, 2006. [2] N. Masini, R. Persico, E. Rizzo, A. Calia, M. T. Giannotta, G. Quarta, A. Pagliuca, "Integrated Techniques for Analysis and Monitoring of Historical Monuments: the case of S. Giovanni al Sepolcro in Brindisi (Southern Italy)", Near Surface Geophysics, vol. 8 (5), pp. 423-432, 2010. [3] E. Pettinelli, A. Di Matteo, E. Mattei, L. Crocco, F. Soldovieri, J. D. Redman, and A. P. Annan, "GPR response from buried pipes: Measurement on field site and tomographic reconstructions", IEEE Transactions on Geoscience and Remote Sensing, vol. 47, n. 8, pp. 2639-2645, Aug. 2009. [4] O. Lopera, E. C. Slob, N. Milisavljevic and S. Lambot, "Filtering soil surface and antenna effects from GPR data to enhance landmine detection", IEEE Transactions on Geoscience and Remote Sensing, vol. 45, n. 3, pp. 707-717, 2007. [5] R. Persico, "Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing", Wiley, 2014. [6] R. Persico, J. Sala, "The problem of the investigation domain subdivision in 2D linear inversions for large scale GPR data", IEEE Geoscience and Remote Sensing Letters, vol. 11, n. 7, pp. 1215-1219, doi 10.1109/LGRS.2013.2290008, July 2014. [7] R. Persico, F. Soldovieri, S. Lambot, "Shifting zoom in 2D linear inversions performed on GPR data gathered along an electrically large investigation domain", Proc. 16th International Conference on Ground Penetrating Radar (GPR2016), Hong Kong, June 13-16, 2016.

  9. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Items such as the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
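
    A small sketch of the diagnostic, assuming a parallel-beam geometry over a limited 90° span, a square-pulse (pixel) basis, and ray weights approximated by point sampling along each ray; all sizes are invented. The count of singular values above a noise-related tolerance measures how many independent pieces of information the geometry actually constrains.

      import numpy as np

      # Weight matrix for a small tomographic geometry (square-pulse
      # basis, i.e. pixels; ray weights approximated by point sampling).
      n = 16                                   # reconstruction grid n x n
      n_views, n_rays = 12, 24                 # views over a limited 90 deg span
      angles = np.linspace(0.0, np.pi / 2.0, n_views, endpoint=False)

      rows = []
      ts = np.linspace(-0.7, 0.7, 200)         # sample points along each ray
      for a in angles:
          d = np.array([np.cos(a), np.sin(a)])         # ray direction
          o = np.array([-np.sin(a), np.cos(a)])        # offset direction
          for s in np.linspace(-0.45, 0.45, n_rays):
              pts = s * o + ts[:, None] * d            # points on this ray
              ij = np.floor((pts + 0.5) * n).astype(int)
              ok = (ij >= 0).all(1) & (ij < n).all(1)  # inside the unit square
              w = np.zeros(n * n)
              np.add.at(w, ij[ok, 0] * n + ij[ok, 1], ts[1] - ts[0])
              rows.append(w)
      W = np.array(rows)

      # The singular value spectrum diagnoses the conditioning: singular
      # values above a noise-related tolerance count the independently
      # constrained degrees of freedom.
      sv = np.linalg.svd(W, compute_uv=False)
      tol = 1e-2 * sv[0]
      print("significant singular values: %d of %d unknowns"
            % (np.sum(sv > tol), n * n))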

  10. On a gas electron multiplier based synthetic diagnostic for soft x-ray tomography on WEST with focus on impurity transport studies

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.

    2017-08-01

    The tokamak WEST aims at testing ITER divertor high-heat-flux component technology in long pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma-facing components can pollute the plasma core, where their radiative cooling in the soft x-ray (SXR) range is detrimental to energy confinement and plasma stability. SXR diagnostics give valuable information to monitor impurities and study their transport. The WEST SXR diagnostic is composed of two new cameras based on Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction using a recently developed synthetic diagnostic coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario; the photoionization, electron cloud transport and avalanche in the detection volume using Magboltz; and the tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.

  11. ODTbrain: a Python library for full-view, dense diffraction tomography.

    PubMed

    Müller, Paul; Schürmann, Mirjam; Guck, Jochen

    2015-11-04

    Analyzing the three-dimensional (3D) refractive index distribution of a single cell makes it possible to describe and characterize its inner structure in a marker-free manner. A dense, full-view tomographic data set is a set of images of a cell acquired for multiple rotational positions, densely distributed from 0 to 360 degrees. The reconstruction is commonly realized by projection tomography, which is based on the inversion of the Radon transform. The reconstruction quality of projection tomography is greatly improved when first-order scattering, which becomes relevant when the imaging wavelength is comparable to the characteristic object size, is taken into account. This advanced reconstruction technique is called diffraction tomography. While many implementations of projection tomography are available today, there has been no publicly available implementation of diffraction tomography so far. We present a Python library that implements the backpropagation algorithm for diffraction tomography in 3D. By establishing benchmarks based on finite-difference time-domain (FDTD) simulations, we showcase the superiority of the backpropagation algorithm over the backprojection algorithm. Furthermore, we discuss how measurement parameters influence the reconstructed refractive index distribution, and we give insights into the applicability of diffraction tomography to biological cells. The present software library contains a robust implementation of the backpropagation algorithm. The algorithm is ideally suited for application to biological cells. Furthermore, the implementation is a drop-in replacement for the classical backprojection algorithm and is made available to the large user community of the Python programming language.

  12. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
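
    A self-contained sketch of the POCA reconstruction primitive that the methods above build on: the scattering vertex is estimated as the midpoint of the closest-approach segment between the incoming and outgoing muon tracks (the track values below are invented for illustration). Clustering or autocorrelation analysis would then operate on many such points.

      import numpy as np

      def poca(p1, d1, p2, d2):
          """Point of closest approach between the incoming track (point
          p1, direction d1) and the outgoing track (p2, d2); returns the
          midpoint of the closest-approach segment and the miss distance."""
          d1 = d1 / np.linalg.norm(d1)
          d2 = d2 / np.linalg.norm(d2)
          w0 = p1 - p2
          b = d1 @ d2
          denom = 1.0 - b * b                      # ~0 for (anti)parallel tracks
          if denom < 1e-12:
              return 0.5 * (p1 + p2), np.linalg.norm(np.cross(w0, d1))
          s = (b * (d2 @ w0) - (d1 @ w0)) / denom  # parameter on track 1
          t = ((d2 @ w0) - b * (d1 @ w0)) / denom  # parameter on track 2
          c1, c2 = p1 + s * d1, p2 + t * d2
          return 0.5 * (c1 + c2), np.linalg.norm(c1 - c2)

      # Example: a muon deflected near the origin inside the container.
      point, miss = poca(np.array([0.0, 0.0, 10.0]), np.array([0.01, 0.0, -1.0]),
                         np.array([0.05, 0.0, -10.0]), np.array([0.04, 0.0, -1.0]))
      print("POCA:", point, "miss distance: %.3f" % miss)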

  13. Surface wave tomography of Europe from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne

    2017-04-01

    We present a European-scale high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), which yields more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh wave group velocities are measured at each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s. Adaptive parameterization is used to accommodate heterogeneity in data coverage. We then apply a two-step data-driven inversion method to obtain the shear wave velocity model, the two steps being a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth (and its uncertainty) is determined over most of our study region by identifying and analysing sharp velocity discontinuities (and their sharpness). The resulting velocity model shows good agreement with the main geological features and previous geophysical studies. Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thinning that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.

  14. Full seismic waveform tomography for upper-mantle structure in the Australasian region using adjoint methods

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; Kennett, Brian L. N.; Igel, Heiner; Bunge, Hans-Peter

    2009-12-01

    We present a full seismic waveform tomography for upper-mantle structure in the Australasian region. Our method is based on spectral-element simulations of seismic wave propagation in 3-D heterogeneous earth models. The accurate solution of the forward problem ensures that waveform misfits are solely due to as yet undiscovered Earth structure and imprecise source descriptions, thus leading to more realistic tomographic images and source parameter estimates. To reduce the computational costs, we implement a long-wavelength equivalent crustal model. We quantify differences between the observed and the synthetic waveforms using time-frequency (TF) misfits. Their principal advantages are the separation of phase and amplitude misfits, the exploitation of complete waveform information and a quasi-linear relation to 3-D Earth structure. Fréchet kernels for the TF misfits are computed via the adjoint method. We propose a simple data compression scheme and an accuracy-adaptive time integration of the wavefields that allows us to reduce the storage requirements of the adjoint method by almost two orders of magnitude. To minimize the waveform phase misfit, we implement a pre-conditioned conjugate gradient algorithm. Amplitude information is incorporated indirectly by a restricted line search. This ensures that the cumulative envelope misfit does not increase during the inversion. An efficient pre-conditioner is found empirically through numerical experiments. It prevents the concentration of structural heterogeneity near the sources and receivers. We apply our waveform tomographic method to ~1000 high-quality vertical-component seismograms, recorded in the Australasian region between 1993 and 2008. The waveforms comprise fundamental- and higher-mode surface and long-period S body waves in the period range from 50 to 200 s. To improve the convergence of the algorithm, we implement a 3-D initial model that contains the long-wavelength features of the Australasian region. Resolution tests indicate that our algorithm converges after around 10 iterations and that both long- and short-wavelength features in the uppermost mantle are well resolved. There is evidence for effects related to the non-linearity in the inversion procedure. After 11 iterations we fit the data waveforms acceptably well, with no significant further improvement expected. During the inversion the total fitted seismogram length increases by 46 per cent, providing a clear indication of the efficiency and consistency of the iterative optimization algorithm. The resulting SV-wave velocity model reveals structural features of the Australasian upper mantle in great detail. We confirm the existence of a pronounced low-velocity band along the eastern margin of the continent that can be clearly distinguished against Precambrian Australia and the microcontinental Lord Howe Rise. The transition from Precambrian to Phanerozoic Australia (the Tasman Line) appears to be sharp down to at least 200 km depth. It mostly occurs further east than where it is inferred from gravity and magnetic anomalies. Also clearly visible are the Archean and Proterozoic cratons, the northward continuation of the continent and anomalously low S-wave velocities in the upper mantle in central Australia. This is, to the best of our knowledge, the first application of non-linear full seismic waveform tomography to a continental-scale problem.

  15. Tomographic imaging of Central Java, Indonesia: Preliminary result of joint inversion of the MERAMEX and MCGA earthquake data

    NASA Astrophysics Data System (ADS)

    Rohadi, Supriyanto; Widiyantoro, Sri; Nugraha, Andri Dian; Masturyono

    2013-09-01

    The realization of local earthquake tomography is usually conducted by removing distant events outside the study region, because these events may increase errors. In this study, tomographic inversion has been conducted using the travel time data of local and regional events in order to improve the structural resolution, especially for deep structures. We used the local MERapi Amphibious EXperiment (MERAMEX) data catalog, which consists of 292 events from May to October 2004. Additional new data for regional events in the Java region were taken from the Meteorological, Climatological, and Geophysical Agency (MCGA) of Indonesia, consisting of 882 events, each with at least 10 recorded phases at seismographic stations from April 2009 to February 2011. We have conducted joint inversions of the combined data sets using double-difference tomography to invert for velocity structures and to conduct hypocenter relocation simultaneously. The checkerboard test results for the Vp and Vs structures demonstrate a significantly improved spatial resolution from the shallow crust down to a depth of 165 km. Our tomographic inversions reveal a low velocity anomaly beneath the Lawu-Merapi zone, which is consistent with the results from previous studies. A strong velocity anomaly zone with low Vp, low Vs and low Vp/Vs is also identified between Cilacap and Banyumas. We interpret this anomaly as fluid-bearing material with a large aspect ratio, or a sediment layer. This anomaly zone is in good agreement with the existence of a large sediment-filled dome in this area, as proposed by previous geological studies. A low velocity anomaly zone is also detected in Kebumen, where it may be related to the extensional oceanic basin toward the land.

  16. Tomographic imaging of Central Java, Indonesia: Preliminary result of joint inversion of the MERAMEX and MCGA earthquake data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohadi, Supriyanto; Widiyantoro, Sri; Nugraha, Andri Dian

    The realization of local earthquake tomography is usually conducted by removing distant events outside the study region, because these events may increase errors. In this study, tomographic inversion has been conducted using the travel time data of local and regional events in order to improve the structural resolution, especially for deep structures. We used the local MERapi Amphibious EXperiment (MERAMEX) data catalog, which consists of 292 events from May to October 2004. Additional new data for regional events in the Java region were taken from the Meteorological, Climatological, and Geophysical Agency (MCGA) of Indonesia, consisting of 882 events, each with at least 10 recorded phases at seismographic stations from April 2009 to February 2011. We have conducted joint inversions of the combined data sets using double-difference tomography to invert for velocity structures and to conduct hypocenter relocation simultaneously. The checkerboard test results for the Vp and Vs structures demonstrate a significantly improved spatial resolution from the shallow crust down to a depth of 165 km. Our tomographic inversions reveal a low velocity anomaly beneath the Lawu-Merapi zone, which is consistent with the results from previous studies. A strong velocity anomaly zone with low Vp, low Vs and low Vp/Vs is also identified between Cilacap and Banyumas. We interpret this anomaly as fluid-bearing material with a large aspect ratio, or a sediment layer. This anomaly zone is in good agreement with the existence of a large sediment-filled dome in this area, as proposed by previous geological studies. A low velocity anomaly zone is also detected in Kebumen, where it may be related to the extensional oceanic basin toward the land.

  17. Assessing the resolution-dependent utility of tomograms for geostatistics

    USGS Publications Warehouse

    Day-Lewis, F. D.; Lane, J.W.

    2004-01-01

    Geophysical tomograms are used increasingly as auxiliary data for geostatistical modeling of aquifer and reservoir properties. The correlation between tomographic estimates and hydrogeologic properties is commonly based on laboratory measurements, co-located measurements at boreholes, or petrophysical models. The inferred correlation is assumed uniform throughout the interwell region; however, tomographic resolution varies spatially due to acquisition geometry, regularization, data error, and the physics underlying the geophysical measurements. Blurring and inversion artifacts are expected in regions traversed by few or only low-angle raypaths. In the context of radar traveltime tomography, we derive analytical models for (1) the variance of tomographic estimates, (2) the spatially variable correlation with a hydrologic parameter of interest, and (3) the spatial covariance of tomographic estimates. Synthetic examples demonstrate that tomograms of qualitative value may have limited utility for geostatistics; moreover, the imprint of regularization may preclude inference of meaningful spatial statistics from tomograms.
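
    In the standard notation of linear inverse theory, and assuming the tomographic estimate is linear in the data (a generic sketch consistent with, but not reproduced from, the paper's derivations), the three analytical models take the following forms for an estimate based on a generalized inverse G† applied to data d = Gm + ε:

      \hat{\mathbf m} = \mathbf G^{\dagger}\mathbf d
                      = \mathbf R\,\mathbf m + \mathbf G^{\dagger}\boldsymbol{\varepsilon},
      \qquad \mathbf R \equiv \mathbf G^{\dagger}\mathbf G \quad \text{(resolution matrix)}

      \operatorname{Cov}(\hat{\mathbf m})
        = \mathbf R\,\mathbf C_m\,\mathbf R^{\mathsf T}
        + \mathbf G^{\dagger}\,\mathbf C_{\varepsilon}\,\mathbf G^{\dagger\mathsf T}
      \quad \text{(variance of the estimates)}

      \rho_i
        = \frac{(\mathbf R\,\mathbf C_m)_{ii}}
               {\sqrt{(\mathbf C_m)_{ii}\,\operatorname{Cov}(\hat{\mathbf m})_{ii}}}
      \quad \text{(correlation with the true parameter in cell } i\text{)}

    Here C_m and C_ε are the model and data covariances. Because R varies over the image plane with ray coverage and regularization, so do the estimation variance and the correlation ρ_i, which is the spatial variability the abstract emphasizes; a linear petrophysical transform to the hydrologic parameter leaves the correlation unchanged up to sign.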

  18. Regridding reconstruction algorithm for real-time tomographic imaging

    PubMed Central

    Marone, F.; Stampanoni, M.

    2012-01-01

    Sub-second temporal-resolution tomographic microscopy is becoming a reality at third-generation synchrotron sources. Efficient data handling and post-processing is, however, difficult when the data rates are close to 10 GB s⁻¹. This bottleneck still hinders exploitation of the full potential inherent in the ultrafast acquisition speed. In this paper the fast reconstruction algorithm gridrec, highly optimized for conventional CPU technology, is presented. It is shown that gridrec is a valuable alternative to standard filtered back-projection routines, despite being based on the Fourier transform method. In fact, the regridding procedure used for resampling the Fourier space from polar to Cartesian coordinates couples excellent performance with negligible accuracy degradation. The stronger dependence of the observed signal-to-noise ratio of gridrec reconstructions on the number of angular views makes the presented algorithm superior even to filtered back-projection when the tomographic problem is well sampled. Gridrec not only guarantees high-quality results but provides up to a 20-fold performance increase, making real-time monitoring of the sub-second acquisition process a reality. PMID:23093766
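
    A short numerical check of the Fourier-slice relation that gridrec exploits (the phantom and sizes are invented; this is not the gridrec code): the 1-D FFT of a parallel projection equals a radial slice of the object's 2-D FFT, so reconstruction reduces to regridding polar Fourier samples onto a Cartesian grid followed by a single inverse 2-D FFT.

      import numpy as np

      n = 256
      yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      obj = ((xx ** 2 + yy ** 2) < 0.4).astype(float)   # simple disc phantom

      proj = obj.sum(axis=0)                            # projection at angle 0
      slice_from_proj = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(proj)))
      f2d = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj)))
      slice_from_2dfft = f2d[n // 2, :]                 # horizontal radial slice

      err = np.abs(slice_from_proj - slice_from_2dfft).max() / np.abs(f2d).max()
      print("max mismatch (relative): %.2e" % err)      # ~machine precision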

  19. Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.; Li, Cuiping

    The inversion of seismic travel-time data for radially varying media was initially investigated by Herglotz, Wiechert, and Bateman (the HWB method) in the early part of the 20th century [1]. Tomographic inversions for laterally varying media began in seismology in the 1970s. This included early work by Aki, Christoffersson, and Husebye, who developed an inversion technique for estimating lithospheric structure beneath a seismic array from distant earthquakes (the ACH method) [2]. Also, Alekseev and others in Russia performed early inversions of refraction data for laterally varying upper mantle structure [3]. Aki and Lee [4] developed an inversion technique using travel-time data from local earthquakes.

  20. SXR measurement and W transport survey using GEM tomographic system on WEST

    NASA Astrophysics Data System (ADS)

    Mazon, D.; Jardin, A.; Malard, P.; Chernyshova, M.; Coston, C.; Malard, P.; O'Mullane, M.; Czarski, T.; Malinowski, K.; Faisse, F.; Ferlay, F.; Verger, J. M.; Bec, A.; Larroque, S.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.

    2017-11-01

    Measuring Soft X-Ray (SXR) radiation (0.1-20 keV) from fusion plasmas is a standard way of accessing valuable information on particle transport. Since heavy impurities like tungsten (W) can degrade plasma core performance and cause radiative collapses, it is necessary to develop new diagnostics able to monitor the impurity distribution in harsh fusion environments like ITER. A gaseous detector with energy discrimination would be a very good candidate for this purpose. The design and implementation of a new SXR diagnostic developed for the WEST project, based on a triple Gas Electron Multiplier (GEM) detector, is presented. This detector works in photon counting mode and offers energy discrimination capabilities. The SXR system is composed of two 1D cameras (vertical and horizontal views, respectively), located in the same poloidal cross-section to allow for tomographic reconstruction. An array (20 cm × 2 cm) consists of up to 128 detectors in front of a beryllium pinhole (equipped with a 1 mm diameter diaphragm) inserted at about 50 cm depth inside a cooled thimble in order to retrieve a wide plasma view. Acquisition of the low-energy spectrum is ensured by a helium buffer installed between the pinhole and the detector. Complementary water cooling systems are used to maintain a constant temperature (25 °C) inside the thimble. Finally, a real-time automatic extraction system has been developed to protect the diagnostic during baking phases or any unwanted overheating events. Preliminary simulations of plasma emissivity and W distribution have been performed for WEST using a recently developed synthetic diagnostic coupled to a tomographic algorithm based on the minimum Fisher information (MFI) inversion method. First GEM acquisitions are presented, as well as an estimation of the effect of transport in the presence of ICRH on the W density reconstruction capabilities of the GEM.

  1. Object-based inversion of crosswell radar tomography data to monitor vegetable oil injection experiments

    USGS Publications Warehouse

    Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.

    2004-01-01

    Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions. A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.
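
    The object-based parameterization described above is easy to mock up with a generic constrained non-linear least-squares solver. The sketch below is illustrative only (not the USGS code): the oil-affected region is a stack of rectangles with fixed depth intervals, the unknowns are each rectangle's half-width plus the slowness changes, and path lengths are approximated by point sampling along straight rays.

    ```python
    # Object-based inversion sketch: fit rectangle geometry + slowness change
    # to pre/post-injection travel-time differences.
    import numpy as np
    from scipy.optimize import least_squares

    def ray_lengths(src, rec, z_edges, x_center, half_widths, n=200):
        """Path length of the straight ray src->rec inside each rectangle
        [x_center - w, x_center + w] x [z_edges[i], z_edges[i+1]]."""
        t = np.linspace(0.0, 1.0, n)
        pts = src[None, :] * (1 - t[:, None]) + rec[None, :] * t[:, None]
        seg = np.linalg.norm(rec - src) / n         # length per sample point
        out = []
        for i, w in enumerate(half_widths):
            inside = ((np.abs(pts[:, 0] - x_center) <= w) &
                      (pts[:, 1] >= z_edges[i]) & (pts[:, 1] < z_edges[i + 1]))
            out.append(inside.sum() * seg)
        return np.array(out)

    def residuals(params, rays, z_edges, x_center, dt_obs):
        k = len(z_edges) - 1
        widths, ds = params[:k], params[k:]         # geometry + slowness change
        pred = np.array([ray_lengths(s, r, z_edges, x_center, widths) @ ds
                         for s, r in rays])
        return pred - dt_obs

    # Typical call (bounds keep widths and slowness changes physical):
    # fit = least_squares(residuals, p0, bounds=(lb, ub),
    #                     args=(rays, z_edges, x_center, dt_obs))
    ```

    With a handful of rectangles the problem is overdetermined, which is exactly the point the authors make about reducing the need for prior information.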

  2. Anisotropic Lithospheric layering in the North American craton, revealed by Bayesian inversion of short and long period data

    NASA Astrophysics Data System (ADS)

    Roy, Corinna; Calo, Marco; Bodin, Thomas; Romanowicz, Barbara

    2016-04-01

    Competing hypotheses for the formation and evolution of continents remain actively debated, including underplating by hot plumes and accretion by shallow subduction in continental or arc settings. To discriminate between these hypotheses, documenting structural layering in the cratonic lithosphere becomes especially important. Recent studies of seismic-wave receiver function data have detected a structural boundary under continental cratons at 100-140 km depth, which is too shallow to be consistent with the lithosphere-asthenosphere boundary (LAB), as inferred from seismic tomography and other geophysical studies. This leads to the conclusion that 1) the cratonic lithosphere may be thinner than expected, contradicting tomographic and other geophysical or geochemical inferences, or 2) the receiver function studies detect a mid-lithospheric discontinuity rather than the LAB. On the other hand, several recent studies documented significant changes in the direction of azimuthal anisotropy with depth that suggest layering in the anisotropic structure of the stable part of the North American continent. In particular, Yuan and Romanowicz (2010) combined long-period surface wave and overtone data with core-refracted shear wave (SKS) splitting measurements in a joint tomographic inversion. A question that arises is whether the anisotropic layering observed coincides with that obtained from receiver function studies. To address this question, we use a trans-dimensional Markov-chain Monte Carlo (MCMC) algorithm to generate probabilistic 1D radially and azimuthally anisotropic shear wave velocity profiles for selected stations in North America. In the algorithm we jointly invert short-period data (Ps receiver functions, surface wave dispersion for Love and Rayleigh waves) and long-period data (SKS waveforms). By including three different data types, which sample different volumes of the Earth and have different sensitivities to structure, we overcome the problem of incompatible interpretations of models provided by only one data set. The resulting 1D profiles include both isotropic and anisotropic discontinuities in the upper mantle (above 350 km depth). A major advantage of our procedure is that it avoids intermediate processing steps such as numerical deconvolution or the calculation of splitting parameters, which can be very sensitive to noise. Additionally, the number of layers, the data noise and the presence of anisotropy are all treated as unknowns in the trans-dimensional MCMC algorithm. We recently demonstrated the power of this approach in the case of two stations located in different tectonic settings (Bodin et al., 2015, submitted). Here we extend this approach to a broader range of settings within the North American continent.
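
    As a schematic of the Bayesian sampling idea (only the idea: the authors' algorithm is trans-dimensional, with birth/death moves that change the number of layers, and its forward problem couples receiver functions, dispersion and SKS waveforms), a fixed-dimension random-walk Metropolis sampler looks as follows; the linear forward operator `F` is a placeholder assumption.

    ```python
    # Minimal Metropolis-Hastings sampler producing a posterior ensemble.
    import numpy as np

    def mh_sample(F, d_obs, sigma, n_iter=20000, step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        m = np.zeros(F.shape[1])

        def loglike(m):
            r = F @ m - d_obs
            return -0.5 * np.sum((r / sigma) ** 2)

        ll, samples = loglike(m), []
        for _ in range(n_iter):
            m_prop = m + step * rng.standard_normal(m.size)  # random-walk move
            ll_prop = loglike(m_prop)
            if np.log(rng.random()) < ll_prop - ll:          # accept/reject
                m, ll = m_prop, ll_prop
            samples.append(m.copy())
        return np.array(samples)  # ensemble -> mean profile and uncertainties
    ```

    In the trans-dimensional version, proposals also add or remove a layer and the acceptance ratio acquires the corresponding dimension-balancing terms; treating the noise level as a sampled parameter is a further straightforward extension.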

  3. Electrical resistance tomography from measurements inside a steel cased borehole

    DOEpatents

    Daily, William D.; Schenkel, Clifford; Ramirez, Abelardo L.

    2000-01-01

    Electrical resistance tomography (ERT) is produced from measurements taken inside a steel-cased borehole. Electrical resistance measurements made within the steel casing are tomographically inverted to image the electrical resistivity distribution in the formation away from the borehole. The ERT method combines electrical resistance measurements made inside the steel casing of a borehole, to determine the electrical resistivity of the formation adjacent to the borehole, with the inversion of electrical resistance measurements made from a borehole not cased with an electrically conducting casing, to determine the electrical resistivity distribution remote from the borehole. It has been demonstrated that, by combining these techniques, highly accurate current-injection and voltage measurements made at appropriate points within the casing can be tomographically inverted to yield useful information outside the borehole casing.

  4. Inverse scattering and refraction corrected reflection for breast cancer imaging

    NASA Astrophysics Data System (ADS)

    Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John

    2010-03-01

    Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms that are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D attenuation and speed-of-sound (SOS) images. The reflection algorithm is based on canonical ray tracing with refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360° compounding, which yields the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out on two NVIDIA® Tesla™ GPU processors accessing data on a 4-terabyte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for the diagnosis of breast disease.

  5. Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.

    2016-04-01

    We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with a prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters, and use Gaussian Markov random fields as a sparse-matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with a clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that, in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With continuous access to an ionosonde, the presented method significantly enhances stand-alone near-real-time ionospheric tomography under the given conditions.
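
    For a linear forward model the posterior mean of such a Gaussian-prior inversion has a closed form, and the sparsity of the GMRF precision matrix is what keeps it cheap. A minimal sketch follows (illustrative shapes and names, not the authors' code):

    ```python
    # Bayesian posterior-mean estimate with a sparse GMRF prior precision Q.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def bayes_tomography(A, y, x0, Q, noise_var):
        """A: (n_rays, n_pix) sparse geometry matrix, y: TEC-type data,
        x0: prior mean (e.g. built from the ionosonde profile),
        Q: sparse prior precision matrix (the GMRF approximation)."""
        AtA = (A.T @ A) / noise_var
        rhs = (A.T @ (y - A @ x0)) / noise_var
        dx = spsolve((AtA + Q).tocsc(), rhs)   # sparse posterior solve
        return x0 + dx                         # posterior mean electron density
    ```

    The ionosonde's role in the abstract corresponds to choosing `x0` (and the parameters inside `Q`) from measured profiles rather than from a climatological model.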

  6. Optimization-Based Approach for Joint X-Ray Fluorescence and Transmission Tomographic Inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Zichao; Leyffer, Sven; Wild, Stefan M.

    2016-01-01

    Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used for revealing the internal elemental composition of a sample. On the other hand, conventional X-ray transmission tomography can be used for reconstructing the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate both X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstruction of the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that compared with the single-modality inversion (i.e., the X-ray transmission or fluorescence alone), the joint inversion provides a better-posed problem, which implies a better recovery. Therefore, the challenges in X-ray fluorescence tomography arising mainly from the effects of self-absorption in the sample are partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.

  7. Autonomous calibration of single spin qubit operations

    NASA Astrophysics Data System (ADS)

    Frank, Florian; Unden, Thomas; Zoller, Jonathan; Said, Ressa S.; Calarco, Tommaso; Montangero, Simone; Naydenov, Boris; Jelezko, Fedor

    2017-12-01

    Fully autonomous precise control of qubits is crucial for quantum information processing, quantum communication, and quantum sensing applications. It requires minimal human intervention, relying instead on the ability to model, predict, and anticipate the quantum dynamics, as well as to precisely control and calibrate single-qubit operations. Here, we demonstrate autonomous single-qubit calibration via closed-loop optimisation of electron spin quantum operations in diamond. The operations are examined by quantum state and process tomographic measurements at room temperature, and their performance against systematic errors is iteratively rectified by an optimal pulse engineering algorithm. We achieve autonomously calibrated fidelities of up to 1.00 on a time scale of minutes for a spin population inversion and up to 0.98 on a time scale of hours for a single-qubit π/2 rotation, within the experimental error of 2%. These results demonstrate the full potential of this approach for versatile quantum technologies.

  8. In vivo fluorescence lifetime tomography of a FRET probe expressed in mouse

    PubMed Central

    McGinty, James; Stuckey, Daniel W.; Soloviev, Vadim Y.; Laine, Romain; Wylezinska-Arridge, Marzena; Wells, Dominic J.; Arridge, Simon R.; French, Paul M. W.; Hajnal, Joseph V.; Sardini, Alessandro

    2011-01-01

    Förster resonance energy transfer (FRET) is a powerful biological tool for reading out cell signaling processes. In vivo use of FRET is challenging because of the scattering properties of bulk tissue. By combining diffuse fluorescence tomography with fluorescence lifetime imaging (FLIM), implemented using wide-field time-gated detection of fluorescence excited by ultrashort laser pulses in a tomographic imaging system and applying inverse scattering algorithms, we can reconstruct the three dimensional spatial localization of fluorescence quantum efficiency and lifetime. We demonstrate in vivo spatial mapping of FRET between genetically expressed fluorescent proteins in live mice read out using FLIM. Following transfection by electroporation, mouse hind leg muscles were imaged in vivo and the emission of free donor (eGFP) in the presence of free acceptor (mCherry) could be clearly distinguished from the fluorescence of the donor when directly linked to the acceptor in a tandem (eGFP-mCherry) FRET construct. PMID:21750768

  9. Influence of the limited detector size on spatial variations of the reconstruction accuracy in holographic tomography

    NASA Astrophysics Data System (ADS)

    Kostencka, Julianna; Kozacki, Tomasz; Hennelly, Bryan; Sheridan, John T.

    2017-06-01

    Holographic tomography (HT) allows noninvasive, quantitative, 3D imaging of transparent microobjects, such as living biological cells and fiber-optic elements. The technique is based on the acquisition of multiple scattered fields for various sample perspectives using digital holographic microscopy. The captured data are then processed with one of the tomographic reconstruction algorithms, which enables 3D reconstruction of the refractive index distribution. In our recent works we addressed the issue of the spatially variant accuracy of HT reconstructions, which results from the insufficient model of diffraction applied in the widely used tomographic reconstruction algorithms based on the Rytov approximation. In the present study we continue investigating the spatially variant properties of HT imaging; now, however, we focus on the limited spatial size of the holograms as a source of this problem. Using the Wigner distribution representation and the Ewald sphere approach, we show that the limited size of the holograms results in a decreased quality of tomographic imaging in off-center regions of the HT reconstructions. This is because the finite detector extent becomes a limiting aperture that prohibits acquisition of full information about diffracted fields coming from the out-of-focus structures of a sample. The incompleteness of the data results in an effective truncation of the tomographic transfer function for the off-center regions of the tomographic image. In this paper, the described effect is quantitatively characterized for three types of tomographic systems: 1) the configuration with object rotation, 2) the configuration with scanning of the illumination direction, and 3) the hybrid HT solution combining both approaches.

  10. Broadband Ground Motion Synthesis of the 1999 Turkey Earthquakes Based On: 3-D Velocity Inversion, Finite Difference Calculations and Empirical Green's Functions

    NASA Astrophysics Data System (ADS)

    Gok, R.; Kalafat, D.; Hutchings, L.

    2003-12-01

    We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D, Larsen 1998), and records available for use as empirical Green's functions. Ultimately our goal is to model the 1999 earthquakes from DC to 25 Hz and study fault rupture mechanics and kinematic rupture models. We performed the simultaneous inversion for hypocenter locations and three-dimensional P- and S-wave velocity structure of the Marmara region using SIMULPS14, along with 2,500 events with more than eight P readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara region than in the western Marmara region due to denser ray coverage. We used the obtained velocity structure as input into the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long-period waveforms (f < 0.5 Hz). We also obtained Mo, fc and individual station kappa values for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model out of small-to-moderate-size earthquakes (M < 4.0) to obtain empirical Green's functions (EGF) for the higher frequency range of the ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source scaling relation (energy-moment) of these aftershocks. We have generated several scenarios constrained by a priori knowledge of the Izmit and Duzce rupture parameters to validate our prediction capability.

  11. A parallel algorithm for 2D visco-acoustic frequency-domain full-waveform inversion: application to a dense OBS data set

    NASA Astrophysics Data System (ADS)

    Sourbier, F.; Operto, S.; Virieux, J.

    2006-12-01

    We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies by proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into 3 main steps: first, a symbolic analysis step that performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization, together with an estimation of the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate the residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. In the end, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency, before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires one simulation per non-redundant shot and receiver position. The same strategy as for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a 4201 x 1001 finite-difference grid with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on six 32-bit dual-processor nodes with 4 GB of RAM per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
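
    The gradient logic described above (two solves per shot, one LU factorization reused for everything at a given frequency) can be sketched compactly. The toy below uses a crude 5-point Helmholtz stencil and a placeholder gradient kernel, so signs and scalings are indicative only; `splu` plays the role MUMPS plays in the real code.

    ```python
    # Frequency-domain FWI gradient: factor once, solve many times.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    def helmholtz(n, h, omega, c):
        lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
        L = sp.kron(sp.eye(n), lap) + sp.kron(lap, sp.eye(n))  # 2-D Laplacian
        A = L + sp.diags((omega / c.ravel()) ** 2)
        return A.astype(complex).tocsc()

    def fwi_gradient(omega, c, h, sources, d_obs, receivers):
        lu = splu(helmholtz(c.shape[0], h, omega, c))  # one factorization
        grad = np.zeros(c.size)
        for s, d in zip(sources, d_obs):
            u = lu.solve(s.astype(complex))            # forward wavefield
            adj_src = np.zeros(c.size, dtype=complex)
            adj_src[receivers] = np.conj(u[receivers] - d)
            v = lu.solve(adj_src)                  # back-propagated residuals
            # Placeholder zero-lag correlation kernel (conventions vary).
            grad += np.real(u * v) * (2.0 * omega**2 / c.ravel() ** 3)
        return grad
    ```

    Reusing `lu` across all shots and all adjoint solves is exactly why the direct-solver approach pays off for this many OBS gathers.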

  12. Crustal seismic structure beneath the Deccan Traps area (Gujarat, India), from local travel-time tomography

    NASA Astrophysics Data System (ADS)

    Prajapati, Srichand; Kukarina, Ekaterina; Mishra, Santosh

    2016-03-01

    The Gujarat region in western India is known for its intra-plate seismic activity, including the Mw 7.7 Bhuj earthquake, a reverse-faulting event that reactivated normal faults of the Mesozoic Kachchh rift zone. The Late Cretaceous Deccan Traps, one of the largest igneous provinces on Earth, cover the southern part of Gujarat. This study aims at shedding light on the crustal rift zone structure and the likely origin of the Traps based on the velocity structure of the crust beneath Gujarat. Tomographic inversion of the Gujarat region was performed using the non-linear, passive-source tomographic algorithm LOTOS. We use high-quality arrival times of 22,280 P and 22,040 S waves from 3555 events recorded from August 2006 to May 2011 at 83 permanent and temporary stations installed in Gujarat state by the Institute of Seismological Research (ISR). We conclude that the resulting high-velocity anomalies, which reach down to the Moho, are most likely related to intrusives associated with the Deccan Traps. Low-velocity anomalies are found in sediment-filled Mesozoic rift basins and are related to weakened zones of faults and fracturing. A low-velocity anomaly in the north of the region coincides with the seismogenic zone of the reactivated Kachchh rift system, which is apparently associated with the channel of the Deccan basalt outpouring.

  13. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain allow creating projection mass density (PMD) images per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photon noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods are used to decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
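
    The two fidelity terms compared in the paper can be written down directly. The sketch below assumes a generic linear operator `P` purely for illustration (in the paper the decomposition operator is non-linear and is minimized with a regularized Gauss-Newton scheme):

    ```python
    # WLS (Gaussian-noise) versus KL (Poisson-noise) data fidelity terms.
    import numpy as np

    def wls_cost(P, x, y, var):
        r = P @ x - y
        return 0.5 * np.sum(r ** 2 / var)      # weighted least squares

    def kl_cost(P, x, y, eps=1e-12):
        m = np.maximum(P @ x, eps)             # modeled photon counts
        y_safe = np.maximum(y, eps)
        # Kullback-Leibler divergence between measured and modeled counts.
        return np.sum(m - y + y * np.log(y_safe / m))
    ```

    For large counts the KL term behaves like a WLS term whose variance equals the counts, which is why the two agree at high dose and differ at low dose.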

  14. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.

  15. GPS water vapor project associated to the ESCOMPTE programme: description and first results of the field experiment

    NASA Astrophysics Data System (ADS)

    Bock, O.; Doerflinger, E.; Masson, F.; Walpersdorf, A.; Van-Baelen, J.; Tarniewicz, J.; Troller, M.; Somieski, A.; Geiger, A.; Bürki, B.

    A dense network of 17 dual-frequency GPS receivers was operated for two weeks during June 2001 within a 20 km × 20 km area around Marseille, France, as part of the ESCOMPTE field campaign ([Cros et al., 2004. The ESCOMPTE program: an overview. Atmos. Res. 69, 241-279]; http://medias.obs-mip.fr/escompte). The goal of this GPS experiment was to provide GPS data allowing for tomographic inversions and their validation within a well-documented observing period (the ESCOMPTE campaign). Simultaneous water vapor radiometer, solar spectrometer, Raman lidar and radiosonde data are used for comparison and validation. In this paper, we present the motivation and issues, and describe the GPS field experiment. First results of integrated water vapor retrievals from GPS and the other sensing techniques are presented. The strategies for GPS data processing and tomographic inversions are discussed.

  16. East Pacific Rise axial structure from a joint tomographic inversion of traveltimes picked on downward continued and standard shot gathers collected by 3D MCS surveying

    NASA Astrophysics Data System (ADS)

    Newman, Kori; Nedimović, Mladen; Delescluse, Matthias; Menke, William; Canales, J. Pablo; Carbotte, Suzanne; Carton, Helene; Mutter, John

    2010-05-01

    We present traveltime tomographic models along closely spaced (~250 m), strike-parallel profiles that flank the axis of the East Pacific Rise at 9°41' - 9°57' N. The data were collected during a 3D (multi-streamer) multichannel seismic (MCS) survey of the ridge. Four 6-km long hydrophone streamers were towed by the ship along three along-axis sail lines, yielding twelve possible profiles over which to compute tomographic models. Based on the relative location between source-receiver midpoints and targeted subsurface structures, we have chosen to compute models for four of those lines. MCS data provide for a high density of seismic ray paths with which to constrain the model. Potentially, travel times for ~250,000 source-receiver pairs can be picked over the 30 km length of each model. However, such data density does not enhance the model resolution, so, for computational efficiency, the data are decimated so that ~15,000 picks per profile are used. Downward continuation of the shot gathers simulates an experimental geometry in which the sources and receivers are positioned just above the sea floor. This allows the shallowest sampling refracted arrivals to be picked and incorporated into the inversion whereas they would otherwise not be usable with traditional first-arrival travel-time tomographic techniques. Some of the far-offset deep-penetrating 2B refractions cannot be picked on the downward continued gathers due to signal processing artifacts. For this reason, we run a joint inversion by also including 2B traveltime picks from standard shot gathers. Uppermost velocity structure (seismic layer 2A thickness and velocity) is primarily constrained from 1D inversion of the nearest offset (<500 m) source-receiver travel-time picks for each downward continued shot gather. Deeper velocities are then computed in a joint 2D inversion that uses all picks from standard and downward continued shot gathers and incorporates the 1D results into the starting model. The resulting velocity models extend ~1 km into the crust. Preliminary results show thicker layer 2A and faster layer 2A velocities at fourth order ridge segment boundaries. Additionally, layer 2A thickens north of 9° 52' N, which is consistent with earlier investigations of this ridge segment. Slower layer 2B velocities are resolved in the vicinity of documented hydrothermal vent fields. We anticipate that additional analyses of the results will yield further insight into fine scale variations in near-axis mid-ocean ridge structure.

  17. Using artificial neural networks (ANN) for open-loop tomography

    NASA Astrophysics Data System (ADS)

    Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus

    2011-09-01

    The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares-type matrix multiplication method (MVM) in simulation and find that its tomographic error is similar to that of the MVM method. In changing conditions the tomographic error increases for the MVM but remains constant with the ANN model, and no large matrix inversions are required.
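
    A toy version of this comparison fits in a few lines; everything below (the random linear "atmosphere", layer sizes, network width) is an illustrative assumption rather than the authors' simulation setup.

    ```python
    # ANN versus least-squares (MVM) tomographic reconstructor on toy data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    M = rng.standard_normal((40, 60))           # hidden slopes->phase mapping
    X = rng.standard_normal((5000, 60))         # off-axis WFS measurements
    Y = X @ M.T + 0.05 * rng.standard_normal((5000, 40))  # noisy target phase

    mvm = np.linalg.lstsq(X, Y, rcond=None)[0]  # least-squares reconstructor
    ann = MLPRegressor(hidden_layer_sizes=(80,), max_iter=500).fit(X, Y)

    X_test = rng.standard_normal((200, 60))
    truth = X_test @ M.T
    print("MVM error:", np.mean((X_test @ mvm - truth) ** 2))
    print("ANN error:", np.mean((ann.predict(X_test) - truth) ** 2))
    ```

    The abstract's point about changing conditions corresponds to training the ANN on a range of turbulence profiles so that, unlike a single MVM matrix, it does not have to be recomputed when the profile changes.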

  18. Three-Dimensional Characterization of Buried Metallic Targets via a Tomographic Algorithm Applied to GPR Synthetic Data

    NASA Astrophysics Data System (ADS)

    Comite, Davide; Galli, Alessandro; Catapano, Ilaria; Soldovieri, Francesco; Pettinelli, Elena

    2013-04-01

    This work is focused on the three-dimensional (3-D) imaging of buried metallic targets achievable by processing GPR (ground-penetrating radar) simulation data via a tomographic inversion algorithm. The direct scattering problem has been analysed by means of a recently developed numerical setup based on an electromagnetic time-domain CAD tool (CST Microwave Studio), which enables us to efficiently explore different GPR scenarios of interest [1]. The investigated 3-D domain here comprises two media, representing, e.g., an air/soil environment in which variously shaped metallic (PEC) scatterers can be buried. The GPR system is simulated with Tx/Rx antennas placed in a bistatic configuration at the soil interface. In the implementation, the characteristics of the antennas may suitably be chosen in terms of topology, offset, radiative features, frequency ranges, etc. Arbitrary time-domain waveforms can be used as the input GPR signal (e.g., a Gaussian-like pulse having its frequency spectrum in the microwave range). The signal gathered at the output port includes the wave backscattered from the objects to be reconstructed, and the relevant data may be displayed in canonical radargram form [1]. The GPR system sweeps along one main rectilinear direction, and the scanning process is repeated along closely spaced parallel lines to acquire data for a full 3-D analysis. Starting from the processing of the synthetic GPR data, a microwave tomographic approach is used to tackle the imaging, based on the Kirchhoff approximation to linearize the inverse scattering problem [2]. The target reconstruction is given in terms of the amplitude of the 'object function' (normalized with respect to its maximum inside the 3-D investigation domain). The scattered-field data are collected considering a multi-frequency stepping process inside the fixed range of the signal spectrum, under a multi-bistatic configuration where the Tx and Rx antennas are separated by an offset distance and move at the interface over rectilinear observation domains. Analyses have been performed for some canonical scatterer shapes (e.g., sphere and cylinder, cube and parallelepiped, cone and wedge) in order to specifically highlight the influence of all three dimensions (length, depth, and width) on the reconstruction of the targets. The roles of both the size and the location of the objects are also addressed in terms of the probing signal wavelengths and of the antenna offset. The results show to what extent it is possible to achieve a correct spatial localization of the targets, in conjunction with a generally satisfactory prediction of their 3-D size and shape. It should anyway be noted that the tomographic reconstructions here manage challenging cases of non-penetrable objects with data gathered under a reflection configuration; hence most of the achievable information is expected to relate to the upper, illuminated parts of the reflectors that give rise to the main scattering effects. The limits in the identification of fine geometrical details are discussed further in connection with the critical aspects of GPR operation, which include the adopted detection configuration and the frequency spectrum of the employed signals. [1] G. Valerio, A. Galli, P. M. Barone, S. E. Lauro, E. Mattei, and E. Pettinelli, "GPR detectability of rocks in a Martian-like shallow subsoil: a numerical approach," Planet. Space Sci., Vol. 62, pp. 31-40, 2012. [2] R. Solimene, A. Buonanno, F. Soldovieri, and R. Pierri, "Physical optics imaging of 3D PEC objects: vector and multipolarized approaches," IEEE Trans. Geosci. Remote Sens., Vol. 48, pp. 1799-1808, Apr. 2010.

  19. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
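
    The bias/variance bookkeeping used in the study reduces to the standard decomposition MSE = bias^2 + variance, evaluated pixelwise over the ensemble of repeated reconstructions. A minimal sketch follows (array shapes are assumptions):

    ```python
    # Pixelwise bias/variance/MSE over repeated noisy reconstructions.
    import numpy as np

    def image_error_stats(recons, truth):
        """recons: (n_repeats, n_pixels) reconstructions of the same phantom."""
        mean_img = recons.mean(axis=0)
        bias2 = (mean_img - truth) ** 2      # squared bias per pixel
        var = recons.var(axis=0)             # variance per pixel
        mse = bias2 + var                    # the decomposition used above
        return bias2.mean(), var.mean(), mse.mean()
    ```

    Sweeping the regularization parameter and plotting these three means reproduces the trade-off described in the abstract: bias dominates when the parameter is large, variance dominates as it shrinks.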

  20. Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.

    PubMed

    Frey, Othmar; Morsdorf, Felix; Meier, Erich

    2008-09-24

    In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse responses of simulated point targets and of an in-scene corner reflector. In particular, several tomographic slices of a volume representing a forested area are presented. The respective P-band tomographic data set, consisting of eleven flight tracks, was acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).
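
    The essence of the proposed focusing can be sketched per voxel: every pulse of every track contributes its range-compressed sample at the exact sensor-to-voxel delay, with the matching phase term, so irregular track spacing enters only through the true positions. The sketch below is schematic (toy constants; no motion compensation, windowing or basebanding details):

    ```python
    # Time-domain back-projection contribution of one track to one voxel.
    import numpy as np

    C = 3e8  # propagation speed, m/s

    def tdbp_voxel(track_pos, track_data, t_axis, voxel, wavelength):
        """track_pos: (n_pulses, 3) sensor positions; track_data:
        (n_pulses, n_fast) complex range-compressed pulses; t_axis: fast time."""
        R = np.linalg.norm(track_pos - voxel[None, :], axis=1)  # slant ranges
        tau = 2.0 * R / C                                       # two-way delays
        acc = 0.0 + 0.0j
        for k in range(track_pos.shape[0]):
            samp = (np.interp(tau[k], t_axis, track_data[k].real)
                    + 1j * np.interp(tau[k], t_axis, track_data[k].imag))
            acc += samp * np.exp(4j * np.pi * R[k] / wavelength)  # phase match
        return acc
    ```

    Summing this quantity over the eleven tracks, for every voxel of a vertical plane, yields a tomographic slice; no resampling of the irregular aperture is ever needed, which is the method's advantage over Fourier-based focusing.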

  1. Time-dependent seismic tomography

    USGS Publications Warehouse

    Julian, B.R.; Foulger, G.R.

    2010-01-01

    Of methods for measuring temporal changes in seismic-wave speeds in the Earth, seismic tomography is among those that offer the highest spatial resolution. 3-D tomographic methods are commonly applied in this context by inverting seismic wave arrival time data sets from different epochs independently and assuming that differences in the derived structures represent real temporal variations. This assumption is dangerous because the results of independent inversions would differ even if the structure in the Earth did not change, due to observational errors and differences in the seismic ray distributions. The latter effect may be especially severe when data sets include earthquake swarms or aftershock sequences, and may produce the appearance of correlation between structural changes and seismicity when the wave speeds are actually temporally invariant. A better approach, which makes it possible to assess what changes are truly required by the data, is to invert multiple data sets simultaneously, minimizing the difference between models for different epochs as well as the rms arrival-time residuals. This problem leads, in the case of two epochs, to a system of normal equations whose order is twice as great as for a single epoch. The direct solution of this system would require twice as much memory and four times as much computational effort as would independent inversions. We present an algorithm, tomo4d, that takes advantage of the structure and sparseness of the system to obtain the solution with essentially no more effort than independent inversions require.
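
    The structure being exploited is easy to display. For two epochs, minimizing ||A1 m1 - d1||^2 + ||A2 m2 - d2||^2 + mu ||m1 - m2||^2 gives a 2x2 block system that couples the epochs only through the damping term; the sketch below (toy operators, not tomo4d itself) shows why a sparse solver sees little extra work:

    ```python
    # Two-epoch simultaneous inversion: coupled block normal equations.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def two_epoch_solve(A1, d1, A2, d2, mu):
        """A1, A2: sparse ray-path matrices for the two epochs."""
        n = A1.shape[1]
        I = mu * sp.eye(n)
        K = sp.bmat([[A1.T @ A1 + I, -I],
                     [-I, A2.T @ A2 + I]]).tocsc()   # order 2n, still sparse
        rhs = np.concatenate([A1.T @ d1, A2.T @ d2])
        m = spsolve(K, rhs)
        return m[:n], m[n:]                          # model for each epoch
    ```

    The off-diagonal blocks are just -mu*I, so the fill-in beyond two independent inversions is minimal, which is the observation behind the claimed "essentially no more effort".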

  2. Limited data tomographic image reconstruction via dual formulation of total variation minimization

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong

    2011-03-01

    X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function is composed of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include the TV regularization in the statistical reconstruction method, which results in fast and robust estimation for low-dose projections over a limited angle range. Initial tests with an experimental DBT system confirmed our findings.
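
    A simplified illustration of TV-regularized reconstruction is given below. It replaces the paper's statistical fidelity and exact dual TV formulation with a least-squares term and a smoothed TV gradient, keeping only the overall gradient-descent structure; `A` and `AT` are user-supplied projector/back-projector callables (assumptions).

    ```python
    # Gradient descent on 0.5*||A x - b||^2 + lam * TV_smooth(x).
    import numpy as np

    def tv_grad(x, eps=1e-8):
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag                  # smoothed TV subgradient
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        return -div                                  # gradient of the TV term

    def reconstruct(A, AT, b, shape, lam=0.05, step=1e-3, n_iter=200):
        x = np.zeros(shape)
        for _ in range(n_iter):
            grad = AT(A(x) - b).reshape(shape) + lam * tv_grad(x)
            x = np.maximum(x - step * grad, 0.0)     # non-negativity constraint
        return x
    ```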

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raymund, T.D.

    Recently, several tomographic techniques for ionospheric electron density imaging have been proposed. These techniques reconstruct a vertical slice image of electron density using total electron content data. The data are measured between a low-orbit beacon satellite and fixed receivers located along the projected orbital path of the satellite. By using such tomographic techniques, it may be possible to inexpensively (relative to incoherent scatter techniques) image the ionospheric electron density in a vertical plane several times per day. The satellite and receiver geometry used to measure the total electron content data causes the data to be incomplete; that is, the measured data do not contain enough information to completely specify the ionospheric electron density distribution in the region between the satellite and the receivers. A new algorithm is proposed which allows the incorporation of other complementary measurements, such as those from ionosondes, and also includes ways to include a priori information about the unknown electron density distribution in the reconstruction process. The algorithm makes use of two-dimensional basis functions. Illustrative application of this algorithm is made to simulated cases with good results. The technique is also applied to real total electron content (TEC) records collected in Scandinavia in conjunction with the EISCAT incoherent scatter radar. The tomographic reconstructions are compared with the incoherent scatter electron density images of the same region of the ionosphere.

  4. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse-matrix reduction technique is proposed, using thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with a block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse-matrix reduction on the reconstruction results.
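
    Both ingredients, thresholding the Jacobian into sparse form and a matrix-free CG-type solve of the normal equations, can be sketched briefly (illustrative tolerances and iteration counts):

    ```python
    # Sparse-Jacobian CGLS for the linearized EIT update J dx ~= dv.
    import numpy as np
    import scipy.sparse as sp

    def sparsify(J, rel_tol=1e-3):
        Js = J.copy()
        Js[np.abs(Js) < rel_tol * np.abs(J).max()] = 0.0  # drop tiny entries
        return sp.csr_matrix(Js)                          # big memory saving

    def cgls(J, dv, n_iter=50):
        x = np.zeros(J.shape[1])
        r = dv - J @ x
        s = J.T @ r
        p, gamma = s.copy(), s @ s
        for _ in range(n_iter):
            q = J @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = J.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x
    ```

    A block-wise parallel variant would partition the products `J @ p` and `J.T @ r` into row blocks computed on separate workers, which is the spirit of the paper's approach.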

  5. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  6. Sorting signed permutations by inversions in O(nlogn) time.

    PubMed

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating in an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions, a task also known as sorting by inversions. What remained open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question by providing two new sorting algorithms: a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
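
    To make the operation concrete, here is a deliberately naive breadth-first search that computes the inversion distance of a signed permutation by trying all reversals; it is exponential and only usable for tiny n, in contrast to the linear-time distance computation cited above.

    ```python
    # Brute-force inversion (reversal) distance for signed permutations.
    from collections import deque

    def reversal(p, i, j):
        """Reverse the segment p[i..j] and flip its signs."""
        return p[:i] + tuple(-x for x in reversed(p[i:j + 1])) + p[j + 1:]

    def inversion_distance(p):
        target = tuple(range(1, len(p) + 1))
        seen, frontier = {p}, deque([(p, 0)])
        while frontier:
            cur, d = frontier.popleft()
            if cur == target:
                return d
            for i in range(len(cur)):
                for j in range(i, len(cur)):
                    nxt = reversal(cur, i, j)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, d + 1))

    print(inversion_distance((3, -1, 2)))   # shortest number of reversals
    ```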

  7. Proxies of oceanic Lithosphere/Asthenosphere Boundary from Global Seismic Anisotropy Tomography

    NASA Astrophysics Data System (ADS)

    Burgos, Gael; Montagner, Jean-Paul; Beucler, Eric; Trampert, Jeannot; Capdeville, Yann

    2013-04-01

    Surface waves provide essential information on the upper mantle's global structure despite their low lateral resolution. This study, based on surface wave data, presents the development of a new anisotropic tomographic model of the upper mantle, a simplified isotropic model, and the consequences of these results for the Lithosphere/Asthenosphere Boundary (LAB). As a first step, a large set of data is collected, merged and regionalized in order to derive maps of phase and group velocity for the fundamental mode of Rayleigh and Love waves and their azimuthal dependence (maps of phase velocity are also obtained for the first six overtones). As a second step, a crustal a posteriori model is developed from the Monte-Carlo inversion of the shorter periods of the dataset, in order to take into account the effect of the shallow layers on the upper mantle. With the crustal model, a first Monte-Carlo inversion for the upper mantle structure is performed with a simplified isotropic parameterization to highlight the influence of the LAB properties on the surface wave data. Still using the crustal model, a first-order perturbation-theory inversion is then performed with a fully anisotropic parameterization to build a 3-D tomographic model of the upper mantle (an extended model down to the transition zone is also obtained by using the overtone data). Estimates of the LAB depth are derived from the upper mantle models and compared with the predictions of oceanic lithosphere cooling models. Seismic events are simulated using the Spectral Element Method in order to validate the ability of the anisotropic tomographic model of the upper mantle to reproduce observed seismograms.

  8. GPS water vapour tomography: preliminary results from the ESCOMPTE field experiment

    NASA Astrophysics Data System (ADS)

    Champollion, C.; Masson, F.; Bouin, M.-N.; Walpersdorf, A.; Doerflinger, E.; Bock, O.; Van Baelen, J.

    2005-03-01

    Water vapour plays a major role in atmospheric processes but remains difficult to quantify due to its high variability in time and space and the sparse set of available measurements. GPS has proved its capacity to measure the integrated water vapour at zenith with the same accuracy as other methods. Recent studies show that it is also possible to quantify the integrated water vapour along the line of sight to a GPS satellite. These observations can be used to study the 3D heterogeneity of the troposphere using tomographic techniques. We have developed three-dimensional tomographic software to model the three-dimensional distribution of tropospheric water vapour from GPS data. First, the tomographic software is validated by simulations based on the realistic ESCOMPTE GPS network configuration. Without a priori information, the absolute value of water vapour is less well resolved than relative horizontal variations. During the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers was operated for 2 weeks within a 20×20-km area around Marseille (southern France). The network extends from sea level to the top of the Etoile chain (~700 m high). Optimal results have been obtained with time windows of 30-min intervals and input data evaluation every 15 min. The optimal grid for the ESCOMPTE geometrical configuration has a horizontal step size of 0.05°×0.05° and a 500 m vertical step size. Second, we have compared the results of real data inversions with independent observations. Three inversions have been compared to three successive radiosonde launches and shown to be consistent. A good resolution compared to the a priori information is obtained up to heights of 3000 m. A humidity spike at 4000-m altitude remains unresolved. The reason is probably that the signal is spread homogeneously over the whole network, so that such a feature is not resolvable by tomographic techniques. The results of our pure GPS inversion show a correlation with meteorological phenomena. Our measurements could be related to the land-sea breeze. Undoubtedly, tomography has interesting potential for water vapour cycle studies at small temporal and spatial scales.

  9. Real-time digital filtering, event triggering, and tomographic reconstruction of JET soft x-ray data (abstract)

    NASA Astrophysics Data System (ADS)

    Edwards, A. W.; Blackler, K.; Gill, R. D.; van der Goot, E.; Holm, J.

    1990-10-01

    Based upon the experience gained with the present soft x-ray data acquisition system, new techniques are being developed which make extensive use of digital signal processors (DSPs). Digital filters make 13 further frequencies available in real time from the input sampling frequency of 200 kHz. In parallel, various algorithms running on further DSPs generate triggers in response to a range of events in the plasma. The sawtooth crash can be detected, for example, with a delay of only 50 μs from the onset of the collapse. The trigger processor interacts with the digital filter boards to ensure that data of the appropriate frequency are recorded throughout a plasma discharge. An independent link is used to pass the 780 Hz and 24 Hz filtered data to a network of transputers. A full tomographic inversion and display of the 24 Hz data is carried out in real time using this 15-transputer array. The 780 Hz data are stored for immediate detailed playback following the pulse. Such a system could considerably improve the quality of present plasma diagnostic data, which are, in general, sampled at one fixed frequency throughout a discharge. Further, it should provide valuable information for designing diagnostic data acquisition systems for future long-pulse machines, where a high degree of real-time processing will be required, while retaining the ability to detect, record, and analyze events of interest within such long plasma discharges.

  10. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    PubMed Central

    Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael

    2010-01-01

    Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of field of compressive sensing (CS), summarizing this new area of research from the eyes of practical medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact on CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. PMID:20376330

  12. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for a few selected wavenumbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as an indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
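
    A random-probing estimate is straightforward to prototype once the Hessian is available as a matrix-vector product. The Python sketch below is illustrative only: a small dense stand-in replaces the wave-equation Hessian (which in practice is matrix-free and expensive to apply), and the Hutchinson-style diagonal estimate serves as one simple resolution proxy in the spirit of auto-correlating Hessian-model applications.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      A = rng.standard_normal((n, n))
      H = A.T @ A / n                       # synthetic SPD stand-in "Hessian"

      def apply_hessian(m):
          # matrix-free interface: in FWI this would run wavefield simulations
          return H @ m

      k = 500
      diag_est = np.zeros(n)
      for _ in range(k):
          z = rng.choice([-1.0, 1.0], size=n)   # uncorrelated random test model
          diag_est += z * apply_hessian(z)      # correlate H z with z itself
      diag_est /= k
      print(np.max(np.abs(diag_est - np.diag(H))))   # estimator quality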

  13. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver function and surface wave data is an effective way to diminish the influence of the strong trade-off among parameters and of the different sensitivities to the model parameters in the respective individual inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversion in model selection and optimization. If several conflicting objectives are involved, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that yields only a few optimal solutions cannot deal properly with the strong trade-off between parameters, the uncertainties in the observations, the geophysical complexities and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inversion procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver function.

  14. Tomographic diffractive microscopy with a wavefront sensor.

    PubMed

    Ruan, Y; Bon, P; Mudry, E; Maire, G; Chaumet, P C; Giovannini, H; Belkebir, K; Talneau, A; Wattellier, B; Monneret, S; Sentenac, A

    2012-05-15

    Tomographic diffractive microscopy is a recent imaging technique that reconstructs quantitatively the three-dimensional permittivity map of a sample with a resolution better than that of conventional wide-field microscopy. Its main drawbacks lie in the complexity of the setup and in the slowness of the image recording as both the amplitude and the phase of the field scattered by the sample need to be measured for hundreds of successive illumination angles. In this Letter, we show that, using a wavefront sensor, tomographic diffractive microscopy can be implemented easily on a conventional microscope. Moreover, the number of illuminations can be dramatically decreased if a constrained reconstruction algorithm is used to recover the sample map of permittivity.

  15. Tomographic and analog 3-D simulations using NORA. [Non-Overlapping Redundant Image Array formed by multiple pinholes]

    NASA Technical Reports Server (NTRS)

    Yin, L. I.; Trombka, J. I.; Bielefeld, M. J.; Seltzer, S. M.

    1984-01-01

    The results of two computer simulations demonstrate the feasibility of using the nonoverlapping redundant array (NORA) to form three-dimensional images of objects with X-rays. Pinholes admit the X-rays to nonoverlapping points on a detector. The object is reconstructed in the analog mode by optical correlation and in the digital mode by tomographic computations. Trials were run with a stick-figure pyramid and extended objects with out-of-focus backgrounds. Substitution of spherical optical lenses for the pinholes increased the light transmission sufficiently that objects could be easily viewed in a dark room. Out-of-focus aberrations in tomographic reconstruction could be eliminated using Chang's (1976) algorithm.

  16. Clustering and interpretation of local earthquake tomography models in the southern Dead Sea basin

    NASA Astrophysics Data System (ADS)

    Bauer, Klaus; Braeuer, Benjamin

    2016-04-01

    The Dead Sea transform (DST) marks the boundary between the Arabian and the African plates. Ongoing left-lateral relative plate motion and strike-slip deformation started in the Early Miocene (20 Ma) and have produced a total shift of 107 km until the present. The Dead Sea basin (DSB), located in the central part of the DST, is one of the largest pull-apart basins in the world. It was formed from the step-over of different fault strands at a major segment boundary of the transform fault system. The basin development was accompanied by deposition of clastics and evaporites and subsequent salt diapirism. Ongoing deformation within the basin and activity of the boundary faults are indicated by increased seismicity. The internal architecture of the DSB and the crustal structure around the DST have been the subject of several large scientific projects carried out since 2000. Here we report on a local earthquake tomography study from the southern DSB. In 2006-2008, a dense seismic network consisting of 65 stations was operated for 18 months in the southern part of the DSB and surrounding regions. Altogether 530 well-constrained seismic events with 13,970 P- and 12,760 S-wave arrival times were used in a travel time inversion for Vp and Vp/Vs velocity structure and seismicity distribution. The workflow included 1D inversion, 2.5D and 3D tomography, and resolution analysis. We demonstrate a possible strategy for integrating several tomographic models, such as Vp, Vs and Vp/Vs, into a combined lithological interpretation. We analyzed the tomographic models derived by 2.5D inversion using neural network clustering techniques. The method allows us to identify major lithologies by their petrophysical signatures. Remapping the clusters into the subsurface reveals the distribution of basin sediments, prebasin sedimentary rocks, and crystalline basement. The DSB shows an asymmetric structure with thickness variation from 5 km in the west to 13 km in the east. Most importantly, a well-defined body under the eastern part of the basin down to 18 km depth was identified by the algorithm. Considering its geometry and petrophysical signature, this unit is interpreted as prebasin sediments and not as crystalline basement. The seismicity distribution supports our results, with events concentrated along the boundaries of the basin and the deep prebasin sedimentary body.
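
    The clustering step can be illustrated compactly. The study used neural network clustering; the sketch below substitutes plain k-means on synthetic (Vp, Vp/Vs) samples, purely to show how petrophysical signatures separate into lithological classes (all numbers are invented stand-ins).

      import numpy as np

      rng = np.random.default_rng(9)
      # two synthetic lithologies in (Vp [km/s], Vp/Vs) space
      sediments = rng.normal([4.0, 1.9], 0.1, size=(200, 2))
      basement = rng.normal([6.2, 1.7], 0.1, size=(200, 2))
      X = np.vstack([sediments, basement])

      k = 2
      centers = X[rng.choice(len(X), k, replace=False)]
      for _ in range(50):
          # assign each sample to its nearest cluster center
          labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
          # move each center to the mean of its members
          centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
      print(centers)        # petrophysical signatures of the two clusters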

  17. Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.

    2008-12-01

    To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber pseudo-bending approach, with full enforcement of Snell's law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately, the amount of damping is not known a priori and can significantly extend the number of calls to the computationally expensive ray tracer and the least squares matrix solver. If the damping term is too small, the solution step size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multivariate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest-descent method, but it transitions to Gauss-Newton behavior, with near-quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
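
    The damping behaviour described above can be seen in a few lines. This is a minimal Levenberg-Marquardt sketch on a toy exponential-fit problem, not the authors' tomographic solver: the damping lam is halved when a step reduces the misfit and doubled when it does not, so the iteration slides between steepest-descent and Gauss-Newton behaviour.

      import numpy as np

      def residual(p, t, y):
          return y - p[0] * np.exp(p[1] * t)

      def jacobian(p, t):
          J = np.empty((t.size, 2))
          J[:, 0] = -np.exp(p[1] * t)
          J[:, 1] = -p[0] * t * np.exp(p[1] * t)
          return J

      rng = np.random.default_rng(1)
      t = np.linspace(0, 1, 50)
      y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)

      p, lam = np.array([1.0, 0.0]), 1e-2
      for _ in range(100):
          r, J = residual(p, t, y), jacobian(p, t)
          # damped normal equations: (J^T J + lam I) dp = -J^T r
          step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
          if np.sum(residual(p + step, t, y) ** 2) < np.sum(r ** 2):
              p, lam = p + step, lam * 0.5    # accept: more Gauss-Newton-like
          else:
              lam *= 2.0                      # reject: more steepest-descent-like
      print(p)                                # approaches (2.0, -1.5)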

  18. Iterative Reconstruction of Volumetric Particle Distribution for 3D Velocimetry

    NASA Astrophysics Data System (ADS)

    Wieneke, Bernhard; Neal, Douglas

    2011-11-01

    A number of different volumetric flow measurement techniques exist for following the motion of illuminated particles. For experiments with lower seeding densities, 3D-PTV uses images recorded from typically 3-4 cameras and tracks the individual particles in space and time. For flows with a higher seeding density, tomographic PIV uses a tomographic reconstruction algorithm (e.g. MART) to reconstruct voxel intensities of the recorded volume, followed by cross-correlation of subvolumes to provide the instantaneous 3D vector fields on a regular grid. A new hybrid algorithm is presented which iteratively reconstructs the 3D particle distribution directly, using particles with certain imaging properties instead of voxels as base functions. It is shown with synthetic data that this method is capable of reconstructing densely seeded flows up to 0.05 particles per pixel (ppp) with the same or higher accuracy than 3D-PTV and tomographic PIV. Finally, this new method is validated using experimental data on a turbulent jet.
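
    The MART scheme mentioned above can be sketched on a toy system. Here W holds the weights of three lines of sight through three voxels; the names and numbers are illustrative, not a tomo-PIV implementation. The multiplicative update keeps the reconstruction non-negative by construction.

      import numpy as np

      # toy system W f = p: rows are lines of sight, columns are voxels
      W = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0]])
      f_true = np.array([2.0, 1.0, 3.0])
      p = W @ f_true                       # "recorded" projections

      f = np.ones(3)                       # positive initial guess (required)
      mu = 1.0                             # relaxation exponent
      for _ in range(200):
          for i in range(W.shape[0]):
              wi = W[i]
              ratio = p[i] / (wi @ f)      # measured / modelled projection
              # multiplicative correction, weighted by each voxel's contribution
              f *= ratio ** (mu * wi / max(wi.max(), 1e-12))
      print(f)                             # approaches f_true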

  19. Imaging sensor constellation for tomographic chemical cloud mapping.

    PubMed

    Cosofret, Bogdan R; Konno, Daisei; Faghfouri, Aram; Kindle, Harry S; Gittins, Christopher M; Finson, Michael L; Janov, Tracy E; Levreault, Mark J; Miyashiro, Rex K; Marinelli, William J

    2009-04-01

    A sensor constellation capable of determining the location and detailed concentration distribution of chemical warfare agent simulant clouds has been developed and demonstrated on government test ranges. The constellation is based on the use of standoff passive multispectral infrared imaging sensors to make column density measurements through the chemical cloud from two or more locations around its periphery. A computed tomography inversion method is employed to produce a 3D concentration profile of the cloud from the 2D line density measurements. We discuss the theoretical basis of the approach and present results of recent field experiments where controlled releases of chemical warfare agent simulants were simultaneously viewed by three chemical imaging sensors. Systematic investigations of the algorithm using synthetic data indicate that for complex functions, 3D reconstruction errors are less than 20% even in the case of a limited three-sensor measurement network. Field data results demonstrate the capability of the constellation to determine 3D concentration profiles that account for ~86% of the total known mass of material released.

  20. Body wave tomography of Iranian Plateau

    NASA Astrophysics Data System (ADS)

    Alinaghi, A.; Koulakov, I.; Thybo, H.

    2004-12-01

    The inverse teleseismic tomography approach has been adopted to study the P and S velocity structure of the crust and upper mantle across the Iranian Plateau. The method uses phase readings from earthquakes in a study area as reported by stations at teleseismic and regional distances to compute the velocity anomalies in the area. This use of source-receiver reciprocity allows tomographic studies of regions with a sparse distribution of seismic stations, provided the region has sufficient seismicity. The input data for the algorithm are the arrival times of events located in Iran, taken from the ISC catalogue (1964-1996). All the sources were relocated using a 1D spherical Earth model taking into account variable Moho depth and topography. The inversion provides relocation of events, which is done simultaneously with the calculation of velocity perturbations. With a series of synthetic tests we demonstrate the power of the algorithm to resolve both idealized and realistic anomalies using the available earthquake sources and introducing measurement errors and outliers. The velocity anomalies show that the crust and upper mantle below the Iranian Plateau comprise a low velocity domain between the Arabian Plate and the Caspian Block, in agreement with models of the active Iranian plate trapped between the stable Turan plate in the north and the Arabian shield in the south. Our results show clear evidence of subduction at Makran in the southeastern corner of Iran, where the oceanic crust of the Oman Sea subducts underneath the Iranian Plateau, a movement which is mainly aseismic. On the other hand, the subduction and collision of the two plates along the Zagros suture zone is highly seismic and appears less consistent in our images than the Makran region.

  1. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    NASA Astrophysics Data System (ADS)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to the standard MART, with the benefit of reduced computational time.

  2. Novel edge treatment method for improving the transmission reconstruction quality in Tomographic Gamma Scanning.

    PubMed

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2018-05-01

    Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional treatment method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two different edge voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional method for edge voxel treatment can introduce significant error and that the real irregular edge voxel treatment method can improve the performance of TGS by producing better transmission reconstruction images. With the real irregular edge voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices.
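
    The two iterative schemes compared in the study can be prototyped side by side. The sketch below uses a synthetic toy system, not TGS geometry: MLEM applies a multiplicative, inherently non-negative update, while ART (in its Kaczmarz row-action form) needs an explicit non-negativity clamp.

      import numpy as np

      A = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0],
                    [1.0, 1.0, 1.0]])
      x_true = np.array([4.0, 2.0, 6.0])
      p = A @ x_true                           # noise-free "measurements"

      # --- MLEM: multiplicative update, non-negative by construction ---
      x = np.ones(3)
      sens = A.sum(axis=0)                     # sensitivity image A^T 1
      for _ in range(200):
          x *= (A.T @ (p / (A @ x))) / sens

      # --- ART (Kaczmarz) with a non-negativity constraint ---
      y = np.zeros(3)
      for _ in range(200):
          for i in range(A.shape[0]):
              ai = A[i]
              y += (p[i] - ai @ y) / (ai @ ai) * ai
              y = np.maximum(y, 0.0)           # enforce non-negativity
      print(x, y)                              # both approach x_true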

  3. FIRST HIGH RESOLUTION 3D VELOCITY STRUCTURE OF THE VOLCANIC TENERIFE ISLAND (CANARY ISLANDS, SPAIN)

    NASA Astrophysics Data System (ADS)

    García-Yeguas, A.; Ibáñez, J.; Koulakov, I.; Sallares, V.

    2009-12-01

    A detailed 3D velocity model of Tenerife Island has been obtained for the first time using high resolution traveltime seismic tomography. Tenerife is a volcanic island (Canary Islands, Spain) located in the Atlantic Ocean. The island hosts the Teide stratovolcano (3718 m high), part of the Cañadas-Teide-Pico Viejo volcanic complex. Las Cañadas is a caldera system more than 20 kilometers wide where at least four distinct caldera processes have been identified. In January 2007, an active seismic experiment was carried out as part of the TOM-TEIDEVS project. 6850 air gun shots were fired at sea and recorded on a dense local seismic land network consisting of 150 independent three-component seismic stations. The good quality of the recorded data allowed P-wave arrivals to be identified at offsets of up to 30-40 km, providing more than 63,000 traveltimes for the tomographic inversion. Two different codes, FAST and ATOM_3D, were used in the tomographic inversion to validate the final 3D velocity models. The main difference between them lies in the ray tracing methods used in the forward modeling: finite differences and ray bending algorithms, respectively. The velocity models show a very heterogeneous upper crust, as is usual in similar volcanic environments. The tomographic images indicate the absence of a magma chamber near the surface. The borders of the ancient Las Cañadas caldera are clearly imaged, featuring relatively high seismic velocity. Several resolution and accuracy tests were carried out to quantify the reliability of the final velocity models. Checkerboard tests show that the well-resolved regions extend to 6-8 km depth. We also carried out synthetic tests in which we successfully reproduced single anomalies observed in the velocity models. The uncertainties associated with the inverse problem were studied by means of a Monte Carlo-type analysis, inverting N random velocity models with random errors in velocity and traveltimes, assuming the equiprobability of all of them. These tests support the uniqueness of this first 3D velocity model characterizing the internal structure of Tenerife Island. As main conclusions of our work we remark: a) this is the first 3-D velocity image of the area; b) we have observed low velocity anomalies near the surface that could be associated with the presence of magma, water reservoirs and volcanic landslides; c) high velocity anomalies could be related to ancient volcanic episodes or basement structures; d) our results could help to resolve many questions related to the evolution of the volcanic system, such as the presence or absence of large landslides, caldera-forming explosions and others; e) this image is a very important tool to improve the knowledge of the volcanic hazard, and therefore volcanic risk.

  4. Maximum likelihood bolometric tomography for the determination of the uncertainties in the radiation emission on JET TOKAMAK

    NASA Astrophysics Data System (ADS)

    Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors

    2018-05-01

    The total emission of radiation is a crucial quantity for calculating the power balances and for understanding the physics of any tokamak. Bolometric systems are the main tool to measure this important physical quantity, through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty of routinely determining the confidence intervals in the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.

  5. A Methodology to Separate and Analyze a Seismic Wide-Angle Profile

    NASA Astrophysics Data System (ADS)

    Weinzierl, Wolfgang; Kopp, Heidrun

    2010-05-01

    General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach to defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone demonstrates the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis, which proves capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential to any interpretation. From this study we draw several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile, we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem, and the subsequent resolution of structural heterogeneities through a divergence analysis, is a new and simple way of defining the a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of velocity-depth distributions. Certainly, the decision of where to extract velocity information along the profile for setting up a Monte Carlo ensemble limits the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity-depth distributions. Using the wide-angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract key features and properties from the inversion results by including information concerning the confidence level of the results.
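
    The Monte Carlo idea of sampling an a priori space and keeping acceptable models can be sketched as follows; the forward problem, the bounds and the acceptance threshold are toy stand-ins, not the seismic refraction modelling used in the study.

      import numpy as np

      rng = np.random.default_rng(2)
      depths = np.linspace(0, 30, 16)                  # km
      v_true = 4.0 + 0.12 * depths                     # toy velocity-depth law
      d_obs = v_true + 0.05 * rng.standard_normal(depths.size)

      accepted = []
      for _ in range(20000):
          v0 = rng.uniform(3.0, 5.0)                   # a priori bounds
          g = rng.uniform(0.0, 0.25)
          v = v0 + g * depths                          # candidate model
          misfit = np.sqrt(np.mean((v - d_obs) ** 2))
          if misfit < 0.1:                             # acceptance threshold
              accepted.append(v)

      ens = np.array(accepted)
      # ensemble mean and spread approximate the posterior velocity-depth law
      print(ens.shape[0], ens.mean(axis=0)[:4], ens.std(axis=0)[:4])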

  6. Synthetic Incoherence via Scanned Gaussian Beams

    PubMed Central

    Levine, Zachary H.

    2006-01-01

    Tomography, in most formulations, requires an incoherent signal. For a conventional transmission electron microscope, the coherence of the beam often results in diffraction effects that limit the ability to perform a 3D reconstruction from a tilt series with conventional tomographic reconstruction algorithms. In this paper, an analytic solution is given for a scanned Gaussian beam, which renders the beam effectively incoherent for medium-size (of order 100 voxels thick) tomographic applications. The scanned Gaussian beam leads to more incoherence than hollow-cone illumination. PMID:27274945

  7. Radio-Tomographic Images of Post-midnight Equatorial Plasma Depletions

    NASA Astrophysics Data System (ADS)

    Hei, M. A.; Bernhardt, P. A.; Siefring, C. L.; Wilkens, M.; Huba, J. D.; Krall, J.; Valladares, C. E.; Heelis, R. A.; Hairston, M. R.; Coley, W. R.; Chau, J. L.

    2013-12-01

    For the first time, post-midnight equatorial plasma depletions (EPDs) have been imaged in the longitude-altitude plane using radio-tomography. High-resolution (~10 km × 10 km) electron-density reconstructions were created from total electron content (TEC) data using an array of receivers sited in Peru and the Multiplicative Algebraic Reconstruction Technique (MART) inversion algorithm. TEC data were obtained from the 150 and 400 MHz signals transmitted by the CERTO beacon on the C/NOFS satellite. In-situ electron density data from the C/NOFS CINDI instrument and electron density profiles from the UML Jicamarca ionosonde were used to generate an initial guess for the MART inversion, and also to constrain the inversion process. Observed EPDs had widths of 100-1000 km, spacings of 300-900 km, and often appeared 'pinched off' at the bottom. Well-developed EPDs appeared on an evening with a very small (4 m/s) Pre-Reversal-Enhancement (PRE), suggesting that postmidnight enhancements of the vertical plasma drift and/or seeding-induced uplifts (e.g. gravity waves) were responsible for driving the Rayleigh-Taylor Instability into the nonlinear regime on this night. On another night the Jicamarca ISR recorded postmidnight (~0230 LT) Eastward electric fields nearly twice as strong as the PRE fields seven hours earlier. These electric fields lifted the whole ionosphere, including embedded EPDs, over a longitude range ~14° wide. CINDI detected a dawn depletion in exactly the area where the reconstruction showed an uplifted EPD. Strong Equatorial Spread-F observed by the Jicamarca ionosonde during receiver observation times confirmed the presence of ionospheric irregularities.

  8. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

    Systems and methods for charged particle detection, including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data, to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and to determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.

  9. 2D first break tomographic processing of data measured for CELEBRATION profiles: CEL01, CEL04, CEL05, CEL06, CEL09, CEL11

    NASA Astrophysics Data System (ADS)

    Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group

    2003-04-01

    This contribution reports preliminary results from the first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. The data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps and the program parameters are described. Characteristic data of the observation profiles are given (shot points, geophone points, total profile lengths, sampling, sensors and record lengths). The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of the starting velocity field, for which the calculated arrival times are modelled by the method of finite differences. The next step is minimization of the differences between the measured and modelled arrival times until the deviation is small. The equivalence problem was mitigated by including a priori information in the starting velocity field, such as the depth to the pre-Tertiary basement and estimates of the overlying sedimentary velocities from well logging and other seismic velocity data. After checking the reciprocal times, the picks were corrected. The result of this stage of the processing is a reliable travel time curve set consistent with the reciprocal times. Picking of travel time curves and enhancement of the signal-to-noise ratio on the seismograms were carried out using the PROMAX program system. The tomographic inversion was carried out by a so-called 3D/2D procedure that takes 3D wave propagation into account: a corridor along the profile, containing the outlying shot points and geophone points, was defined, and 3D processing was carried out within this corridor. The preliminary results indicate seismically anomalous zones within the crust and the uppermost part of the upper mantle in the area comprising the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.

  10. Research in Image Understanding as Applied to 3-D Microwave Tomographic Imaging with Near Optical Resolution.

    DTIC Science & Technology

    1987-03-01

    [Fragmentary DTIC record: the abstract is not recoverable; the surviving text consists of citation and personnel fragments, including D.L. Jaggard, K. Schultz, Y. Kim and P. Frangos, "Inverse Scattering for Dielectric Media", Annual OSA Meeting, Washington, D.C., Oct. 1985; P. Frangos (Ph.D.), "One-Dimensional Inverse Scattering: Exact Methods and Applications"; and C.L. Werner (Ph.D.), "3-D Imaging of Coherent and ..." (truncated).]

  11. Joint 3-D tomographic imaging of Vp, Vs and Vp/Vs and hypocenter relocation at Sinabung volcano, Indonesia from November to December 2013

    USGS Publications Warehouse

    Nugraha, Andri Dian; Indrastuti, Novianti; Kusnandar, Ridwan; Gunawan, Hendra; McCausland, Wendy A.; Aulia, Atin Nur; Harlianti, Ulvienin

    2018-01-01

    We conducted travel time tomography using P- and S-wave arrival times of volcanic-tectonic (VT) events that occurred between November and December 2013 to determine the three-dimensional (3D) seismic velocity structure (Vp, Vs, and Vp/Vs) beneath Sinabung volcano, Indonesia in order to delineate geological subsurface structure and to enhance our understanding of the volcanism itself. This was a time period when phreatic explosions became phreatomagmatic and then magma migrated to the surface forming a summit lava dome. We used 4846 VT events with 16,138 P- and 16,138 S-wave arrival time phases recorded by 6 stations for the tomographic inversion. The relocated VTs collapse into three clusters at depths from the surface to sea level, from 2 to 4 km below sea level, and from 5 to 8.5 km below sea level. The tomographic inversion results show three prominent regions of high Vp/Vs (~ 1.8) beneath Sinabung volcano at depths consistent with the relocated earthquake clusters. We interpret these anomalies as intrusives associated with previous eruptions and possibly surrounding the magma conduit, which we cannot resolve with this study. One anomalous region might contain partial melt, at sea level and below the eventual eruption site at the summit. Our results are important for the interpretation of a conceptual model of the “plumbing system” of this hazardous volcano.

  12. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, the development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is much needed. Near infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. As compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). We demonstrate volumetric tomographic reconstructions from tissue phantoms; the approach has great potential for determining and monitoring the effect of anti-stroke therapies.

  13. A new ART code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code, based on the iterative refinement method of least squares solution, is presented for tomographic reconstruction. The accuracy and convergence of the technique are evaluated through application to numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimal. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
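
    The underlying principle, a least squares solution polished by iterative refinement, can be shown in a few lines. This is a generic numerical sketch, not the ART code itself: an initial estimate is repeatedly corrected by solving a least squares problem on the current residual.

      import numpy as np

      rng = np.random.default_rng(3)
      A = rng.standard_normal((60, 20))        # toy projection matrix
      x_true = rng.standard_normal(20)
      b = A @ x_true + 1e-3 * rng.standard_normal(60)

      x = np.zeros(20)
      for _ in range(5):
          r = b - A @ x                        # current residual
          dx, *_ = np.linalg.lstsq(A, r, rcond=None)
          x += dx                              # refine the estimate
      print(np.linalg.norm(b - A @ x))         # residual is minimised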

  14. Comparison of magmatic and amagmatic rift zone kinematics using full moment tensor inversions of regional earthquakes

    NASA Astrophysics Data System (ADS)

    Jaye Oliva, Sarah; Ebinger, Cynthia; Shillington, Donna; Albaric, Julie; Deschamps, Anne; Keir, Derek; Drooff, Connor

    2017-04-01

    Temporary seismic networks deployed in the magmatic Eastern rift and the mostly amagmatic Western rift in East Africa present the opportunity to compare the depth distribution of strain, and fault kinematics in light of rift age and the presence or absence of surface magmatism. The largest events in local earthquake catalogs (ML > 3.5) are modeled using the Dreger and Ford full moment tensor algorithm (Dreger, 2003; Minson & Dreger, 2008) to better constrain source depth and to investigate non-double-couple components. A bandpass filter of 0.02 to 0.10 Hz is applied to the waveforms prior to inversion. Synthetics are based on 1D velocity models derived during seismic analysis and constrained by reflection and tomographic data where available. Results show significant compensated linear vector dipole (CLVD) and isotropic components for earthquakes in magmatic rift zones, whereas double-couple mechanisms predominate in weakly magmatic rift sectors. We interpret the isotropic components as evidence for fluid-involved faulting in the Eastern rift where volatile emissions are large, and dike intrusions well documented. Lower crustal earthquakes are found in both amagmatic and magmatic sectors. These results are discussed in the context of the growing database of complementary geophysical, geochemical, and geological studies in these regions as we seek to understand the role of magmatism and faulting in accommodating strain during early continental rifting.

  15. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
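
    The labeling machinery the paper builds on can be illustrated with a tiny binary denoising problem (the paper's contribution, handling a linear sensing operator through a surrogate energy, is not reproduced here). In the standard construction, unary costs sit on edges to the source/sink terminals and smoothness costs on neighbour edges, so a minimum s-t cut yields a minimum-energy labeling.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(8)
      truth = np.zeros((12, 12), dtype=int)
      truth[3:9, 3:9] = 1
      noisy = truth ^ (rng.random(truth.shape) < 0.15)   # flip 15% of pixels

      lam = 0.7                                          # smoothness weight
      G = nx.DiGraph()
      for (r, c), v in np.ndenumerate(noisy):
          G.add_edge("s", (r, c), capacity=float(v != 1))   # cost of label 1
          G.add_edge((r, c), "t", capacity=float(v != 0))   # cost of label 0
          for dr, dc in ((0, 1), (1, 0)):                   # 4-neighbour grid
              if r + dr < noisy.shape[0] and c + dc < noisy.shape[1]:
                  G.add_edge((r, c), (r + dr, c + dc), capacity=lam)
                  G.add_edge((r + dr, c + dc), (r, c), capacity=lam)

      _, (src_side, _) = nx.minimum_cut(G, "s", "t")
      denoised = np.array([[0 if (r, c) in src_side else 1
                            for c in range(12)] for r in range(12)])
      print((denoised != truth).sum())         # far fewer label errors than noisy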

  16. Upper mantle seismic structure beneath southwest Africa from finite-frequency P- and S-wave tomography

    NASA Astrophysics Data System (ADS)

    Youssof, Mohammad; Yuan, Xiaohui; Tilmann, Frederik; Heit, Benjamin; Weber, Michael; Jokat, Wilfried; Geissler, Wolfram; Laske, Gabi; Eken, Tuna; Lushetile, Bufelo

    2015-04-01

    We present a 3D high-resolution seismic model of the southwestern Africa region from teleseismic tomographic inversion of P- and S-wave data recorded by the amphibious WALPASS network. We used 40 temporary stations in southwestern Africa with records spanning 2 years (the OBS operated for 1 year), between November 2010 and November 2012. The array covers a surface area of approximately 600 by 1200 km, is located at the intersection of the Walvis Ridge and the continental margin of northern Namibia, and extends into the Congo craton. Major open questions concern the impact of asthenosphere-lithosphere interaction (plume-related features) on the continental areas and the evolution of the continent-ocean transition that followed the break-up of Gondwana. This process is expected to leave its imprint as a distinct seismic signature in the upper mantle. Utilizing 3D sensitivity kernels, we invert traveltime residuals to image velocity perturbations in the upper mantle down to 1000 km depth. To test the robustness of our tomographic image we employed various resolution tests which allow us to evaluate the extent of smearing effects and help define the optimum inversion parameters (i.e., damping and smoothness) used during the regularization of the inversion process. The resolution assessment also includes a detailed investigation of the effect of the crustal corrections on the final images, which strongly influences the resolution of the mantle structures. We present detailed tomographic images of the oceanic and continental lithosphere beneath the study area. The fast lithospheric keel of the Congo Craton reaches a depth of ~250 km. Relatively low velocity perturbations have been imaged within the orogenic Damara Belt down to a depth of ~150 km, probably related to surficial suture zones and the presence of fertile material. A shallower lithospheric plate extent of ~100 km was observed beneath the ocean, consistent with plate-cooling models. In addition to the tomographic images, seismic anisotropy measurements within the upper mantle inferred from teleseismic shear waves indicate a predominant NE-SW orientation for most of the land stations. Current results indicate no evidence for a consistent signature of a fossil plume.

  17. Review of inversion techniques using analysis of different tests

    NASA Astrophysics Data System (ADS)

    Smaglichenko, T. A.

    2012-04-01

    Tomographic techniques are tools which estimate the Earth's deep interior by inverting seismic data. Reliable visualization provides an adequate understanding of geodynamic processes for the prediction of natural hazards and the protection of the environment. This presentation focuses on two interrelated factors which affect that reliability, namely the particularities of the geophysical medium and the strategy for choosing the inversion method. Three main techniques are reviewed. First, the standard LSQR algorithm, derived directly from the Lanczos algebraic procedure; Double Difference tomography widely incorporates this algorithm and its extensions. Next, the CSSA technique, or method of subtraction, introduced into seismology by Nikolaev et al. in 1985. This method was further developed in 2003 (Smaglichenko et al.) as the coordinate method of possible directions, already known in the theory of numerical methods. And finally, the new Differentiated Approach (DA) tomography, recently developed by the author for seismology and introduced into applied mathematics as a modification of Gaussian elimination. Different test models are presented, probing various properties of the medium and having value for the mining sector as well as for the prediction of seismic activity. They are: 1) the checkerboard resolution test; 2) a single anomalous block surrounded by a uniform zone; 3) a large-size structure; 4) the most complicated case, in which the model consists of contrasting layers and the observation response equals zero. The geometry of the experiment for all models is given in the note of Leveque et al., 1993. It was assumed that errors in the experimental data are within the limits of a pre-assigned accuracy. The testing showed that LSQR is effective when the small-size structure (1) is retrieved, while CSSA works faster in reconstructing the separated anomaly (2). The large-size structure (3) can be reconstructed by applying DA, which uses both Lanczos's method and CSSA as component parts of the inversion process. The difficulty of the contrasting-layer model (4) can be overcome with a priori information that allows the DA implementation. The testing leads us to the following conclusion: careful analysis and weighted assumptions about the characteristics of the medium under investigation should be made before starting the data inversion. The choice of a suitable technique will provide a reliable solution. Nevertheless, DA is preferred in the case of noisy and large data sets.
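
    The LSQR step that the reviewed methods build on or compete with is readily available in SciPy. The sketch below inverts a toy sparse-ray system for a checkerboard slowness model, loosely echoing test model (1); the geometry and noise-free data are assumptions for illustration only.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(4)
      n = 64                                   # 8 x 8 slowness grid
      # sparse-ish "ray" matrix: ~10% of cells sampled per ray, random weights
      G = (rng.random((200, n)) < 0.1) * rng.random((200, n))
      m_true = np.indices((8, 8)).sum(axis=0).ravel() % 2   # checkerboard
      d = G @ m_true

      m_est = lsqr(G, d, damp=0.1, iter_lim=500)[0]  # damped least squares
      print(np.corrcoef(m_est, m_true)[0, 1])        # recovery quality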

  18. Geoelectrical Tomography for landslide monitoring: state-of-the-art and future challenges.

    NASA Astrophysics Data System (ADS)

    Lapenna, V.; Perrone, A.; Piscitelli, S.

    2011-12-01

    Recently, novel algorithms for tomographic data inversion, robust models for describing hydrogeophysical processes and new sensor networks for field data acquisition have rapidly transformed geoelectrical methods into a powerful and cost-effective tool for geo-hazard monitoring. These technological and methodological improvements open the way for a wide spectrum of interesting and challenging applications in geo-hazard monitoring: reconstruction of landslide geometry, identification of fluid and gas uprising in volcanic areas, electrical imaging of seismic faults, etc. We briefly summarize the current state of the art of geoelectrical methods in landslide monitoring and introduce new and emerging applications of geoelectrical tomographic methods. An overview of the most interesting results obtained in different areas of Italy affected by widespread hydrogeological instability phenomena is presented and discussed. We focus attention on some recent results obtained in the framework of national and international projects (Morfeo, Eurorisk/Preview, DORIS). One of the key challenges for the future will be the integration of active (resistivity) and passive (self-potential) measurements to obtain 2D, 3D and 4D (time-lapse) electrical tomographies able to follow the spatial and temporal dynamics of electrical parameters (i.e. resistivity, self-potential) inside the landslide body. Resistivity imaging can be applied to illuminate the sliding surfaces and to map the time-dependent changes of water content in vadose zones, while self-potential imaging could contribute significantly to delineating groundwater circulation patterns and to the early identification of triggering factors.

  19. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work is to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction on JET.
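
    The regularization itself reduces to a damped normal-equations solve. The sketch below is a generic Phillips-Tikhonov illustration with invented dimensions (19 lines of sight, as in the KN3 camera, viewing a 1D emissivity profile); the a priori profile-shape information used in the paper is replaced here by a plain second-difference smoothness operator.

      import numpy as np

      rng = np.random.default_rng(5)
      n, m = 40, 19                            # emissivity pixels, lines of sight
      K = rng.random((m, n))                   # stand-in geometry matrix
      e_true = np.exp(-((np.arange(n) - 20.0) / 6.0) ** 2)   # peaked profile
      s = K @ e_true + 0.01 * rng.standard_normal(m)

      L = np.diff(np.eye(n), 2, axis=0)        # second-difference operator
      lam = 1.0                                # regularization parameter
      # minimize ||K e - s||^2 + lam ||L e||^2 via the normal equations
      e_est = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ s)
      print(np.max(np.abs(e_est - e_true)))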

  20. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful information about the reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by a large data volume and abundant information, and their inversion can yield rich information about the reservoir parameters. Owing to the large amount of pre-stack seismic data, existing single-machine environments cannot meet the computational needs of such a huge amount of data; thus, a method with high efficiency and speed for solving the inversion problem of pre-stack seismic data is urgently needed. The optimisation of the elastic parameters by a genetic algorithm easily falls into a local optimum, which degrades the inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy, by using the Gardner formula, and the genetic operations of the algorithm; the improved algorithm obtains better inversion results in a model test with logging data. The elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
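
    The parallel structure can be mimicked with an ordinary process pool: candidate models are scored independently in the map phase and the best one is selected in the reduce phase. The misfit function below is a toy stand-in for the pre-stack forward modelling, and none of the names come from the paper.

      import numpy as np
      from multiprocessing import Pool

      D_OBS = np.array([2.1, 1.05, 3.2])               # "observed" data (toy)

      def misfit(model):
          # map: score one candidate model against the observations
          return float(np.sum((model - D_OBS) ** 2))

      if __name__ == "__main__":
          rng = np.random.default_rng(6)
          population = [rng.uniform(0, 4, size=3) for _ in range(1000)]
          with Pool() as pool:
              scores = pool.map(misfit, population)    # map phase, in parallel
          best = min(zip(scores, range(len(population))))  # reduce phase
          print(best[0], population[best[1]])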

  1. A Comparison of 3D3C Velocity Measurement Techniques

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2013-11-01

    The velocity measurement fidelity of several 3D3C PIV measurement techniques, including tomographic PIV, synthetic aperture PIV, plenoptic PIV, defocusing PIV, and 3D PTV, is compared in simulations. A physically realistic ray-tracing algorithm is used to generate synthetic images of a standard calibration grid and of illuminated particle fields advected by homogeneous isotropic turbulence. The simulated images for the tomographic, synthetic aperture, and plenoptic PIV cases are then used to create three-dimensional reconstructions upon which cross-correlations are performed to yield the measured velocity field. Particle tracking algorithms are applied to the images for defocusing PIV and 3D PTV to directly yield the three-dimensional velocity field. In all cases the measured velocity fields are compared to one another and to the true velocity field using several metrics.

  2. Ambient Noise Interferometry and Surface Wave Array Tomography: Promises and Problems

    NASA Astrophysics Data System (ADS)

    van der Hilst, R. D.; Yao, H.; de Hoop, M. V.; Campman, X.; Solna, K.

    2008-12-01

    In the late 1990s most seismologists would have frowned at the possibility of doing high-resolution surface wave tomography with noise instead of with signal associated with ballistic source-receiver propagation. Some may still do, but surface wave tomography with Green's functions estimated through ambient noise interferometry ('sourceless tomography') has transformed from a curiosity into one of the (almost) standard tools for analysis of data from dense seismograph arrays. Indeed, spectacular applications of ambient noise surface wave tomography have recently been published. For example, application to data from arrays in SE Tibet revealed structures in the crust beneath the Tibetan plateau that could not be resolved by traditional tomography (Yao et al., GJI, 2006, 2008). While the approach is conceptually simple, in application the proverbial devil is in the detail. Full reconstruction of the Green's function requires that the wavefields used are diffusive and that ambient noise energy is evenly distributed in the spatial dimensions of interest. In the field, these conditions are not usually met, and (frequency-dependent) non-uniformity of the noise sources may lead to incomplete reconstruction of the Green's function. Furthermore, ambient noise distributions can be time-dependent, and seasonal variations have been documented. Naive use of empirical Green's functions may therefore produce (unknown) bias in the tomographic models. The degrading effect of the directionality of the noise distribution on EGFs poses particular challenges for applications beyond isotropic surface wave inversions, such as inversions for (azimuthal) anisotropy and attempts to use higher modes (or body waves). Incomplete Green's function reconstruction can probably not be prevented, but it may be possible to reduce the problem and, at least, understand the degree of incomplete reconstruction and prevent it from degrading the tomographic model. We will present examples of Rayleigh wave inversions and discuss strategies to mitigate the effects of incomplete Green's function reconstruction on tomographic images.
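
    The core of ambient noise interferometry is a long cross-correlation. The sketch below builds a synthetic two-station example in which a common noise field arrives at the second station 2 s later; the correlation peak recovers that travel time, a band-limited stand-in for the empirical Green's function. All parameters are invented for illustration.

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(7)
      fs, nsec = 100, 600                       # Hz, record length in seconds
      src = rng.standard_normal(fs * nsec)      # common ambient noise source
      delay = 2 * fs                            # 2 s propagation between stations
      sta1 = src + 0.5 * rng.standard_normal(src.size)
      sta2 = np.roll(src, delay) + 0.5 * rng.standard_normal(src.size)

      # cross-correlation via FFT convolution with one series time-reversed
      xcorr = fftconvolve(sta2, sta1[::-1], mode="full")
      lag = np.argmax(xcorr) - (src.size - 1)
      print(lag / fs)                           # ~2.0 s, the travel time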

  3. Three-Dimensional P-wave Velocity Structure Beneath Long Valley Caldera, California, Using Local-Regional Double-Difference Tomography

    NASA Astrophysics Data System (ADS)

    Menendez, H. M.; Thurber, C. H.

    2011-12-01

    Eastern California's Long Valley Caldera (LVC) and the Mono-Inyo Craters volcanic systems have been active for the past ~3.6 million years. Long Valley is known to produce very large silicic eruptions, the last of which resulted in the formation of a 17 km by 32 km wide, east-west trending caldera. Relatively recent unrest began between 1978 and 1980 with five ML ≥ 5.7 non-double-couple (NDC) earthquakes and associated aftershock swarms. Similar shallow seismic swarms have continued south of the resurgent dome and beneath Mammoth Mountain, surrounding sites of increased CO2 gas emission. Nearly two decades of increased volcanic activity led to the 1997 installation of a temporary three-component array of 69 seismometers. This network, deployed by Durham University, the USGS, and Duke University, recorded over 4,000 high-frequency events from May to September. A local tomographic inversion of 283 events surrounding Mammoth Mountain yielded a velocity structure with low Vp and Vp/Vs anomalies at 2-3 km bsl beneath the resurgent dome and Casa Diablo hot springs. These anomalies were interpreted as CO2 reservoirs (Foulger et al., 2003). Several teleseismic and regional tomography studies have also imaged low Vp anomalies beneath the caldera at ~5-15 km depth, interpreted as the underlying magma reservoir (Dawson et al., 1990; Weiland et al., 1995; Thurber et al., 2009). This study aims to improve the resolution of the LVC regional velocity model by performing tomographic inversions using the local events from 1997 in conjunction with regional events recorded by the Northern California Seismic Network (NCSN) between 1980 and 2010 and available refraction data. Initial tomographic inversions reveal a low-velocity zone at ~2 to 6 km depth beneath the caldera. This structure may simply represent the caldera fill. Further iterations and the incorporation of teleseismic data may better resolve the overall shape and size of the underlying magma reservoir.

  4. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source

    PubMed Central

    Atwood, Robert C.; Bodey, Andrew J.; Price, Stephen W. T.; Basham, Mark; Drakopoulos, Michael

    2015-01-01

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an ‘orthogonal’ fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and ‘facility-independent’: it can run on standard cluster infrastructure at any institution. PMID:25939626
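
    The 'orthogonal' projection/sinogram duality is, at the array level, an axis reordering; a toy sketch (shapes illustrative, not Savu's API):

        import numpy as np

        # A tilt series is naturally stored projection by projection:
        projections = np.random.rand(180, 64, 96)   # (angle, detector row, column)

        # Sinogram space regroups the same data by detector row, so that
        # sinograms[y] collects all angles for one reconstruction slice:
        sinograms = projections.transpose(1, 0, 2)  # (row, angle, column)

        # Flat- and dark-field correction act per projection, while ring-artefact
        # suppression and filtered back-projection act per sinogram, hence the
        # need to switch layouts repeatedly along the processing chain.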

  5. Rayleigh-wave tomography of the Ontong-Java Plateau

    NASA Astrophysics Data System (ADS)

    Richardson, W. Philip; Okal, Emile A.; Van der Lee, Suzan

    2000-02-01

    The deep structure of the Ontong-Java Plateau (OJP) in the west-central Pacific is investigated through a 2-year deployment of four PASSCAL seismic stations used in a passive tomographic experiment. Single-path inversions of 230 Rayleigh waveforms from 140 earthquakes, mainly located in the Solomon Trench, confirm the presence of an extremely thick crust, with an average depth to the Mohorovičić discontinuity of 33 km. The thickest crust (38 km) is found in the south-central part of the plateau, around 2°S, 157°E. Lesser values, though still much thicker than average oceanic crust (15-26 km), are found on either side of the main structure, suggesting that the OJP spills over into the Lyra Basin to the west. Such thick crustal structures are consistent with formation of the plateau at the Pacific-Phoenix ridge at 121 Ma, while its easternmost part may have formed later (90 Ma) on more mature lithosphere. Single-path inversions also reveal a strongly developed low-velocity zone at asthenospheric depths in the mantle. A three-dimensional tomographic inversion resolves a low-velocity root of the OJP extending as deep as 300 km, with shear velocity deficiencies of ˜5%, suggesting the presence of a keel dragged along with the plateau as the latter moves with the drift of the Pacific plate over the mantle.

  6. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract a useful shear-wave velocity profile and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle-swarm optimization, has the advantages of being robust and highly effective and of allowing global searching. Tests with both synthetic models and field data show that the algorithm is feasible and advantageous for Rayleigh wave inversion: it is a robust and practical method that achieves nonlinear inversion of surface waves with high resolution.
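
    A compact sketch of the standard firefly update (attraction beta0*exp(-gamma*r^2) toward brighter fireflies plus a shrinking random walk); the quadratic misfit below is only a stand-in for the real dispersion-curve residual of a layered Vs model.

        import numpy as np

        rng = np.random.default_rng(1)

        def firefly_minimize(f, lo, hi, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0):
            dim = len(lo)
            x = rng.uniform(lo, hi, (n, dim))
            cost = np.apply_along_axis(f, 1, x)
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if cost[j] < cost[i]:          # brighter firefly attracts dimmer one
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                            x[i] = np.clip(x[i], lo, hi)
                            cost[i] = f(x[i])
                alpha *= 0.98                          # damp the random walk over time
            best = int(np.argmin(cost))
            return x[best], cost[best]

        # toy usage: recover a 3-layer Vs profile from a quadratic stand-in misfit
        true_vs = np.array([200.0, 350.0, 600.0])
        vs, res = firefly_minimize(lambda m: float(np.sum((m - true_vs) ** 2)),
                                   lo=np.full(3, 100.0), hi=np.full(3, 800.0))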

  7. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    NASA Astrophysics Data System (ADS)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired on airborne and marine platforms. Large-scale geophysical data sets can be obtained with these methods, placing them in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method; however, the conventional conjugate gradient method takes a long time to complete the processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. The inversion is formulated by incorporating regularizing constraints, and a non-monotone gradient-descent method is then introduced to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data are used to demonstrate the practical value of this new fast inversion method.
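
    The paper's exact scheme is not reproduced here; the sketch below shows the generic ingredient, a Barzilai-Borwein gradient iteration whose acceptance test compares against the maximum of the recent objective values rather than the last one (hence non-monotone), applied to a damped least-squares inversion.

        import numpy as np

        def nonmonotone_bb(G, d, lam=1e-2, iters=100, M=5):
            """Non-monotone Barzilai-Borwein descent for min ||G m - d||^2 + lam ||m||^2."""
            cost = lambda m: np.sum((G @ m - d) ** 2) + lam * np.sum(m ** 2)
            grad = lambda m: 2.0 * (G.T @ (G @ m - d) + lam * m)
            m = np.zeros(G.shape[1])
            g = grad(m)
            step = 1.0 / max(np.linalg.norm(g), 1.0)
            history = [cost(m)]
            for _ in range(iters):
                m_new = m - step * g
                # accept against the max of the last M costs, so the objective may
                # rise temporarily: the non-monotone rule that speeds convergence
                while cost(m_new) > max(history[-M:]) - 1e-4 * step * (g @ g):
                    step *= 0.5
                    m_new = m - step * g
                g_new = grad(m_new)
                s, y = m_new - m, g_new - g
                step = abs(s @ y) / max(y @ y, 1e-30)   # Barzilai-Borwein step length
                m, g = m_new, g_new
                history.append(cost(m))
            return m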

  8. Image reconstruction of muon tomographic data using a density-based clustering method

    NASA Astrophysics Data System (ADS)

    Perry, Kimberly B.

    Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
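
    A sketch of the clustering step using scikit-learn's OPTICS on hypothetical scattering points (e.g. points of closest approach computed from incoming and outgoing muon tracks); all sizes and parameters are illustrative.

        import numpy as np
        from sklearn.cluster import OPTICS

        rng = np.random.default_rng(0)
        # Stand-in scattering vertices in cm; high-Z material shows up as a
        # dense clump of large-angle scattering points against a diffuse background.
        points = rng.uniform(0.0, 100.0, (2000, 3))
        points[:300] = 50.0 + rng.normal(0.0, 2.0, (300, 3))   # clump ~ high-Z object

        labels = OPTICS(min_samples=20, max_eps=10.0).fit(points).labels_
        for k in sorted(set(labels) - {-1}):                   # -1 marks noise
            c = points[labels == k]
            print(f"cluster {k}: {len(c)} points centred at {c.mean(axis=0).round(1)}")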

  9. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing the velocity model of a geologic structure from the first-arrival travel times of seismic waves. It is used in the processing of regional and global seismic data, in seismic exploration for prospecting mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high-performance computing systems, such as supercomputers with hybrid architectures that use not only CPUs but also accelerators and co-processors. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package are optimized for the computing devices most common in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is used. Using an eikonal-equation solver, arrival times of seismic waves are computed from an assumed velocity model of the geologic structure being analyzed. To solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel-time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on the target architectures is considered. During the first stage of this work, algorithms were developed for supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
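
    Per linearization step, the scheme described above reduces to a damped sparse least-squares solve; in the sketch below a random sparse matrix stands in for the ray-path (tomographic) matrix that the eikonal solver would produce.

        import numpy as np
        from scipy.sparse import random as sparse_random
        from scipy.sparse.linalg import lsqr

        n_rays, n_cells = 5000, 2000
        # A[i, j] = length of ray i inside cell j (illustrative random pattern here)
        A = sparse_random(n_rays, n_cells, density=0.01, format="csr", random_state=0)
        residuals = 0.01 * np.random.default_rng(0).standard_normal(n_rays)  # [s]

        # Regularized model update: minimize ||A dm - r||^2 + damp^2 ||dm||^2
        dm = lsqr(A, residuals, damp=0.5)[0]   # slowness adjustment per cell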

  10. A study on characterization of stratospheric aerosol and gas parameters with the spacecraft solar occultation experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1977-01-01

    Spacecraft remote sensing of stratospheric aerosol and ozone vertical profiles using the solar occultation experiment has been analyzed. A computer algorithm has been developed in which a two-step inversion of the simulated data is performed. The radiometric data are first inverted into a vertical extinction profile using a linear inversion algorithm. The multiwavelength extinction profiles are then solved with a nonlinear least-squares algorithm to produce aerosol and ozone vertical profiles. Examples of inversion results are shown, illustrating the resolution and noise sensitivity of the inversion algorithms.
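
    A schematic of the two-step structure (illustrative quantities, not the original code): a linear solve turns slant optical depths into a vertical extinction profile, then a nonlinear least-squares fit splits the multiwavelength extinction into an aerosol power law and an ozone term.

        import numpy as np
        from scipy.optimize import least_squares

        # Step 1 (linear): L[i, j] = path length of tangent ray i through shell j
        # is triangular for occultation geometry, so slant optical depths tau
        # invert directly into a shell-by-shell extinction profile.
        def extinction_profile(L, tau):
            return np.linalg.solve(L, tau)

        # Step 2 (nonlinear): fit extinction at several wavelengths with an
        # aerosol Angstrom power law plus ozone absorption (made-up cross sections).
        wl = np.array([0.45, 0.60, 1.00])             # wavelengths [um], illustrative
        sigma_o3 = np.array([5.0e-22, 5.1e-21, 0.0])  # stand-in ozone cross sections

        def split_species(k_obs):
            model = lambda p: p[0] * wl ** (-p[1]) + p[2] * sigma_o3 - k_obs
            return least_squares(model, x0=[1e-4, 1.0, 1e17], method="lm").x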

  11. Microwave tomography for GPR data processing in archaeology and cultural heritages diagnostics

    NASA Astrophysics Data System (ADS)

    Soldovieri, F.

    2009-04-01

    Ground Penetrating Radar (GPR) is one of the most practical and user-friendly instruments for detecting buried remains and performing diagnostics of archaeological structures, with the aim of detecting hidden features (defects, voids, construction typology, etc.). The GPR technique makes it possible to survey large areas very quickly with portable instrumentation. Despite the widespread adoption of GPR as a data-acquisition system, many difficulties arise in processing GPR data so as to obtain images that are reliable and easily interpretable by end-users. The difficulty is exacerbated when no a priori information is available, as happens, for example, with historical heritage, where knowledge of the construction methods and materials of the structure may be completely missing. A possible answer to these difficulties resides in the development and exploitation of microwave tomography algorithms [1, 2], based on more refined electromagnetic scattering models than those usually adopted in the classical radar approach. By exploiting the microwave tomographic approach, it is possible to obtain accurate and reliable "images" of the investigated structure, so as to detect, localize and possibly determine the extent and geometrical features of the embedded objects. In this framework, the adoption of simplified models of electromagnetic scattering is very convenient for both practical and theoretical reasons. First, linear inversion algorithms are numerically efficient, allowing domains that are large in terms of the probing wavelength to be investigated in quasi real time, even in 3D, by adopting schemes based on the combination of 2D reconstructions [3]. In addition, the solution approaches are very robust against uncertainties in the parameters of the measurement configuration and in the investigated scenario. From a theoretical point of view, linear models offer further advantages: the absence of false solutions (an issue that arises in nonlinear inverse problems); the availability of well-known regularization tools for achieving a stable solution; and the possibility of analyzing the reconstruction performance of the algorithm once the measurement configuration and the properties of the host medium are known. Here, we present the main features and reconstruction results of a linear inversion algorithm based on the Born approximation in realistic archaeological and cultural heritage applications. The Born model is useful when penetrable objects are under investigation. As is well known, the Born approximation solves the forward problem, i.e. the determination of the scattered field from a known object, under the hypothesis of a weak scatterer: an object whose dielectric permittivity differs only slightly from that of the host medium and whose extent is small in terms of the probing wavelength. For the inverse scattering problem, these hypotheses can be relaxed at the cost of renouncing a "quantitative reconstruction" of the object. In fact, as already shown by results in realistic conditions [4, 5], a Born-model inversion scheme makes it possible to detect, localize and determine the geometry of the object even for non-weak scatterers. [1] R. Persico, R. Bernini, F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. Antennas and Propagation, vol. 53, no. 6, pp. 1875-1887, June 2005. [2] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surface Geophysics, vol. 5, no. 1, pp. 29-42, February 2007. [3] R. Solimene, F. Soldovieri, G. Prisco, R. Pierri, "Three-Dimensional Microwave Tomography by a 2-D Slice-Based Reconstruction Algorithm", IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 556-560, Oct. 2007. [4] L. Orlando, F. Soldovieri, "Two different approaches for georadar data processing: a case study in archaeological prospecting", Journal of Applied Geophysics, vol. 64, pp. 1-13, March 2008. [5] F. Soldovieri, M. Bavusi, L. Crocco, S. Piscitelli, A. Giocoli, F. Vallianatos, S. Pantellis, A. Sarris, "A comparison between two GPR data processing techniques for fracture detection and characterization", Proc. of 70th EAGE Conference & Exhibition, Rome, Italy, 9-12 June 2008.
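
    Schematically, once the Born approximation linearizes the scattering equation, the inversion is a regularized linear solve; the sketch below uses a truncated SVD, with K standing for the discretized scattering operator assembled from the background-medium Green's functions (not constructed here).

        import numpy as np

        def born_inversion(K, E_s, n_trunc=50):
            """TSVD solution of the linearized (Born) inverse scattering problem.

            K   : matrix mapping the contrast function to the scattered field
            E_s : measured scattered field, flattened over receivers/frequencies
            The problem is ill-posed, so the singular-value expansion is truncated,
            one of the classical regularization tools mentioned above.
            """
            U, s, Vh = np.linalg.svd(K, full_matrices=False)
            n = min(n_trunc, int(np.sum(s > 1e-12 * s[0])))
            return Vh[:n].conj().T @ ((U[:, :n].conj().T @ E_s) / s[:n])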

  12. A user friendly interface for microwave tomography enhanced GPR surveys

    NASA Astrophysics Data System (ADS)

    Catapano, Ilaria; Affinito, Antonio; Soldovieri, Francesco

    2013-04-01

    Ground Penetrating Radar (GPR) systems are nowadays widely used in civil applications, among which structural monitoring is one of the most critical, given its importance for risk prevention and cost-effective management of the structure itself. Although GPR systems are well-established devices, there is continuous interest in their optimization, involving both hardware and software, with the common goal of achieving accurate and highly informative images while keeping the difficulty and duration of field surveys as low as possible. As far as data processing is concerned, one key aim is the development of imaging approaches capable of providing images easily interpretable by non-expert users, while keeping the computational requirements feasible. To meet this need, or at least to improve on the reconstruction capabilities of the data-processing tools currently available in commercial GPR systems, microwave tomographic approaches based on the Born approximation have been developed and tested in several practical settings, such as civil and archaeological investigations, subsurface utility monitoring, security surveys and so on [1-3]. However, adopting these approaches has so far required expert operators capable of properly managing the gathered data and their processing, which involves the solution of a linear inverse scattering problem. To overcome this drawback, the aim of this contribution is to present an end-user-friendly software interface that makes simple management of the microwave tomographic approaches possible. In particular, the proposed interface allows users to upload both synthetic and experimental data sets saved in .txt, .dt and .dt1 formats, to perform all the steps needed to obtain tomographic images, and to display raw radargrams as well as intermediate and final results. By means of the interface, users can apply time gating, background removal or both to extract the meaningful signal from the gathered data; they can process the full set of gathered A-scans or select a portion of them; and they can choose an arbitrary time window within the one adopted during the measurement stage. Finally, the interface performs the imaging according to two different tomographic approaches, both modelling the scattering phenomenon under the Born approximation and looking for cylindrical objects of arbitrary cross-section (2D geometry) probed by an incident field polarized along the invariance axis (scalar case). One approach assumes that the scattering phenomenon arises in a homogeneous medium, while the second accounts for the presence of a flat air-medium interface. REFERENCES [1] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surf. Geophys., vol. 5, pp. 29-42, 2007. [2] R. Persico, F. Soldovieri, E. Utsi, "Microwave tomography for processing of GPR data at Ballachulish", J. Geophys. and Eng., vol. 7, pp. 164-173, 2010. [3] I. Catapano, L. Crocco, R. Di Napoli, F. Soldovieri, A. Brancaccio, F. Pesando, A. Aiello, "Microwave tomography enhanced GPR surveys in Centaur's Domus, Regio VI of Pompeii, Italy", J. Geophys. Eng., vol. 9, S92-S99, 2012.
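
    The time-gating and background-removal steps exposed by the interface are standard operations; a minimal sketch (array layout and gate values are illustrative):

        import numpy as np

        def preprocess_bscan(bscan, dt, t_gate=(5e-9, 60e-9)):
            """Typical GPR pre-processing of a B-scan stored as (traces, samples).

            Background removal subtracts the mean trace, suppressing the antenna
            direct wave and flat interfaces; time gating zeroes samples outside
            the window of interest before the tomographic inversion.
            """
            out = bscan - bscan.mean(axis=0, keepdims=True)    # background removal
            t = np.arange(bscan.shape[1]) * dt
            out[:, (t < t_gate[0]) | (t > t_gate[1])] = 0.0    # time gating
            return out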

  13. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    Computing the spatial resolution of a solution is nontrivial and can be more difficult than solving the inverse problem itself. Most geophysical studies, except tomographic ones, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be given indicatively via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices provide resolution-length information, and computing the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, the statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion scheme used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.

  14. Applications of Electrical Impedance Tomography (EIT): A Short Review

    NASA Astrophysics Data System (ADS)

    Kanti Bera, Tushar

    2018-03-01

    Electrical Impedance Tomography (EIT) is a tomographic imaging method which solves an ill-posed inverse problem using boundary voltage-current data collected from the surface of the object under test. Although its spatial resolution is low compared with conventional tomographic imaging modalities, EIT offers several advantages and has therefore been studied for a number of applications, such as medical imaging, material engineering, civil engineering, biotechnology, chemical engineering, MEMS and other fields of engineering and applied science. In this paper, the applications of EIT are reviewed and presented as a short summary. The working principle, instrumentation and advantages are briefly discussed, followed by a detailed discussion of the applications of EIT technology in different areas of engineering, technology and applied science.

  15. Advanced Ultrasonic Tomograph of Children's Bones

    NASA Astrophysics Data System (ADS)

    Lasaygues, Philippe; Lefebvre, Jean-Pierre; Guillermin, Régine; Kaftandjian, Valérie; Berteau, Jean-Philippe; Pithioux, Martine; Petit, Philippe

    This study deals with the development of an experimental device for performing ultrasonic computed tomography (UCT) on children's bones. The children's bone tomograms obtained in this study were based on the use of a multiplexed 2-D ring antenna (1 MHz and 3 MHz) designed for electronic and mechanical scanning. Although this approach is known to be a potentially valuable means of imaging objects with similar acoustical impedances, problems arise when quantitative images of more highly contrasted media such as bones are required. Various strategies and mathematical procedures for modeling the wave propagation based on Born approximations, suitable for use in pediatric cases, have been developed at our laboratory. Inversions of the experimental data obtained are presented.

  16. Interferometric tomography of fuel cells for monitoring membrane water content.

    PubMed

    Waller, Laura; Kim, Jungik; Shao-Horn, Yang; Barbastathis, George

    2009-08-17

    We have developed a system that uses two 1D interferometric phase projections for reconstruction of 2D water content changes over time in situ in a proton exchange membrane (PEM) fuel cell system. By modifying the filtered backprojection tomographic algorithm, we are able to incorporate a priori information about the object distribution into a fast reconstruction algorithm which is suitable for real-time monitoring.
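
    For reference, the unmodified filtered back-projection core that the authors adapt, sketched with scikit-image on a dense angular scan; the paper's contribution is to make the reconstruction work with only two 1D projections plus a priori information, which is not shown here.

        import numpy as np
        from skimage.transform import radon, iradon

        phantom = np.zeros((128, 128))
        phantom[40:60, 50:90] = 1.0                      # stand-in water distribution

        theta = np.linspace(0.0, 180.0, 60, endpoint=False)
        sinogram = radon(phantom, theta=theta)           # forward projections
        recon = iradon(sinogram, theta=theta, filter_name="ramp")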

  17. Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas

    2017-04-01

    Turbulence in stable planetary boundary layers, often encountered at high latitudes, influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both the 2-D and 3-D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence under a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
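
    One family of algebraic iterations with a semi-convergence-aware stopping rule can be sketched as follows (a simultaneous SIRT/SART-type update with a discrepancy-principle test; the noise level is an assumed input):

        import numpy as np

        def sirt(A, b, noise_level, iters=500, relax=1.0):
            """SIRT-type reconstruction of a non-negative scalar field.

            Because the problem is ill-posed, the error against the true field
            first decreases and then grows again (semi-convergence); iteration
            is stopped once the residual drops to the estimated noise level.
            """
            col = A.sum(axis=0); col[col == 0] = 1.0   # column sums (per voxel)
            row = A.sum(axis=1); row[row == 0] = 1.0   # row sums (per camera ray)
            x = np.zeros(A.shape[1])
            for k in range(iters):
                r = b - A @ x
                if np.linalg.norm(r) <= noise_level:   # discrepancy principle
                    break
                x += relax * (A.T @ (r / row)) / col
                x = np.clip(x, 0.0, None)              # tracer concentration >= 0
            return x, k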

  18. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the "real world", and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we generate synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use these to calculate the errors in earthquake locations and velocity inversion results when we perturb the models and attempt to invert back to them. We can create as many stations as desired and a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. "Real" travel times are perturbed with noise, and hypocenters are perturbed to replicate a starting location away from the "true" location, before inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes; this, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km and was then decreased to 500 m, 100 m, 50 m and finally 10 m to see whether resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of the computers needed for the large arrays in the inversion and a realistic number of stations and events needed to provide the data.

  19. New Insights into Tectonics of the Saint Elias, Alaska, Region Based on Local Seismicity and Tomography

    NASA Astrophysics Data System (ADS)

    Ruppert, N. A.; Zabelina, I.; Freymueller, J. T.

    2013-12-01

    The Saint Elias Mountains in southern Alaska are a manifestation of ongoing tectonic processes that include collision of the Yakutat block with, and subduction of the Yakutat block and Pacific plate beneath, the North American plate. The interaction of these tectonic blocks and plates is complex and not well understood. In 2005 and 2006 a network of 22 broadband seismic sites was installed in the region as part of the SainT Elias TEctonics and Erosion Project (STEEP), a five-year multi-disciplinary study addressing the evolution of the highest coastal mountain range on Earth. These high-quality seismic data provide unique insights into earthquake occurrence and the velocity structure of the region. Local earthquake data recorded between 2005 and 2010 form the foundation for a detailed study of seismotectonic features and crustal velocities. The highest concentration of seismicity follows the Chugach-St. Elias fault, a major onshore tectonic structure in the region. This fault is also delineated in tomographic images as a distinct contrast between lower velocities to the south and higher velocities to the north. The low-velocity region corresponds to the rapidly uplifted and exhumed sediments on the south side of the range. Earthquake source parameters indicate a high degree of compression and underthrusting along the coastal area, consistent with multiple thrust structures mapped by geological studies in the region. Tomographic inversion reveals velocity anomalies that correlate with sedimentary basins, volcanic features and the subducting Yakutat block. We will present precise earthquake locations and source parameters recorded with the STEEP and regional seismic networks, along with the results of P- and S-wave tomographic inversion.

  20. Introducing minimum Fisher regularisation tomography to AXUV and soft x-ray diagnostic systems of the COMPASS tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mlynar, J.; Weinzettl, V.; Imrisek, M.

    2012-10-15

    The contribution focuses on plasma tomography via the minimum Fisher regularisation (MFR) algorithm applied to data from the recently commissioned tomographic diagnostics on the COMPASS tokamak. The MFR expertise is based on previous applications at the Joint European Torus (JET), as exemplified in a new case study of plasma position analyses based on JET soft x-ray (SXR) tomographic reconstruction. Subsequent application of the MFR algorithm to COMPASS data from cameras with absolute extreme ultraviolet (AXUV) photodiodes disclosed a peaked radiating region near the limiter. Moreover, its time evolution indicates transient plasma edge cooling following a radial plasma shift. In the SXR data, MFR demonstrated that high-resolution plasma positioning independent of the magnetic diagnostics would be possible, provided that a proper calibration of the cameras on an x-ray source is undertaken.
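
    A schematic of the MFR idea on a 1D pixel chain (an assumed simplification; tokamak implementations use 2D gradients weighted anisotropically along magnetic flux surfaces): the weight 1/g approximates the Fisher-information functional, so dim regions are smoothed strongly while bright radiating regions are preserved.

        import numpy as np

        def mfr_reconstruct(T, f, alpha=1e-2, outer=5):
            """Minimum Fisher Regularisation, 1D schematic.

            T : geometry matrix (chords x pixels), f : line-integrated signals.
            Solves (T'T + alpha * D' W D) g = T'f with W = diag(1/g) refreshed
            from the previous iterate, approximating int (grad g)^2 / g.
            """
            n = T.shape[1]
            D = np.diff(np.eye(n), axis=0)            # discrete gradient operator
            g = np.full(n, max(f.mean(), 1e-6) / max(T.sum(1).mean(), 1e-12))
            for _ in range(outer):
                w = 1.0 / np.clip(g, 1e-6, None)      # Fisher weight from last iterate
                H = T.T @ T + alpha * D.T @ (w[:-1, None] * D)
                g = np.clip(np.linalg.solve(H, T.T @ f), 0.0, None)
            return g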

  1. Comparison between GPR measurements and ultrasonic tomography with different inversion algorithms: an application to the base of an ancient Egyptian sculpture

    NASA Astrophysics Data System (ADS)

    Sambuelli, L.; Bohm, G.; Capizzi, P.; Cardarelli, E.; Cosentino, P.

    2011-09-01

    By late 2008 one of the most important pieces of the 'Museo delle Antichità Egizie' of Turin, the sculpture of the Pharaoh with the god Amun, was planned to be one of the masterpieces of a travelling exhibition in Japan. The 'Fondazione Museo delle Antichità Egizie di Torino', which manages the museum, was concerned with the integrity of the base of the statue, which presents visible signs of restoration dating back to the early 19th century. It was required to estimate the persistence of the visible fractures, to search for unknown ones and to provide information about the overall mechanical strength of the base. To tackle the first question, a GPR reflection survey along three sides of the base was performed and the results were assembled in a 3D rendering. As far as the second question is concerned, two parallel, horizontal ultrasonic 2D tomograms across the base were made. We acquired, for each section, 723 ultrasonic signals corresponding to different transmitter and receiver positions. The tomographic data were inverted using four different software packages based upon different algorithms. The obtained velocity images were then compared with each other, with the GPR results and with the visible fractures in the base. A critical analysis of the comparisons is finally presented.

  2. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm.

    PubMed

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim

    2017-06-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.

  3. Applications of hybrid genetic algorithms in seismic tomography

    NASA Astrophysics Data System (ADS)

    Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos

    2011-11-01

    Almost all earth science inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly nonlinear cases, the final model strongly depends on the initial model; hence it is prone to solution entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned sampling through model representation and manipulation, and which has attracted the attention of the earth sciences community during the last decade, with several applications already presented for various geophysical problems. In this paper, we examine the efficiency of combining typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that share the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used to test the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
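
    The LOM/GOM pairing can be illustrated with SciPy's built-in hybrid (not the authors' GA-plus-least-squares code): differential evolution performs the global, population-based search, and polish=True appends a derivative-based local refinement of the best member.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Stand-in travel-time misfit for a 4-parameter layered velocity model.
        true_m = np.array([1500.0, 2200.0, 3000.0, 4100.0])
        def misfit(m):
            return float(np.sum((m - true_m) ** 2 / true_m ** 2))

        bounds = [(1000.0, 6000.0)] * 4
        # Global search (GOM); polish=True runs a local gradient method (LOM)
        # from the best individual found, mirroring the combined strategy above.
        result = differential_evolution(misfit, bounds, seed=0, polish=True)
        print(result.x, result.fun)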

  4. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
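
    A sketch of the underlying transform under stated assumptions (combinatorial Laplacian of the element-neighbour graph and a simple band-pass kernel g(x) = x*exp(-x)); practical implementations typically use Chebyshev approximations instead of a full eigendecomposition.

        import numpy as np

        def graph_wavelet_operators(W, scales=(0.5, 2.0, 8.0)):
            """Spectral graph wavelet operators g(s L) for a FEM mesh graph.

            W : symmetric adjacency matrix of the element-neighbour graph.
            Each returned matrix maps a conductivity image to one wavelet band;
            penalizing sparsity of these coefficients (instead of the element
            values themselves) keeps solutions sparse yet locally smooth.
            """
            L = np.diag(W.sum(axis=1)) - W              # combinatorial Laplacian
            lam, U = np.linalg.eigh(L)
            g = lambda x: x * np.exp(-x)                # band-pass kernel, g(0) = 0
            return [U @ np.diag(g(s * lam)) @ U.T for s in scales]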

  5. Optimal design of focused experiments and surveys

    NASA Astrophysics Data System (ADS)

    Curtis, Andrew

    1999-10-01

    Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as the bulk modulus κ = λ + 2μ/3.

  6. An endoscopic diffuse optical tomographic method with high resolution based on the improved FOCUSS method

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei

    2017-02-01

    Endoscopic DOT has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space of the internal tubular tissue and hence by a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under these limited measurement conditions. To improve the resolution, a new FOCUSS algorithm is developed along with an image reconstruction algorithm based on the effective detection range (EDR). The algorithm works on a region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking cuts down the computational burden. To further reduce the computational complexity, a double conjugate gradient method is used in the matrix inversion. For a typical inner size and typical optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to that of the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients reach 70% and 80%, respectively, at depths up to 5 mm. Furthermore, two close targets at different depths can be separated from each other. The proposed method should be useful for the development of endoscopic DOT technologies for tubular organs.
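
    The basic FOCUSS iteration is short enough to sketch (dense, small-problem form; the ROI restriction and double conjugate gradient solver described above are omitted):

        import numpy as np

        def focuss(A, b, iters=15, eps=1e-8):
            """Iterative re-weighted minimum-norm estimate (basic FOCUSS).

            Each iterate solves a weighted minimum-norm problem; entries that
            shrink keep shrinking, focusing the solution onto few active voxels.
            """
            x = np.linalg.pinv(A) @ b                     # minimum-norm start
            for _ in range(iters):
                W = np.diag(np.sqrt(np.abs(x)) + eps)     # re-weighting from x
                x = W @ (np.linalg.pinv(A @ W) @ b)
            return x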

  7. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters of MT (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is fully automatic, incurs no additional cost, and avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on an ordinary PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. As the comparison in Figure 1 shows, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance plus tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for inversion with topography, making it useful for studies of the continental shelf with continuous land, marine and underground exploration. The three-dimensional electrical model of the ore zone reflects the basic information on strata, rocks and structure. Although it cannot indicate the ore-body position directly, it provides important clues for prospecting by delineating the uplift range of the diorite pluton. The test results show that high-quality data processing and an efficient inversion method are an important guarantee for porphyry-ore exploration with electromagnetic methods.

  8. A Multi-scale Finite-frequency Approach to the Inversion of Reciprocal Travel Times for 3-D Velocity Structure beneath Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Hung, S.; Kuo, B.; Kuochen, H.

    2012-12-01

    Taiwan is one of the archetypal places for studying the active orogenic process, where the Luzon arc has obliquely collided with the southwest China continental margin since about 5 Ma. Because of the lack of convincing evidence for the structure of the lithospheric mantle and at even greater depths, several competing models have been proposed for the Taiwan mountain-building process. With the deployment of ocean-bottom seismometers (OBSs) on the seafloor around Taiwan during the TAIGER (TAiwan Integrated GEodynamic Research) and IES seismic experiments, the aperture of the seismic network is greatly extended, improving the depth resolution of tomographic imaging, which is critical to illuminate the nature of the arc-continent collision and accretion in Taiwan. In this study, we use relative travel-time residuals between a collection of teleseismic body-wave arrivals to image the velocity structure beneath Taiwan tomographically. In addition to residuals from common distant earthquakes observed across an array of stations, we take advantage of the dense seismicity in the vicinity of Taiwan and of source-receiver reciprocity to augment the data coverage with clustered earthquakes recorded by global stations. As waveforms depend on source mechanisms, we carry out a cluster analysis to group the phase arrivals with similar waveforms into clusters and simultaneously determine the relative travel-time anomalies within each cluster accurately by cross-correlation. The combination of these two data sets particularly enhances the resolvability of the tomographic models offshore of eastern Taiwan, where two subduction systems of opposite polarity operate and have largely shaped the present tectonic framework of Taiwan. Our inversion adopts an innovation that invokes wavelet-based, multi-scale parameterization and finite-frequency theory. Not only does this approach make full use of frequency-dependent travel-time data that provide different but complementary sensitivity to velocity heterogeneity, it also addresses the intrinsically multi-scale character of unevenly distributed data, yielding a model with spatially varying, data-adaptive resolution. In addition, we employ a parallelized singular value decomposition algorithm to solve directly for the resolution matrix and point-spread functions (PSFs). Treating the spatial distribution of a PSF as the probability density function of a multivariate normal distribution, we use principal component analysis (PCA) to estimate the lengths and directions of the principal axes of the PSF distribution, which quantify the resolvable scale length and degree of smearing of the model and guide the interpretation of the robust and trustworthy features in the resolved models.
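
    The relative travel-time measurement at the heart of the cluster analysis can be sketched as follows (sub-sample precision via a parabolic fit around the correlation peak; tapering and band-pass filtering are omitted):

        import numpy as np

        def relative_delay(w1, w2, dt):
            """Arrival-time shift of waveform w1 relative to w2, in seconds."""
            c = np.correlate(w1 - w1.mean(), w2 - w2.mean(), mode="full")
            k = int(np.argmax(c))
            if 0 < k < len(c) - 1:                        # parabolic interpolation
                denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
                if denom != 0.0:
                    k = k - 0.5 * (c[k + 1] - c[k - 1]) / denom
            return (k - (len(w2) - 1)) * dt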

  9. 3D tomographic imaging with the γ-eye planar scintigraphic gamma camera

    NASA Astrophysics Data System (ADS)

    Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.

    2017-11-01

    γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.
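
    The ordered-subsets EM update shared by both packages is compact to sketch (schematic, not the QSPECT or STIR implementation): the MLEM multiplicative correction is applied subset by subset, accelerating convergence roughly in proportion to the number of subsets.

        import numpy as np

        def osem(A, counts, n_subsets=4, n_iter=10):
            """Schematic OSEM for emission tomography.

            A      : system matrix (detector bins x voxels)
            counts : measured projection counts
            """
            x = np.ones(A.shape[1])
            subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
            for _ in range(n_iter):
                for s in subsets:
                    As = A[s]
                    expected = As @ x
                    expected[expected == 0.0] = 1e-12
                    sens = np.clip(As.sum(axis=0), 1e-12, None)   # subset sensitivity
                    x *= (As.T @ (counts[s] / expected)) / sens
            return x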

  10. GPS tomographic experiment on water vapour dynamics in the troposphere over Lisbon

    NASA Astrophysics Data System (ADS)

    Benevides, Pedro; Catalao, Joao; Miranda, Pedro

    2015-04-01

    Quantification of water vapour variability in the atmosphere remains a difficult task, affecting weather prediction. The coarse spatial and temporal resolution of water vapour measurements degrades numerical weather prediction models, causing artifacts in the prediction of severe weather phenomena. GNSS atmospheric processing, developed over the past years, provides integrated water vapour estimates comparable with meteorological sensor measurements, with studies reporting biases of 1 to 2 kg/m2, but it lacks a vertical determination of atmospheric processes. GNSS tomography of the troposphere is one of the most promising techniques for sensing the three-dimensional water vapour state of the atmosphere. The determination of integrated water vapour by the widely accepted GNSS meteorology techniques allows the reconstruction of several slant-path delays along the satellite lines of sight, providing an opportunity to sense the troposphere in three dimensions plus time. The tomographic system can estimate an image of the water vapour, but constraints have to be imposed on the inversion of the system of equations because of the non-optimal GNSS observation geometry. Applications of this technique to atmospheric processes such as large convective precipitation or mesoscale water vapour circulation have been able to describe their local dynamic vertical variation. A 3D tomographic experiment was carried out over an area of 60x60 km2 around Lisbon (Portugal). The available GNSS network of 9 receivers was densified with 8 temporarily installed GPS receivers (totalling 17 stations). The study was performed over several weeks in July 2013, during which a radiosonde campaign was also held in order to validate the tomographic inversion solution. 2D integrated water vapour maps obtained directly from the GNSS processing were also evaluated, and local coastal breeze circulation patterns were identified. Preliminary results show good agreement between radiosonde vertical profiles of water vapour and the corresponding grid columnar profiles of the tomographic solution. This study aims at a preliminary characterization of the 3D water vapour field over this region, investigating its potential for monitoring small-scale air circulation in coastal areas, such as the sea breeze phenomenon. This study was funded by the Portuguese Science Foundation FCT, under project SMOG PTDC/CTE-ATM/119922/2010 and PhD grant SFRH/BD/80288/2011.

  11. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and the greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While such data are most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution, which offers a significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) compared with other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSFs). Reconstruction artifacts are identified; their underlying sources and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
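
    A sketch of the FFT route (one plausible form, a 3D Wiener filter; the PSF is assumed centred and shift-invariant, an idealization the paper's artifact discussion qualifies):

        import numpy as np

        def wiener_deconvolve(stack, psf, snr=100.0):
            """Volumetric deconvolution of a refocused focal stack.

            stack : 3D focal stack computed from the plenoptic data
            psf   : 3D point spread function, same shape, centred in the array
            The whole volume is recovered with a few FFTs instead of an
            iterative tomographic solve such as MART.
            """
            H = np.fft.fftn(np.fft.ifftshift(psf))
            G = np.fft.fftn(stack)
            F = G * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
            return np.real(np.fft.ifftn(F))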

  12. The auroral 6300 A emission - Observations and modeling

    NASA Technical Reports Server (NTRS)

    Solomon, Stanley C.; Hays, Paul B.; Abreu, Vincent J.

    1988-01-01

    A tomographic inversion is used to analyze measurements of the auroral atomic oxygen emission line at 6300 A made by the Atmosphere Explorer Visible Airglow Experiment. A comparison is made between emission altitude profiles and the results of an electron transport and chemical reaction model. Measurements of the energetic electron flux, neutral composition, ion composition, and electron density are incorporated in the model.

  13. Broadband Tomography System: Direct Time-Space Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    Biagi, E.; Capineri, Lorenzo; Castellini, Guido; Masotti, Leonardo F.; Rocchi, Santina

    1989-10-01

    In this paper a new ultrasound tomographic imaging algorithm is presented. A complete laboratory system was built to test the algorithm under experimental conditions. The proposed system is based on a physical model consisting of a bidimensional distribution of single scattering elements. Multiple scattering is neglected, so the Born approximation is assumed. This tomographic technique only requires two orthogonal scanning sections. For each rotational position of the object, data are collected by means of the complete data set method in transmission mode. After numeric envelope detection, the received signals are back-projected into the space domain through a scalar function. The reconstruction of each scattering element is accomplished by correlating the ultrasound time of flight and attenuation with the locus of points given by the possible positions of the scattering element. This locus is an ellipse whose foci are located at the transmitter and receiver positions. In the image matrix the ellipses' contributions sum coherently at the position of the scattering element. Computer simulations of cylindrical objects have demonstrated the performance of the reconstruction algorithm. Preliminary experimental results illustrate the capabilities of the laboratory system. On the basis of these results an experimental procedure is proposed to assess the confidence and repeatability of ultrasonic measurements on the human carotid vessel.
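
    The ellipse back-projection can be sketched in a few lines of Python: each transmitter-receiver pair and measured time of flight defines an ellipse with foci at the two transducers, and summing soft ellipse bands over many pairs peaks at the scatterer. Everything below (geometry, sound speed, band width) is a toy stand-in for the laboratory system, not its actual configuration.

        import numpy as np

        c = 1500.0                                  # m/s, assumed speed of sound
        grid = np.linspace(-0.05, 0.05, 201)        # 10 cm field of view
        X, Y = np.meshgrid(grid, grid)

        scatterer = np.array([0.01, -0.015])        # toy single scattering element
        tx_angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
        image = np.zeros_like(X)

        for a in tx_angles:
            tx = 0.06 * np.array([np.cos(a), np.sin(a)])              # transmitter
            rx = 0.06 * np.array([np.cos(a + np.pi/2), np.sin(a + np.pi/2)])
            tof = (np.hypot(*(scatterer - tx)) + np.hypot(*(scatterer - rx))) / c
            # distance sum from every pixel to this tx/rx pair (the ellipse foci)
            dsum = np.hypot(X - tx[0], Y - tx[1]) + np.hypot(X - rx[0], Y - rx[1])
            image += np.exp(-((dsum - c * tof) / 5e-4) ** 2)          # soft ellipse band

        peak = np.unravel_index(image.argmax(), image.shape)
        print("recovered position:", grid[peak[1]], grid[peak[0]])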

  15. Explosive Events in the Quiet Sun: Extreme Ultraviolet Imaging Spectroscopy Instrumentation and Observations

    NASA Astrophysics Data System (ADS)

    Rust, Thomas Ludwell

    Explosive event is the name given to slit spectrograph observations of high spectroscopic velocities in solar transition region spectral lines. Explosive events show much variety that cannot yet be explained by a single theory. It is commonly believed that explosive events are powered by magnetic reconnection. The evolution of the line core appears to be an important indicator of which particular reconnection process is at work. The Multi-Order Solar Extreme Ultraviolet Spectrograph (MOSES) is a novel slitless spectrograph designed for imaging spectroscopy of solar extreme ultraviolet (EUV) spectral lines. The spectrograph design forgoes a slit and instead images at three spectral orders of a concave grating. The images are formed simultaneously, so the resulting spatial and spectral information is co-temporal over the 20' x 10' instrument field of view. This is an advantage over slit spectrographs, which build a field of view one narrow slit at a time. The cost of co-temporal imaging spectroscopy with the MOSES is increased data complexity relative to slit spectrograph data. The MOSES data must undergo tomographic inversion to recover line profiles. I use the unique data from the MOSES to study transition region explosive events in the He ii 304 A spectral line. I identify 41 examples of explosive events, including 5 blue-shifted jets, 2 red-shifted jets, and 10 bi-directional jets. Typical Doppler speeds are approximately 100 km s-1. I show the early development of one blue jet and one bi-directional jet and find no acceleration phase at the onset of the event. The bi-directional jets are interesting because they are predicted in models of Petschek reconnection in the transition region. I develop an inversion algorithm for the MOSES data and test it on synthetic observations of a bi-directional jet. The inversion is based on a multiplicative algebraic reconstruction technique (MART). The inversion successfully reproduces synthetic line profiles. I then use the inversion to study the time evolution of a bi-directional jet. The inverted line profiles show fast Doppler-shifted components and no measurable line core emission. The blue and red wings of the jet show increasing spatial separation with time.
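
    A minimal MART sketch, run on a random toy system rather than the MOSES forward model: each measurement applies a multiplicative correction raised to a relaxed, weight-dependent power, which keeps the reconstruction non-negative.

        import numpy as np

        def mart(A, d, n_iter=50, relax=1.0):
            """Multiplicative algebraic reconstruction technique (MART):
            row-by-row multiplicative updates that preserve non-negativity.
            A: (n_meas, n_cells) weight matrix, d: (n_meas,) positive data."""
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                for i in range(len(d)):
                    proj = A[i] @ x
                    if proj > 0 and d[i] > 0:
                        x *= (d[i] / proj) ** (relax * A[i] / A[i].max())
            return x

        # Toy check on a random non-negative system with a known solution.
        rng = np.random.default_rng(2)
        A = rng.random((40, 20))
        x_true = rng.random(20)
        x_est = mart(A, A @ x_true)
        print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))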

  16. A review of ocean chlorophyll algorithms and primary production models

    NASA Astrophysics Data System (ADS)

    Li, Jingwen; Zhou, Song; Lv, Nan

    2015-12-01

    This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production from ocean chlorophyll concentration. Through a comparison of the five inversion algorithms, it summarizes their advantages and disadvantages, and briefly analyzes trends in ocean primary production modelling.

  17. Feasibility of track-based multiple scattering tomography

    NASA Astrophysics Data System (ADS)

    Jansen, H.; Schütze, P.

    2018-04-01

    We present a tomographic technique making use of a gigaelectronvolt electron beam for the determination of the material budget distribution of centimeter-sized objects, by means of simulations and measurements. In both cases, the trajectory of electrons traversing a sample under test is reconstructed using a pixel beam telescope. The width of the deflection angle distribution of electrons undergoing multiple Coulomb scattering at the sample is estimated. Basing the sinogram on position-resolved estimators enables the reconstruction of the original sample using an inverse Radon transform. We exemplify the feasibility of this tomographic technique via simulations of two structured cubes (made of aluminium and lead) and via an in-beam measurement of a coaxial adapter. The simulations yield images with FWHM edge resolutions of (177 ± 13) μm and a contrast-to-noise ratio of 5.6 ± 0.2 (7.8 ± 0.3) for aluminium (lead) compared to air. The tomographic reconstruction of a coaxial adapter serves as experimental evidence for the technique and yields a contrast-to-noise ratio of 15.3 ± 1.0 and a FWHM edge resolution of (117 ± 4) μm.
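
    Assuming scikit-image is available, the reconstruction step reduces to a filtered back-projection; in the experiment each sinogram column would hold the position-resolved scattering-angle-width estimators, for which a phantom's Radon transform stands in below. A sketch, not the paper's pipeline.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        # Stand-in for the measured sinogram: here a phantom's Radon transform
        # plays the role of the per-rotation scattering-angle-width profiles.
        sample = shepp_logan_phantom()              # proxy for a material budget map
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(sample, theta=theta)

        # Filtered back-projection (inverse Radon transform), as in the abstract.
        reconstruction = iradon(sinogram, theta=theta)
        print("reconstruction shape:", reconstruction.shape)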

  18. 1r2dinv: A finite-difference model for inverse analysis of two dimensional linear or radial groundwater flow

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, J.J.

    2001-01-01

    We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
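
    A hedged sketch of the estimation step: SciPy's Levenberg-Marquardt driver fitting log-transformed transmissivity and storativity to noisy drawdowns. The analytic Theis solution stands in for the program's finite-difference forward model, and all parameter values are invented for illustration.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.special import exp1

        # Stand-in forward model: Theis drawdown for radial flow to a well,
        # s(t) = Q/(4*pi*T) * W(r^2 S / (4 T t)), with W the well function exp1.
        Q, r = 1.0e-3, 10.0                      # pumping rate (m^3/s), radius (m)
        t = np.logspace(1, 5, 40)                # observation times (s)

        def drawdown(params, t):
            logT, logS = params                  # log-parameters keep T, S positive
            T, S = 10.0 ** logT, 10.0 ** logS
            return Q / (4 * np.pi * T) * exp1(r * r * S / (4 * T * t))

        true = np.array([-3.0, -4.0])            # T = 1e-3 m^2/s, S = 1e-4
        obs = drawdown(true, t) * (1 + 0.02 * np.random.default_rng(3).normal(size=t.size))

        # Levenberg-Marquardt estimation of the parameters from observed heads,
        # analogous in spirit to what the program above does for its zonal model.
        fit = least_squares(lambda p: drawdown(p, t) - obs, x0=[-2.0, -5.0], method='lm')
        print("estimated T, S:", 10.0 ** fit.x)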

  19. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is fully automatic, incurs no additional cost, and avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm improves on Zhang et al. (2013), itself based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface, seabed, and underground stations can be used in the inversion algorithm.

  20. Improvements of Travel-time Tomography Models from Joint Inversion of Multi-channel and Wide-angle Seismic Data

    NASA Astrophysics Data System (ADS)

    Begović, Slaven; Ranero, César; Sallarès, Valentí; Meléndez, Adrià; Grevemeyer, Ingo

    2016-04-01

    Multichannel seismic reflection (MCS) and wide-angle seismic (WAS) data are commonly modeled and interpreted with different approaches. Conventional travel-time tomography models using solely WAS data lack the resolution to define the model properties and, particularly, the geometry of geological boundaries (reflectors) with the required accuracy, especially in the shallow, complex upper geological layers. We mitigate this issue by combining the two data sets, specifically taking advantage of the high redundancy of MCS data, integrated with WAS data in a common inversion scheme to obtain higher-resolution velocity models (Vp), decrease Vp uncertainty, and improve the geometry of reflectors. To do so, we have adapted the tomo2d and tomo3d joint refraction and reflection travel-time tomography codes (Korenaga et al., 2000; Meléndez et al., 2015) to deal with streamer data and MCS acquisition geometries. The scheme results in a joint travel-time tomographic inversion based on integrated travel-time information from refracted and reflected phases in the WAS data and reflected phases identified in the MCS common depth point (CDP) or shot gathers. To illustrate the advantages of a common inversion approach, we compared the modeling results for synthetic data sets using two different travel-time inversion strategies. First, we produced seismic velocity models and reflector geometries following a typical refraction and reflection travel-time tomographic strategy, modeling just WAS data with a typical acquisition geometry (one OBS every 10 km). Second, we performed a joint inversion of two coincident data sets, integrating MCS data collected with an 8 km-long streamer and the WAS data in a common inversion scheme. Our synthetic results of the joint inversion indicate a 5-10 times smaller ray travel-time misfit in the deeper parts of the model compared to models obtained using just wide-angle seismic data. As expected, there is an important improvement in the definition of the reflector geometry, which in turn improves the accuracy of the velocity retrieval just above and below the reflector. To test the joint inversion approach with real data, we combined WAS and coincident MCS data acquired in the northern Chile subduction zone into a common inversion scheme to obtain higher-resolution information on the upper plate and the inter-plate boundary.

  1. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
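
    The partitioned idea can be sketched with a 2x2 block split and the Schur complement; the recursion only ever forms block-sized intermediates, which is the core of the memory saving. The sketch below uses dense NumPy blocks and is illustrative, not the SOLVE implementation.

        import numpy as np

        def partitioned_inverse(M, block=256):
            """Recursively invert a symmetric positive definite matrix via a
            2x2 block partition and the Schur complement; only block-sized
            pieces are handled at each level of the recursion."""
            n = M.shape[0]
            if n <= block:
                return np.linalg.inv(M)
            k = n // 2
            A, B, C = M[:k, :k], M[:k, k:], M[k:, k:]
            Ainv = partitioned_inverse(A, block)
            S = C - B.T @ Ainv @ B                 # Schur complement of A
            Sinv = partitioned_inverse(S, block)
            top_right = -Ainv @ B @ Sinv
            top_left = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv
            return np.block([[top_left, top_right], [top_right.T, Sinv]])

        # Quick check on a random symmetric positive definite matrix.
        rng = np.random.default_rng(4)
        X = rng.random((600, 600))
        M = X @ X.T + 600 * np.eye(600)
        err = np.abs(partitioned_inverse(M) @ M - np.eye(600)).max()
        print("max deviation from identity:", err)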

  2. Procedures for Geometric Data Reduction in Solid Log Modelling

    Treesearch

    Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt

    1995-01-01

    One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.

  3. Tomographic capabilities of the new GEM based SXR diagnostic of WEST

    NASA Astrophysics Data System (ADS)

    Jardin, A.; Mazon, D.; O'Mullane, M.; Mlynar, J.; Loffelmann, V.; Imrisek, M.; Chernyshova, M.; Czarski, T.; Kasprowicz, G.; Wojenski, A.; Bourdelle, C.; Malard, P.

    2016-07-01

    The tokamak WEST (Tungsten Environment in Steady-State Tokamak) will start operating by the end of 2016 as a test bed for the ITER divertor components in long pulse operation. In this context, radiative cooling by heavy impurities like tungsten (W) in the soft X-ray (SXR) range [0.1 keV; 20 keV] is a critical issue for plasma core performance. Reliable tools are thus required to monitor the local impurity density and avoid W accumulation. The WEST SXR diagnostic will be equipped with two new GEM (Gas Electron Multiplier) based poloidal cameras allowing 2D tomographic reconstructions to be performed in tunable energy bands. In this paper the tomographic capabilities of the Minimum Fisher Information (MFI) algorithm, developed for Tore Supra and upgraded for WEST, are investigated, in particular through a set of emissivity phantoms and the standard WEST scenario, including reconstruction errors, the influence of noise, and computational time.

  4. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2010-04-01

    OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real-time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build, and detail the characterization and testing of a prototype I-OP-FTIR instrument. System characterization includes radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.

  5. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Engel, James R.; Vaillancourt, Robert; Todd, Lori; Mottus, Kathleen

    2008-04-01

    OPTRA and University of North Carolina are developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach will be considered as a candidate referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize progress to date and overall system performance projections based on the instrument, spectroscopy, and tomographic reconstruction accuracy. We then present a preliminary optical design of the I-OP-FTIR.

  6. Black hole algorithm for determining model parameter in self-potential data

    NASA Astrophysics Data System (ADS)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is increasingly popular in geophysics due to its relevance in many applications. However, the inversion of SP data is often highly nonlinear, and local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution of nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm is constructed by analogy with the black hole phenomenon. This paper investigates the application of BHA to the inversion of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty, indicating that BHA has high potential as an innovative approach to SP data inversion.
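
    A minimal sketch of the black hole heuristic as commonly described (initial star population, drift toward the current best, re-seeding of stars that cross the event horizon); the toy misfit below merely stands in for an SP forward model and is not the paper's objective function.

        import numpy as np

        def black_hole_optimize(f, bounds, n_stars=30, n_iter=200, seed=5):
            """Sketch of the black hole algorithm: the best star is the black
            hole, remaining stars drift toward it, and stars crossing the
            event horizon are replaced by fresh random candidates."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            stars = rng.uniform(lo, hi, (n_stars, len(lo)))
            for _ in range(n_iter):
                fitness = np.array([f(s) for s in stars])
                bh, f_bh = stars[fitness.argmin()].copy(), fitness.min()
                stars += rng.random((n_stars, 1)) * (bh - stars)   # drift toward BH
                R = abs(f_bh) / (np.abs(fitness).sum() + 1e-12)    # event horizon
                absorbed = np.linalg.norm(stars - bh, axis=1) < R
                stars[absorbed] = rng.uniform(lo, hi, (absorbed.sum(), len(lo)))
                stars[fitness.argmin()] = bh        # keep the black hole itself
            return bh, f_bh

        # Toy misfit standing in for an SP inversion objective.
        misfit = lambda m: np.sum((m - np.array([1.5, -0.7, 3.0])) ** 2)
        best, val = black_hole_optimize(misfit, np.array([[-5, 5.0]] * 3))
        print(best, val)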

  7. 3-D CSEM data inversion algorithm based on simultaneously active multiple transmitters concept

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin Kumar; Israil, Mohammad

    2017-05-01

    We present an algorithm for efficient 3-D inversion of marine controlled-source electromagnetic data. The efficiency is achieved by exploiting the redundancy in the data. The redundancy is reduced by compressing the data through stacking of the responses of transmitters that are in close proximity. This stacking is equivalent to synthesizing the data as if multiple transmitters were simultaneously active. The redundancy in the data arising from close transmitter spacing has been studied through singular value analysis of the Jacobian formed in 1-D inversion. This study reveals that a transmitter spacing of 100 m, typically used in marine data acquisition, does result in redundancy in the data. In the proposed algorithm, the data are compressed through stacking, which leads to both a computational advantage and a reduction in noise. The performance of the algorithm for noisy data is demonstrated through studies of two types of noise, viz., uncorrelated additive noise and correlated non-additive noise. It is observed that in the case of uncorrelated additive noise, up to a moderately high (10 percent) noise level, the algorithm addresses the noise as effectively as the traditional full-data inversion. However, when the noise level in the data is high (20 percent), the algorithm outperforms the traditional full-data inversion in terms of data misfit. Similar results are obtained in the case of correlated non-additive noise, and the algorithm performs better when the noise level is high. The inversion results for a real field data set are also presented to demonstrate the robustness of the algorithm. The significant computational advantage in all cases presented makes this algorithm a better choice.

  8. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...

  9. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization method based on swarm intelligence; it stems from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to the Newton method, also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and present the application of the Levenberg-Marquardt algorithm and a particle swarm algorithm to solving the inverse problem of a fault. Importantly, the parameters for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement was found between the predicted model anomaly and the observed gravity anomaly with the PSO method than with the LM method.
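
    For reference, a plain global-best PSO fits in a few lines; the quadratic toy misfit below stands in for the fault gravity forward model, and all swarm constants are conventional defaults rather than the paper's settings.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, n_iter=300, w=0.7, c1=1.5, c2=1.5):
            """Plain global-best particle swarm: velocities blend inertia, pull
            toward each particle's best point, and pull toward the swarm's best."""
            rng = np.random.default_rng(6)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, (n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, 1))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                improved = fx < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], fx[improved]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Toy misfit: recover (depth, offset, throw) of a hypothetical fault model.
        truth = np.array([2.0, 10.0, 1.2])
        misfit = lambda m: np.sum((m - truth) ** 2)      # stand-in for data misfit
        best, val = pso_minimize(misfit, np.array([[0, 5.0], [0, 20.0], [0, 3.0]]))
        print(best, val)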

  10. Lq -Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed, and in full generality, a nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function - Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
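
    For a linear toy forward operator, the Lq-Lp cost and its gradient take a compact form (smoothed near zero so the gradient exists); SciPy's L-BFGS-B stands in for the lm-BFGS solver. This is a sketch of the cost structure only, not the paper's light-transport model.

        import numpy as np
        from scipy.optimize import minimize

        # Sketch: cost(x) = ||A x - y||_q^q + lam * ||x||_p^p with q=1.5, p=1,
        # on a toy linear system; eps smooths |.| near zero for differentiability.
        rng = np.random.default_rng(7)
        A = rng.random((80, 40))
        x_true = np.zeros(40)
        x_true[[5, 17, 30]] = [1.0, 2.0, 1.5]            # sparse target
        y = A @ x_true + 0.01 * rng.normal(size=80)

        q, p, lam, eps = 1.5, 1.0, 1e-2, 1e-8

        def cost_grad(x):
            r = A @ x - y
            rq = np.sqrt(r * r + eps)                    # smooth |r|
            xq = np.sqrt(x * x + eps)                    # smooth |x|
            cost = np.sum(rq ** q) + lam * np.sum(xq ** p)
            g = A.T @ (q * rq ** (q - 2) * r) + lam * p * xq ** (p - 2) * x
            return cost, g

        res = minimize(cost_grad, np.zeros(40), jac=True, method='L-BFGS-B',
                       options={'maxiter': 200})         # stand-in for lm-BFGS
        print("support recovered at:", np.flatnonzero(res.x > 0.5))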

  11. The Collaborative Seismic Earth Model Project

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.

    2017-12-01

    We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the manpower of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge of 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as the initial model, these refinements are contributed by external collaborators and used to advance the CSEM to the next state. This mode of operation allows the CSEM to [1] harness the distributed manpower and computing power of the community, [2] make consistent use of prior knowledge, and [3] combine the different tomographic techniques needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.

  12. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward and inverse algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data from the entire region (near, transition, and far field) and deal with the effects of artificial sources. First, a regularization factor is introduced into the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, mutual influence between the two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than single-method inversion. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
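
    The cross-gradient coupling itself is compact: for 2D sections it is the scalar t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx), which vanishes wherever the two property fields are structurally aligned. A minimal sketch on toy sections (the grids and property values are invented):

        import numpy as np

        def cross_gradient(m1, m2, dx=1.0, dz=1.0):
            """Cross-gradient of two 2D model sections: the out-of-plane
            component t = dm1/dx * dm2/dz - dm1/dz * dm2/dx, which is zero
            wherever the two property fields have parallel gradients."""
            dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
            dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
            return dm1_dx * dm2_dz - dm1_dz * dm2_dx

        # Two toy sections sharing one structure: aligned blocks give t ~ 0.
        z, x = np.mgrid[0:50, 0:80]
        resistivity = np.where((x - 40) ** 2 + (z - 25) ** 2 < 150, 100.0, 10.0)
        susceptibility = np.where((x - 40) ** 2 + (z - 25) ** 2 < 150, 0.05, 0.001)
        t = cross_gradient(np.log(resistivity), susceptibility)
        print("max |t| (aligned structures):", np.abs(t).max())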

  13. Tomographic retrievals of ozone with the OMPS Limb Profiler: algorithm description and preliminary results

    NASA Astrophysics Data System (ADS)

    Zawada, Daniel J.; Rieger, Landon A.; Bourassa, Adam E.; Degenstein, Douglas A.

    2018-04-01

    Measurements of limb-scattered sunlight from the Ozone Mapping and Profiler Suite Limb Profiler (OMPS-LP) can be used to obtain vertical profiles of ozone in the stratosphere. In this paper we describe a two-dimensional, or tomographic, retrieval algorithm for OMPS-LP where variations are retrieved simultaneously in altitude and the along-orbital-track dimension. The algorithm has been applied to measurements from the center slit for the full OMPS-LP mission to create the publicly available University of Saskatchewan (USask) OMPS-LP 2D v1.0.2 dataset. Tropical ozone anomalies are compared with measurements from the Microwave Limb Sounder (MLS), where differences are less than 5 % of the mean ozone value for the majority of the stratosphere. Examples of near-coincident measurements with MLS are also shown, and agreement at the 5 % level is observed for the majority of the stratosphere. Both simulated retrievals and coincident comparisons with MLS are shown at the edge of the polar vortex, comparing the results to a traditional one-dimensional retrieval. The one-dimensional retrieval is shown to consistently overestimate the amount of ozone in areas of large horizontal gradients relative to both MLS and the two-dimensional retrieval.

  14. Comparison of three methods of solution to the inverse problem of groundwater hydrology for multiple pumping stimulation

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro

    2015-04-01

    The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h), estimates or measurements of source terms, and the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One of the ways to reduce the problems with non-uniqueness, ill-conditioning, and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple data sets: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h all over the domain, and thus the first step in their application is the interpolation of measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) all over the aquifer. DSM is intrinsically based on the use of multiple data sets, which permit writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended in this work to handle multiple data sets. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient to that obtained with a comparison model, which shares the same boundary conditions and source terms as the model to be calibrated but uses a tentative T field. The DCM algorithm, on the other hand, applies the ratio of the hydraulic gradients obtained from two different forward models: one with the same boundary conditions and source terms as the model to be calibrated, and the other with prescribed head at the positions where in- or out-flow is known and h is measured. For DCM and CMM, multiple stimulation is used by updating the T field separately for each data set and then combining the resulting updated fields with different possible statistics (arithmetic, geometric or harmonic mean, median, least change, etc.). The three algorithms are tested, and their characteristics and results are compared, with a field data set provided by Prof. Fritz Stauffer (ETH) corresponding to a pumping test in a thin alluvial aquifer in northern Switzerland. Three data sets are available, corresponding to the undisturbed state, to the flow field created by a single pumping well, and to the situation created by a 'hydraulic dipole', i.e., an extraction well and an injection well. These data sets permit testing the three inverse methods and the different options that can be chosen for their use.
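
    The CMM correction can be sketched directly from its definition; the uniform-flow example below is hypothetical (real applications first interpolate sparse heads, and the multi-data-set combination by means or medians is omitted).

        import numpy as np

        def cmm_update(T_guess, h_obs, h_cm, dx=1.0, floor=1e-12):
            """One Comparison Model Method step: rescale the tentative
            transmissivity by |grad h_cm| / |grad h_obs|, so that Darcy
            fluxes computed with the updated T match the observed heads."""
            gz_o, gx_o = np.gradient(h_obs, dx)
            gz_c, gx_c = np.gradient(h_cm, dx)
            grad_obs = np.hypot(gx_o, gz_o)
            grad_cm = np.hypot(gx_c, gz_c)
            return T_guess * grad_cm / np.maximum(grad_obs, floor)

        # Toy check: uniform flow q = 1 through T_true = 5 versus a guess T = 2.
        x = np.linspace(0, 99, 100)
        h_obs = np.tile(-x / 5.0, (20, 1))       # observed: grad h = q / T_true
        h_cm = np.tile(-x / 2.0, (20, 1))        # comparison model run with T_guess
        T_new = cmm_update(np.full((20, 100), 2.0), h_obs, h_cm)
        print("updated T (should approach 5):", T_new.mean())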

  15. GPS Tomography: Water Vapour Monitoring for Germany

    NASA Astrophysics Data System (ADS)

    Bender, Michael; Dick, Galina; Wickert, Jens; Raabe, Armin

    2010-05-01

    Ground-based GPS atmosphere sounding provides numerous atmospheric quantities with a high temporal resolution under all weather conditions. The spatial resolution of the GPS observations is mainly given by the number of GNSS satellites and GPS ground stations. The latter has increased considerably in the last few years, leading to more reliable and better resolved GPS products. New techniques such as GPS water vapour tomography gain significance as data from large and dense GPS networks become available. GPS tomography has the potential to provide spatially resolved fields of different quantities operationally, i.e., the humidity or wet refractivity required for meteorological applications, or the refraction index, which is important for several space-based observations and for precise positioning. The number of German GPS stations operationally processed by the GFZ in Potsdam was recently enlarged to more than 300. About 28000 IWV observations and more than 1.4 million slant total delays are now available per day, with temporal resolutions of 15 min and 2.5 min, respectively. The extended network leads not only to a higher spatial resolution of the tomographically reconstructed 3D fields but also to a much higher stability of the inversion process, and hence to an increased quality of the results. Under these improved conditions GPS tomography can operate continuously over several days or weeks without applying overly tight constraints. Time series of tomographically reconstructed humidity fields will be shown and different initialisation strategies will be discussed: initialisation with a simple exponential profile, with a 3D humidity field extrapolated from synoptic observations, and with the result of the preceding reconstruction. The results are compared to tomographic reconstructions initialised with COSMO-DE analyses and to the corresponding model fields. The inversion can be further stabilised by making use of independent, adequately weighted observations, such as synoptic observations or IWV data. The impact of such observations on the quality of the tomographic reconstruction will be discussed, together with different alternatives for weighting different types of observations.

  16. Active and passive electrical and seismic time-lapse monitoring of earthen embankments

    NASA Astrophysics Data System (ADS)

    Rittgers, Justin Bradley

    In this dissertation, I present research involving the application of active and passive geophysical data collection, data assimilation, and inverse modeling for the purpose of earthen embankment infrastructure assessment. Throughout the dissertation, I identify several data characteristics, and several challenges intrinsic to characterization and imaging of earthen embankments and anomalous seepage phenomena, from both a static and time-lapse geophysical monitoring perspective. I begin with the presentation of a field study conducted on a seeping earthen dam, involving static and independent inversions of active tomography data sets, and self-potential modeling of fluid flow within a confined aquifer. Additionally, I present results of active and passive time-lapse geophysical monitoring conducted during two meso-scale laboratory experiments involving the failure and self-healing of embankment filter materials via induced vertical cracking. Identified data signatures and trends, as well as 4D inversion results, are discussed as an underlying motivation for conducting subsequent research. Next, I present a new 4D acoustic emissions source localization algorithm that is applied to passive seismic monitoring data collected during a full-scale embankment failure test. Acoustic emissions localization results are then used to help spatially constrain 4D inversion of collocated self-potential monitoring data. I then turn to time-lapse joint inversion of active tomographic data sets applied to the characterization and monitoring of earthen embankments. Here, I develop a new technique for applying spatiotemporally varying structural joint inversion constraints. The new technique, referred to as Automatic Joint Constraints (AJC), is first demonstrated on a synthetic 2D joint model space, and is then applied to real geophysical monitoring data sets collected during a full-scale earthen embankment piping-failure test. Finally, I discuss some non-technical issues related to earthen embankment failures from a Science, Technology, Engineering, and Policy (STEP) perspective. Here, I discuss how the proclaimed scientific expertise and shifting of responsibility (Responsibilization) by governing entities tasked with operating and maintaining water storage and conveyance infrastructure throughout the United States tends to create barriers for 1) public voice and participation in relevant technical activities and outcomes, 2) meaningful discussions with the public and media during crisis communication, and 3) public perception of risk and the associated resilience of downhill communities.

  17. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.

  18. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2009-05-01

    OPTRA is developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill.

  19. Investigating Gravity Waves in Polar Mesospheric Clouds Using Tomographic Reconstructions of AIM Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Hart, V. P.; Taylor, M. J.; Doyle, T. E.; Zhao, Y.; Pautet, P.-D.; Carruth, B. L.; Rusch, D. W.; Russell, J. M.

    2018-01-01

    This research presents the first application of tomographic techniques for investigating gravity wave structures in polar mesospheric clouds (PMCs) imaged by the Cloud Imaging and Particle Size instrument on the NASA AIM satellite. Albedo data comprising consecutive PMC scenes were used to tomographically reconstruct a 3-D layer using the Partially Constrained Algebraic Reconstruction Technique algorithm and a previously developed "fanning" technique. For this pilot study, a large region (760 × 148 km) of the PMC layer (altitude 83 km) was sampled with a 2 km horizontal resolution, and an intensity weighted centroid technique was developed to create novel 2-D surface maps, characterizing the individual gravity waves as well as their altitude variability. Spectral analysis of seven selected wave events observed during the Northern Hemisphere 2007 PMC season exhibited dominant horizontal wavelengths of 60-90 km, consistent with previous studies. These tomographic analyses have enabled a broad range of new investigations. For example, a clear spatial anticorrelation was observed between the PMC albedo and wave-induced altitude changes, with higher-albedo structures aligning well with wave troughs, while low-intensity regions aligned with wave crests. This result appears to be consistent with current theories of PMC development in the mesopause region. This new tomographic imaging technique also provides valuable wave amplitude information enabling further mesospheric gravity wave investigations, including quantitative analysis of their hemispheric and interannual characteristics and variations.

  20. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
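
    The Gauss-Newton iteration at the core of such nonlinear least-squares estimation fits in a few lines; this bare sketch solves the normal equations at each step and omits the damping and convergence safeguards a production code adds (it is not the NLSE implementation itself).

        import numpy as np

        def gauss_newton(residual, jacobian, p0, n_iter=20):
            """Bare Gauss-Newton: at each step solve J^T J dp = -J^T r and
            update the parameter vector until the step is negligible."""
            p = np.asarray(p0, dtype=float)
            for _ in range(n_iter):
                r, J = residual(p), jacobian(p)
                dp = np.linalg.solve(J.T @ J, -J.T @ r)
                p += dp
                if np.linalg.norm(dp) < 1e-10:
                    break
            return p

        # Toy problem: fit amplitude and decay rate of y = a * exp(-b t).
        t = np.linspace(0, 4, 50)
        y = 2.0 * np.exp(-1.3 * t)
        resid = lambda p: p[0] * np.exp(-p[1] * t) - y
        jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                         -p[0] * t * np.exp(-p[1] * t)])
        print(gauss_newton(resid, jac, [1.0, 1.0]))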

  1. 2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.

    PubMed

    Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J

    2010-03-29

    A method of tomographic phase retrieval is developed for multi-material objects, each of whose components has a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index of each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise compared to conventional absorption-based tomography.
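
    For the single-material case, Transport-of-Intensity phase retrieval at one defocus distance reduces to a Paganin-type low-pass Fourier filter; the multi-material method above generalizes this idea. A sketch, with purely illustrative parameter values:

        import numpy as np

        def single_distance_phase_retrieval(I, I0, delta, mu, z, pixel):
            """Single-material TIE phase retrieval (Paganin-type filter):
            T(x, y) = -(1/mu) * ln( IFFT[ FFT(I/I0) / (1 + z*delta/mu * k^2) ] ),
            recovering projected thickness from one defocused image."""
            ky = np.fft.fftfreq(I.shape[0], pixel) * 2 * np.pi
            kx = np.fft.fftfreq(I.shape[1], pixel) * 2 * np.pi
            k2 = ky[:, None] ** 2 + kx[None, :] ** 2
            filtered = np.fft.ifft2(np.fft.fft2(I / I0) / (1 + z * delta / mu * k2))
            return -np.log(np.clip(np.real(filtered), 1e-8, None)) / mu

        # Toy call with plausible hard X-ray numbers (all values illustrative).
        I = np.ones((256, 256)) * 0.9
        thickness = single_distance_phase_retrieval(
            I, I0=1.0, delta=1e-7, mu=50.0, z=0.3, pixel=5e-6)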

  2. Volume Segmentation and Ghost Particles

    NASA Astrophysics Data System (ADS)

    Ziskin, Isaac; Adrian, Ronald

    2011-11-01

    Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.

  3. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several noteworthy issues, including the choice of weighting coefficients, the inversion range, and the optimal inversion method between two regularization algorithms for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
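
    The non-negative Tikhonov solve at the heart of such schemes (leaving aside the wavelet multiscale recursion and multiangle weighting) can be sketched as an augmented-matrix NNLS; the kernel, grids, and regularization weight below are toy assumptions.

        import numpy as np
        from scipy.optimize import nnls

        # The DLS field correlation is g1(tau) = sum_i f(Gamma_i) * exp(-Gamma_i*tau),
        # a Fredholm system linking the decay-rate (size) distribution to the data.
        tau = np.logspace(-6, -2, 60)                 # lag times (s)
        Gamma = np.linspace(1e2, 1e5, 80)             # decay-rate grid (1/s)
        K = np.exp(-np.outer(tau, Gamma))             # kernel matrix

        f_true = np.exp(-0.5 * ((Gamma - 3e4) / 4e3) ** 2)   # unimodal PSD proxy
        g1 = K @ f_true + 1e-3 * np.random.default_rng(8).normal(size=tau.size)

        lam = 1e-2                                    # regularization weight
        K_aug = np.vstack([K, np.sqrt(lam) * np.eye(Gamma.size)])
        g_aug = np.concatenate([g1, np.zeros(Gamma.size)])
        f_est, _ = nnls(K_aug, g_aug)                 # non-negative Tikhonov solve
        print("peak decay rate:", Gamma[f_est.argmax()])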

  4. Three Dimensional P-Wave Velocity Structure Beneath Eastern Turkey by Local Earthquake Tomography (LET) Method

    NASA Astrophysics Data System (ADS)

    Teoman, U. M.; Turkelli, N.; Gok, R.

    2005-12-01

    Recently, the crustal structure and tectonic evolution of the Eastern Turkey region were extensively studied in the context of the Eastern Turkey Seismic Experiment (ETSE) from late 1999 to August 2001. Collision of the Arabian and Eurasian plates has been occurring along the East Anatolian Fault Zone (EAFZ) and the Bitlis Suture, making Eastern Turkey an ideal platform for scientific research. High-quality local earthquake data from the ETSE seismic network were used to determine the 3-D P-wave velocity structure of the upper crust of Eastern Turkey. Within the 32-station network, 524 well-locatable earthquakes with azimuthal gaps < 200° and more than 8 P-wave observations each (corresponding to 6842 P-phase readings) were selected from the initial data set and simultaneously inverted. A 1-D reference velocity model was derived by iterative 1-D velocity inversion including the updated hypocenters and station delays. The subsequent 3-D tomographic inversion was performed iteratively with the SIMULPS14 algorithm in a 'damped least-squares' sense, using appropriate ray tracing, model parametrization, and control parameters. As far as resolution is concerned, S waves were not included in this study due to strong attenuation, an insufficient number of S-phase readings, and higher picking errors relative to P phases. Several tests with synthetic data were conducted to assess the solution quality, suggesting that the velocity structure is well resolved down to ~17 km. Overall, the resulting 3-D P-wave velocity model led to more reliable hypocenter determinations, indicated by reduced event scattering and a significant reduction of 50% in both variance and residual (rms) values. With the improved velocity model, average location errors did not exceed ~1.5 km horizontally and ~4 km vertically. Tomographic images revealed the presence of lateral velocity variations in Eastern Turkey. The existence of relatively low velocity zones (5.6 < Vp < 6.0 km/s) along most of the vertical profiles possibly indicates the influence of major tectonic structures such as the North Anatolian Fault Zone (NAFZ), the East Anatolian Fault Zone (EAFZ), and the Bitlis thrust belt, correlated with the seismicity. Low velocity anomalies extend deeper along the EAFZ, down to ~15 km, compared to a depth of ~10 km along the NAFZ. The Arabian plate is generally marked by relatively higher velocities (Vp > 6.2 km/s) in the 10-15 km depth range.

  5. Tomographic inversion of P-wave velocity and Q structures beneath the Kirishima volcanic complex, Southern Japan, based on finite difference calculations of complex traveltimes

    USGS Publications Warehouse

    Tomatsu, T.; Kumagai, H.; Dawson, P.B.

    2001-01-01

    We estimate the P-wave velocity and attenuation structures beneath the Kirishima volcanic complex, southern Japan, by inverting the complex traveltimes (arrival times and pulse widths) of waveform data obtained during an active seismic experiment conducted in 1994. In this experiment, six 200-250 kg shots were recorded at 163 temporary seismic stations deployed on the volcanic complex. We use first-arrival times for the shots, which were hand-measured interactively. The waveform data are Fourier transformed into the frequency domain and analysed using a new method based on autoregressive modelling of complex decaying oscillations in the frequency domain to determine pulse widths for the first-arrival phases. A non-linear inversion method is used to invert 893 first-arrival times and 325 pulse widths to estimate the velocity and attenuation structures of the volcanic complex. Wavefronts for the inversion are calculated with a finite difference method based on the Eikonal equation, which is well suited to estimating the complex traveltimes for the structures of the Kirishima volcano complex, where large structural heterogeneities are expected. The attenuation structure is derived using ray paths derived from the velocity structure. We obtain 3-D velocity and attenuation structures down to 1.5 and 0.5 km below sea level, respectively. High-velocity pipe-like structures with correspondingly low attenuation are found under the summit craters. These pipe-like structures are interpreted as remnant conduits of solidified magma. No evidence of a shallow magma chamber is visible in the tomographic images.

  6. High resolution seismic tomography imaging of Ireland with quarry blast data

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Lebedev, S.; Bean, C. J.; Grannell, J.

    2017-12-01

    Local earthquake tomography is a well-established tool for imaging geological structure at depth. The technique, however, is difficult to apply in slowly deforming regions, where local earthquakes are typically rare and of small magnitude, resulting in sparse data sampling. The natural earthquake seismicity of Ireland is very low. Seismicity due to quarry and mining blasts, on the other hand, is high and homogeneously distributed. As a consequence, and thanks to the dense and nearly uniform coverage achieved in the past ten years by temporary and permanent broadband seismological stations, quarry blasts offer an alternative approach for high-resolution seismic imaging of the crust and uppermost mantle beneath Ireland. We detected about 1,500 quarry blasts in Ireland and Northern Ireland between 2011 and 2014, for which we manually picked more than 15,000 P- and 20,000 S-wave first arrival times. The anthropogenic, explosive origin of these events was unambiguously assessed based on location, occurrence time, and waveform characteristics. Here, we present a preliminary 3D tomographic model obtained from the inversion of 3,800 P-wave arrival times associated with a subset of 500 events observed in 2011, using the FMTOMO tomographic code. Forward modeling is performed with the Fast Marching Method (FMM), and the inverse problem is solved iteratively using a gradient-based subspace inversion scheme after careful selection of damping and smoothing regularization parameters. The results illuminate the geological structure of Ireland from deposit to crustal scale in unprecedented detail, as demonstrated by sensitivity analysis, source relocation with the 3D velocity model, and comparisons with surface geology.

  7. Travel-time Tomography of the Upper Mantle using Amphibious Array Seismic Data from the Cascadia Initiative and EarthScope

    NASA Astrophysics Data System (ADS)

    Cafferky, S.; Schmandt, B.

    2013-12-01

    Offshore and onshore broadband seismic data from the Cascadia Initiative and EarthScope provide a unique opportunity to image 3-D mantle structure continuously from a spreading ridge across a subduction zone and into continental back-arc provinces. Year one data from the Cascadia Initiative primarily covers the northern half of the Juan de Fuca plate and the Cascadia forearc and arc provinces. These new data are used in concert with previously collected onshore data for a travel-time tomography investigation of mantle structure. Measurement of relative teleseismic P travel times for land-based and ocean-bottom stations operating during year one was completed for 16 events using waveform cross-correlation, after bandpass filtering the data from 0.05 - 0.1 Hz with a second-order Butterworth filter. Maps of travel-time delays show changing patterns with event azimuth, suggesting that structural variations exist beneath the oceanic plate. The data from year one and prior onshore travel time measurements were used in a tomographic inversion for 3-D mantle P-velocity structure. Inversions conducted to date use ray paths determined by a 1-D velocity model. By the time of the meeting we plan to present models using ray paths that are iteratively updated to account for 3-D structure. Additionally, we are testing the importance of corrections for sediment and crust thickness on imaging of mantle structure near the subduction zone. Low velocities beneath the Juan de Fuca slab that were previously suggested by onshore data are further supported by our preliminary tomographic inversions using the amphibious array data.

  8. Saline tracer visualized with three-dimensional electrical resistivity tomography: Field-scale spatial moment analysis

    USGS Publications Warehouse

    Singha, Kamini; Gorelick, Steven M.

    2005-01-01

    Cross-well electrical resistivity tomography (ERT) was used to monitor the migration of a saline tracer in a two-well pumping-injection experiment conducted at the Massachusetts Military Reservation in Cape Cod, Massachusetts. After injecting 2200 mg/L of sodium chloride for 9 hours, ERT data sets were collected from four wells every 6 hours for 20 days. More than 180,000 resistance measurements were collected during the tracer test. Each ERT data set was inverted to produce a sequence of 3-D snapshot maps that track the plume. In addition to the ERT experiment a pumping test and an infiltration test were conducted to estimate horizontal and vertical hydraulic conductivity values. Using modified moment analysis of the electrical conductivity tomograms, the mass, center of mass, and spatial variance of the imaged tracer plume were estimated. Although the tomograms provide valuable insights into field-scale tracer migration behavior and aquifer heterogeneity, standard tomographic inversion and application of Archie's law to convert electrical conductivities to solute concentration results in underestimation of tracer mass. Such underestimation is attributed to (1) reduced measurement sensitivity to electrical conductivity values with distance from the electrodes and (2) spatial smoothing (regularization) from tomographic inversion. The center of mass estimated from the ERT inversions coincided with that given by migration of the tracer plume using 3-D advective-dispersion simulation. The 3-D plumes seen using ERT exhibit greater apparent dispersion than the simulated plumes and greater temporal spreading than observed in field data of concentration breakthrough at the pumping well.
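
    The spatial moment analysis used above is straightforward to express for a gridded image; this plain sketch computes mass, centre of mass, and spatial covariance of an imaged plume (the paper's modified moments, which compensate for ERT sensitivity loss, are not reproduced here).

        import numpy as np

        def spatial_moments(conc, coords, cell_volume=1.0):
            """Zeroth, first, and second spatial moments of a plume image:
            total mass, centre of mass, and 3x3 spatial covariance."""
            w = conc.ravel() * cell_volume
            pts = coords.reshape(3, -1)
            mass = w.sum()
            com = (pts * w).sum(axis=1) / mass
            dev = pts - com[:, None]
            cov = (dev * w) @ dev.T / mass
            return mass, com, cov

        # Toy plume on a 3D grid (concentration units arbitrary).
        z, y, x = np.mgrid[0:20, 0:20, 0:20].astype(float)
        conc = np.exp(-((x - 8) ** 2 + (y - 10) ** 2 + (z - 5) ** 2) / 6.0)
        mass, com, cov = spatial_moments(conc, np.array([x, y, z]))
        print("mass:", mass, "centre of mass (x, y, z):", com)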

  9. Real-time out-of-plane artifact subtraction tomosynthesis imaging using prior CT for scanning beam digital x-ray system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca

    2014-11-01

    Purpose: The scanning beam digital x-ray system (SBDX) is an inverse geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifact due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients' CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions. The dominant artifacts in the subtraction images are caused by mismatches between the real object and the prior CT volume. Conclusions: The proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.

  10. A Detailed Study of Sonar Tomographic Imaging

    DTIC Science & Technology

    2013-08-01

    …(BPA) to form an object image. As the data are collected radially about the axis of rotation, one computation method computes an inverse Fourier… images are not quite as sharp. It is concluded that polar BPA processing requires an appropriate choice of… attenuation factor to reduce the effect of the specular reflections, while for the 2DIFT BPA approach the degrading effect from these reflections is…

  11. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sano, Ryuichi; Iwama, Naofumi; Peterson, Byron J.

    A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using the Tikhonov regularization with the criterion of minimum generalized cross validation, which is the standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is available for the 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.

  12. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. Simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible range using the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased the flexibility in selecting the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of the method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  13. Detection of anomalies in ocean acoustic velocity structure and their effect in sea-bottom crustal deformation measurement: synthetic test and future suggestion

    NASA Astrophysics Data System (ADS)

    Nagai, S.; Eto, S.; Tadokoro, K.; Watanabe, T.

    2011-12-01

    On-land geodetic observations are not sufficient to monitor crustal activity in and around subduction zones, so seafloor geodetic observations are required. However, the present accuracy of seafloor geodetic observation is of the order of 1 cm or larger, which makes it difficult to detect departures from plate motion over short time intervals, that is, the plate coupling rate and its spatio-temporal variation. Our group has developed an observation system and methodology for seafloor geodesy that combines kinematic GPS with ocean acoustic ranging. One of the influencing factors is acoustic velocity change in the ocean, due to changes in temperature, ocean currents at different scales, and so on. A typical perturbation of acoustic velocity produces a travel-time difference of the order of 1 ms, which corresponds to a 1 m difference in ray length. We have investigated this effect on seafloor geodesy using both observed and synthetic data, in order to reduce the estimation error of benchmarker (transponder) positions and to develop our strategy for observation and analysis. In this paper, we focus on forward modeling of travel times of acoustic ranging data and on recovery tests using synthetic data, in comparison with observed results [Eto et al., 2011; this meeting]. The estimation procedure for benchmarker positions is similar to those used in earthquake location and seismic tomography, so we have applied methods from seismic studies, especially tomographic inversion. First, we use the one-dimensional velocity inversion with station corrections proposed by Kissling et al. [1994] to detect spatio-temporal changes in ocean acoustic velocity from data observed in the Suruga-Nankai Trough, Japan. These analyses have clarified some important features of the travel-time data [Eto et al., 2011]. Most of them can be explained by a small velocity anomaly at a depth of 300 m or shallower, through forward modeling of travel-time data using a simple velocity structure with a velocity anomaly. However, owing to the simple data acquisition procedure, we cannot precisely resolve the velocity anomaly (or anomalies) in space and time, that is, the size of the anomaly and its movement. As a next step, we demonstrate recovery of benchmarker positions by tomographic inversion using synthetic data that include anomalous travel times, to develop an approach for calculating benchmarker positions with high accuracy. In the tomographic inversion, we introduce constraints corresponding to realistic conditions. This step provides a newly developed system for detecting crustal deformation in seafloor geodesy and new findings for understanding such deformation in and around plate boundaries.

  14. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-encoding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
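
    A schematic of the restart logic, under strong simplifications: a linear toy forward operator, ±1 encoding weights, and one fresh encoding per segment rather than the paper's invariant-then-random scheme within each segment. All names are hypothetical.

```python
# Schematic restart loop (a sketch, not the paper's implementation): run a
# few L-BFGS iterations per segment, drawing fresh random +/-1 encoding
# weights at each restart so crosstalk does not accumulate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_shots, n_params = 64, 32
G = rng.standard_normal((n_shots, n_params))    # toy linear per-shot modelling
m_true = rng.standard_normal(n_params)
d = G @ m_true                                  # toy per-shot data

def encoded_misfit(m, w):
    # One super shot: weight shots by w, sum, and compare with summed data
    r = w @ (G @ m - d)
    return 0.5 * r ** 2, r * (w @ G)            # misfit value and gradient

m = np.zeros(n_params)
for segment in range(200):
    w = rng.choice([-1.0, 1.0], size=n_shots)   # fresh random shot-encoding
    res = minimize(encoded_misfit, m, args=(w,), jac=True,
                   method="L-BFGS-B", options={"maxiter": 5})
    m = res.x                                   # restart L-BFGS next segment
print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```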

  15. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    DTIC Science & Technology

    2017-01-05

    Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer. Yu-Ren Chien, Daryush D. Mehta, Jón Guðnason, Matías Zañartu, and Thomas F. Quatieri. Abstract—Glottal inverse filtering aims to… of inverse filtering performance has been challenging due to the practical difficulty in measuring the true glottal signals while speech signals are…

  16. Evaluation of reconstruction errors and identification of artefacts for JET gamma and neutron tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk; Tiseanu, Ion; Zoita, Vasile

    The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region that enables tomographic reconstruction. Due to the availability of only two projection angles and to the coarse sampling, tomographic inversion is a limited-data-set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced in the reconstruction process for tackling the limited data set (projection resampling, smoothness regularization depending on the magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of reconstructions and the accuracy of the variance calculation have been identified.

  17. A quantitative comparison of soil moisture inversion algorithms

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Kim, Y.

    2001-01-01

    This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.

  18. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) is developed to invert particle-size distributions from angular light-scattering data. This inverse technique does not require a priori information about the particle-size distribution. Numerical tests show that the technique can be successfully applied to inverse problems, with high stability in the presence of random noise and low sensitivity to the shape of the distribution. It has also been shown that the GA-based inverse technique is more efficient in its use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].

  19. Accelerated gradient based diffuse optical tomographic image reconstruction.

    PubMed

    Biswas, Samir Kumar; Rajan, K; Vasu, R M

    2011-01-01

    We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The Newton-based MOBIIR algorithm, which is generally used, requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time many fold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that the Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and in a tissue-mimicking phantom. The Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple, and they result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution. However, when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
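
    The key ingredient is Broyden's rank-one secant update, which replaces repeated Jacobian evaluations. A self-contained sketch on a toy two-variable forward model (illustrative only, not the DOT forward solver):

```python
# Broyden's rank-one secant update on a toy two-variable forward model:
# the Jacobian is evaluated once, then updated from (dx, dF) pairs only.
import numpy as np

def forward(x):
    # Stand-in nonlinear forward model F: R^2 -> R^2 (not the DOT model)
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 2])

x = np.array([1.0, 1.0])
F = forward(x)
J = np.array([[2 * x[0], 1.0],
              [np.cos(x[0]), 2 * x[1]]])        # exact Jacobian, computed once

for _ in range(15):
    dx = np.linalg.solve(J, -F)                 # Newton-like step
    x_new = x + dx
    F_new = forward(x_new)
    dF = F_new - F
    # "Good" Broyden update: J <- J + (dF - J dx) dx^T / (dx^T dx)
    J += np.outer(dF - J @ dx, dx) / (dx @ dx)
    x, F = x_new, F_new
print("solution:", x, " residual:", np.linalg.norm(F))
```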

  20. GPS Signal Feature Analysis to Detect Volcanic Plume on Mount Etna

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Aranzulla, Massimo; Scollo, Simona; Puglisi, Giuseppe; Imme', Giuseppina

    2014-05-01

    Volcanic ash produced during explosive eruptions can cause disruption to aviation operations and to populations living around active volcanoes. Detection of the volcanic plume is therefore a crucial issue for reducing the problems connected with its presence. Nowadays, volcanic plume detection is carried out using different approaches such as satellites, radars, and lidars. Recently, the capability of GPS to retrieve volcanic plumes has also been investigated, and tests applied to the explosive activity of Etna have demonstrated that GPS too may give useful information. In this work, we use the permanent and continuous GPS network of the Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo (Italy), which consists of 35 stations located around the volcano flanks. Data are processed with the GAMIT package developed by the Massachusetts Institute of Technology. We investigate the possibility of quantifying the volcanic plume through GPS signal features and of estimating its spatial distribution by means of a tomographic inversion algorithm. The method is tested on the volcanic plumes produced during the lava fountain of 4-5 September 2007, already used to assess whether weak explosive activity may affect GPS signals.

  1. A Hybrid Approach to Data Assimilation for Reconstructing the Evolution of Mantle Dynamics

    NASA Astrophysics Data System (ADS)

    Zhou, Quan; Liu, Lijun

    2017-11-01

    Quantifying past mantle dynamic processes represents a major challenge in understanding the temporal evolution of the solid earth. Mantle convection modeling with data assimilation is one of the most powerful tools to investigate the dynamics of plate subduction and mantle convection. Although various data assimilation methods, both forward and inverse, have been created, these methods all have limitations in their capabilities to represent the real earth. Pure forward models tend to miss important mantle structures due to the incorrect initial condition and thus may lead to incorrect mantle evolution. In contrast, pure tomography-based models cannot effectively resolve the fine slab structure and would fail to predict important subduction-zone dynamic processes. Here we propose a hybrid data assimilation approach that combines the unique power of the sequential and adjoint algorithms, which can properly capture the detailed evolution of the downgoing slab and the tomographically constrained mantle structures, respectively. We apply this new method to reconstructing mantle dynamics below the western U.S. while considering large lateral viscosity variations. By comparing this result with those from several existing data assimilation methods, we demonstrate that the hybrid modeling approach best recovers the realistic 4-D mantle dynamics.

  2. Time-resolved diffusion tomographic imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Liu, Feng (Inventor); Lax, Melvin (Inventor); Das, Bidyut B. (Inventor)

    1998-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then input into the following inverse reconstruction algorithm to form an image of the medium: $X^{(k+1)T} = [Y^T W + X^{(k)T} \Lambda][W^T W + \Lambda]^{-1}$, where $W$ is a matrix relating output at detector position $r_d$, at time $t$, to a source at position $r_s$, and $\Lambda$ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise to the fluctuations in the absorption (or diffusion) $X_j$ that we are trying to determine: $\Lambda_{ij} = \lambda_j \delta_{ij}$ with $\lambda_j = \langle \text{noise}^2 \rangle / \langle \Delta X_j \, \Delta X_j \rangle$. Here $Y$ is the data collected at the detectors, and $X^{(k)}$ is the kth iterate toward the desired absorption information.
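
    In modern notation this is a regularized least-squares iteration and can be transcribed directly. The sketch below uses random stand-ins for the weight matrix W and the detector data Y, and a constant lambda in place of the noise-to-fluctuation ratio.

```python
# Direct transcription of the iteration (shapes and data are stand-ins):
# X_{k+1}^T = [Y^T W + X_k^T L][W^T W + L]^{-1}, with diagonal L.
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_vox = 200, 50
W = rng.standard_normal((n_meas, n_vox))             # weight matrix
X_true = rng.random(n_vox)
Y = W @ X_true + 0.01 * rng.standard_normal(n_meas)  # detector data

lam = 0.1 * np.ones(n_vox)           # constant stand-in for the noise ratio
L = np.diag(lam)
A = np.linalg.inv(W.T @ W + L)       # [W^T W + L]^{-1}, fixed across iterations

X = np.zeros(n_vox)
for k in range(100):
    X = (Y @ W + X @ L) @ A          # the update, in row-vector form
print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```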

  3. A Hybrid Forward-Adjoint Data Assimilation Method for Reconstructing the Temporal Evolution of Mantle Dynamics

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Liu, L.

    2017-12-01

    Quantifying past mantle dynamic processes represents a major challenge in understanding the temporal evolution of the solid earth. Mantle convection modeling with data assimilation is one of the most powerful tools to investigate the dynamics of plate subduction and mantle convection. Although various data assimilation methods, both forward and inverse, have been created, these methods all have limitations in their capabilities to represent the real earth. Pure forward models tend to miss important mantle structures due to the incorrect initial condition and thus may lead to incorrect mantle evolution. In contrast, pure tomography-based models cannot effectively resolve the fine slab structure and would fail to predict important subduction-zone dynamic processes. Here we propose a hybrid data assimilation method that combines the unique power of the sequential and adjoint algorithms, which can properly capture the detailed evolution of the downgoing slab and the tomographically constrained mantle structures, respectively. We apply this new method to reconstructing mantle dynamics below the western U.S. while considering large lateral viscosity variations. By comparing this result with those from several existing data assimilation methods, we demonstrate that the hybrid modeling approach best recovers the realistic 4-D mantle dynamics.

  4. Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums

    NASA Astrophysics Data System (ADS)

    Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei

    2016-12-01

    Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low- and intermediate-level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) to avoid the poor detection accuracy of SGS and to shorten the detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels and tomography is used in the radiation reconstruction. The accuracy of STGS is verified by both experiments and simulations for 208-liter standard waste drums containing three types of nuclides. Cases of a point source or multiple point sources, and of uniform or nonuniform materials, are employed for comparison. The results show that STGS exhibits a large improvement in detection performance; the reconstruction error and statistical bias are reduced by one quarter to one third or less for most cases when compared with SGS.

  5. Development of a high-performance noise-reduction filter for tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Kao, Chien-Min; Pan, Xiaochuan

    2001-07-01

    We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion subject to a known class of source image intensity distributions.
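
    A generic stand-in for the idea (not the paper's derived filter): apply H = S/(S + N) row-wise to the sinogram, with an assumed low-pass prior spectrum S and a single noise knob, before ramp-filtered backprojection.

```python
# Generic stand-in (not the paper's derived filter): row-wise Wiener-like
# filtering H = S/(S + N) of a sinogram, with an assumed low-pass prior
# spectrum S and a single noise-level knob, before ramp-filtered FBP.
import numpy as np

def wiener_like_filter(sinogram, noise_level=0.5, corr_length=4.0):
    n = sinogram.shape[-1]
    f = np.fft.rfftfreq(n)
    S = 1.0 / (1.0 + (corr_length * n * f) ** 2)   # assumed signal spectrum
    H = S / (S + noise_level / n)                  # single regularization knob
    return np.fft.irfft(np.fft.rfft(sinogram, axis=-1) * H, n=n, axis=-1)

sino = np.random.poisson(100.0, size=(180, 128)).astype(float)  # toy noisy sinogram
filtered = wiener_like_filter(sino)                # then hand to ramp-filtered FBP
```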

  6. Amplitude inversion of the 2D analytic signal of magnetic anomalies through the differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Özyalın, Şenol; Sındırgı, Petek; Balkaya, Çağlayan; Göktürkler, Gökhan

    2017-12-01

    In this work, analytic signal amplitude (ASA) inversion of total-field magnetic anomalies has been achieved by differential evolution (DE), a population-based evolutionary metaheuristic algorithm. Using an elitist strategy, the applicability and effectiveness of the proposed inversion algorithm have been evaluated on anomalies due to both hypothetical model bodies and real isolated geological structures. Parameter tuning studies, relying mainly on choosing the optimum control parameters of the algorithm, have also been performed to enhance the performance of the proposed metaheuristic. Since ASAs of magnetic anomalies are independent of both the ambient field direction and the direction of magnetization of the causative sources in the two-dimensional (2D) case, inversions of synthetic noise-free and noisy single-model anomalies have produced satisfactory solutions, showing the practical applicability of the algorithm. Moreover, hypothetical studies using multiple model bodies have clearly shown that the DE algorithm is able to cope with complicated anomalies and with interference from neighbouring sources. The proposed algorithm has then been used to invert small-scale (120 m) and large-scale (40 km) magnetic profile anomalies of an iron deposit (Kesikköprü-Bala, Turkey) and a deep-seated magnetized structure (Sea of Marmara, Turkey), respectively, to determine the depths, geometries, and exact origins of the source bodies. The inversion studies have yielded geologically reasonable solutions that are also in good accordance with the results of normalized full gradient and Euler deconvolution techniques. Thus, we propose the use of DE not only for the amplitude inversion of 2D analytic signals of magnetic profile anomalies having induced or remanent magnetization effects, but also for low-dimensional data inversions in geophysics. A part of this paper was presented as an abstract at the 2nd International Conference on Civil and Environmental Engineering, 8-10 May 2017, Cappadocia-Nevşehir (Turkey).
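
    A minimal DE inversion in the same spirit, using SciPy's differential_evolution and a toy bell-shaped ASA model A(x) = k/((x - x0)^2 + h^2)^1.5; the model form, bounds, and noise level are assumptions, not the authors' setup.

```python
# Illustrative DE inversion (not the authors' code): recover amplitude
# factor, horizontal location, and depth of a 2-D source from a toy
# bell-shaped ASA profile A(x) = k / ((x - x0)^2 + h^2)^1.5.
import numpy as np
from scipy.optimize import differential_evolution

x = np.linspace(-100.0, 100.0, 201)                 # profile coordinates [m]

def asa(params, x):
    k, x0, h = params
    return k / ((x - x0) ** 2 + h ** 2) ** 1.5

true = (5.0e6, 10.0, 20.0)                          # assumed k, x0 [m], h [m]
data = asa(true, x)
data += 0.02 * data.max() * np.random.randn(x.size)  # 2% noise

def misfit(params):
    return np.sum((asa(params, x) - data) ** 2)

bounds = [(1e4, 1e8), (-50.0, 50.0), (1.0, 50.0)]   # search ranges
result = differential_evolution(misfit, bounds, seed=7, tol=1e-10)
print("recovered (k, x0, h):", result.x)
```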

  7. Computerized tomographic quantification of chronic obstructive pulmonary disease as the principal determinant of frontal P vector.

    PubMed

    Chhabra, Lovely; Sareen, Pooja; Gandagule, Amit; Spodick, David

    2012-04-01

    Verticalization of the P-wave axis is characteristic of chronic obstructive pulmonary disease (COPD). We studied the correlation of P-wave axis and computerized tomographically quantified emphysema in patients with COPD/emphysema. Individual correlation of P-wave axis with different structural types of emphysema was also studied. High-resolution computerized tomographic scans of 23 patients >45 years old with known COPD were reviewed to assess the type and extent of emphysema using computerized tomographic densitometric parameters. Electrocardiograms were then independently reviewed and the P-wave axis was calculated in customary fashion. Degree of the P vector (DOPV) and radiographic percent emphysematous area (RPEA) were compared for statistical correlation. The P vector and RPEA were also directly compared to the forced expiratory volume at 1 second. RPEA and the P vector had a significant positive correlation in all patients (r = +0.77, p <0.0001) but correlation was very strong in patients with predominant lower lobe emphysema (r = +0.89, p <0.001). Forced expiratory volume at 1 second and the P vector had almost a linear inverse correlation in predominantly lower lobe emphysema (r = -0.92, p <0.001). DOPV positively correlated with radiographically quantified emphysema. DOPV and RPEA were strong predictors of qualitative lung function in patients with predominantly lower lobe emphysema. In conclusion, a combination of high DOPV and predominantly lower lobe emphysema indicates severe obstructive lung dysfunction in patients with COPD. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity, or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess very high convergence speed and a good capacity to overcome local minima, and have been well applied in many areas (e.g., image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very little literature exists that addresses this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip, and rake angles) and source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require the approximation of a Green's function. The method directly interacts with a CPU-parallelized finite difference forward modelling engine, updating the model parameters under GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be well applied to WMI and has its unique advantages. Keywords: Micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation

  9. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.

  10. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  11. In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie

    2015-03-01

    Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality which enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol, and the possible involvement of ionizing radiation, and its overall robustness depends strongly on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white-light surface images and bioluminescent images of a mouse. The white-light images were applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. We then integrated the multi-view luminescent images based on this reconstruction and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model with a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.

  12. Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds

    NASA Technical Reports Server (NTRS)

    Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.

    2001-01-01

    Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing seems to be the most promising. Since most of the cloud algorithms proposed so far are for daytime use, utilizing solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11-, and 12-micron channels. Their algorithm, however, has the drawback that it is not able to treat temperature inversion cases. We update their algorithm, incorporating a new parameterization by Arduini et al. (1999) which is valid for temperature inversion cases. This updated algorithm has been applied to GOES satellite data, and reasonable retrieval results were obtained.

  13. TomoEED: Fast Edge-Enhancing Denoising of Tomographic Volumes.

    PubMed

    Moreno, J J; Martínez-Sánchez, A; Martínez, J A; Garzón, E M; Fernández, J J

    2018-05-29

    TomoEED is an optimized software tool for fast feature-preserving noise filtering of large 3D tomographic volumes on CPUs and GPUs. The tool is based on the anisotropic nonlinear diffusion method. It has been developed with special emphasis on reducing the computational demands, using different strategies from the algorithmic to the high-performance-computing perspectives. TomoEED manages to filter large volumes in a matter of minutes on standard computers. TomoEED has been developed in C. It is available for Linux platforms at http://www.cnb.csic.es/%7ejjfernandez/tomoeed. Contact: gmartin@ual.es, JJ.Fernandez@csic.es. Supplementary data are available at Bioinformatics online.

  14. DART, a platform for the creation and registration of cone beam digital tomosynthesis datasets.

    PubMed

    Sarkar, Vikren; Shi, Chengyu; Papanikolaou, Niko

    2011-04-01

    Digital tomosynthesis is an imaging modality that allows for tomographic reconstructions using only a fraction of the images needed for CT reconstruction. Because it provides tomographic images with a smaller imaging dose delivered to the patient, the technique offers much promise for use in patient positioning prior to radiation delivery. This paper describes a software environment developed to help in the creation of digital tomosynthesis image sets from digital portal images using three different reconstruction algorithms. The software then allows the tomograms to be used for patient positioning, or for dose recalculation if shifts are not applied, possibly as part of an adaptive radiotherapy regimen.

  15. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition, the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm is formulated which adaptively switches from the matrix-inversion-based iteration to a matrix-multiplication-based iteration due to Kovarik, and to Bjorck and Bowie. The decision when to switch is made using a condition estimator. This matrix-multiplication-rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
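
    For square nonsingular A, the inversion-based Newton iteration referred to above is X_{k+1} = (X_k + X_k^{-T})/2, which converges quadratically to the orthogonal polar factor U, with H = U^T A. A minimal version:

```python
# Minimal version of the inversion-based Newton iteration for a square
# nonsingular A: X_{k+1} = (X_k + X_k^{-T}) / 2 -> orthogonal factor U,
# and H = U^T A is the symmetric positive semi-definite factor (A = U H).
import numpy as np

def polar_newton(A, tol=1e-12, max_iter=100):
    X = A.copy()
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X).T)
        if np.linalg.norm(X_new - X, "fro") <= tol * np.linalg.norm(X_new, "fro"):
            X = X_new
            break
        X = X_new
    H = X.T @ A
    return X, 0.5 * (H + H.T)        # symmetrize to clean up rounding

A = np.random.default_rng(3).standard_normal((5, 5))
U, H = polar_newton(A)
print(np.linalg.norm(U @ H - A), np.linalg.norm(U.T @ U - np.eye(5)))
```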

  16. Validation of Spherically Symmetric Inversion by Use of a Tomographically Reconstructed Three-Dimensional Electron Density of the Solar Corona

    NASA Technical Reports Server (NTRS)

    Wang, Tongjiang; Davila, Joseph M.

    2014-01-01

    Determining the coronal electron density by the inversion of white-light polarized brightness (pB) measurements by coronagraphs is a classic problem in solar physics. An inversion technique based on the spherically symmetric geometry (spherically symmetric inversion, SSI) was developed in the 1950s and has been widely applied to interpret various observations. However, to date there is no study of the uncertainty estimation of this method. We here present a detailed assessment of this method using as a model a three-dimensional (3D) electron density in the corona from 1.5 to 4 solar radii, reconstructed by a tomography method from STEREO/COR1 observations during the solar minimum in February 2008 (Carrington Rotation, CR 2066). We first show in theory and observation that the spherically symmetric polynomial approximation (SSPA) method and the Van de Hulst inversion technique are equivalent. Then we assess the SSPA method using synthesized pB images from the 3D density model, and find that the SSPA density values are close to the model inputs for the streamer core near the plane of the sky (POS), with differences generally smaller than about a factor of two; the former has the lower peak but extends more in both longitudinal and latitudinal directions than the latter. We estimate that the SSPA method may resolve the coronal density structure near the POS with an angular resolution in longitude of about 50 deg. Our results confirm the suggestion that the SSI method is applicable to the solar minimum streamer (belt), as stated in some previous studies. In addition, we demonstrate that the SSPA method can be used to reconstruct the 3D coronal density, roughly in agreement with the reconstruction by tomography for a period of low solar activity (CR 2066). We suggest that the SSI method is complementary to the 3D tomographic technique in some cases, given that the development of the latter is still an ongoing research effort.

  17. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As in other optimization approaches, the search efficiency of a genetic algorithm is vital to finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect the search process during the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly when searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of one computation, owing to the randomness in a genetic algorithm procedure. These advantages are demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
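
    A compact sketch of the hybrid-encoding idea on a one-parameter toy inversion: crossover acts on a 16-bit binary code while mutation perturbs the decimal value. Selection and population handling are simplified relative to HEGA.

```python
# Sketch of the hybrid-encoding idea on a one-unknown toy inversion:
# crossover in a 16-bit binary code, mutation in the decimal code.
# Selection and population handling are simplified relative to HEGA.
import numpy as np

rng = np.random.default_rng(5)
BITS, LO, HI = 16, -10.0, 10.0

def encode(x):                       # float in [LO, HI] -> 16-bit array
    q = int(round((x - LO) / (HI - LO) * (2 ** BITS - 1)))
    return np.array([(q >> b) & 1 for b in range(BITS)], dtype=np.uint8)

def decode(bits):
    q = sum(int(b) << i for i, b in enumerate(bits))
    return LO + q / (2 ** BITS - 1) * (HI - LO)

def crossover(xa, xb):               # two-point crossover on the binary code
    ba, bb = encode(xa), encode(xb)
    c1, c2 = sorted(rng.choice(np.arange(1, BITS), 2, replace=False))
    child = ba.copy()
    child[c1:c2] = bb[c1:c2]
    return decode(child)

def mutate(x, sigma=0.02):           # mutation acts on the decimal value
    return float(np.clip(x + sigma * (HI - LO) * rng.standard_normal(), LO, HI))

f = lambda x: (x - 3.7) ** 2 + 0.5 * np.sin(5 * x)   # toy misfit to minimize
pop = rng.uniform(LO, HI, 40)
for gen in range(100):
    parents = pop[np.argsort([f(x) for x in pop])][:20]  # truncation selection
    pop = np.array([mutate(crossover(*rng.choice(parents, 2)))
                    for _ in range(40)])
    pop[0] = parents[0]                                  # elitism
best = min(pop, key=f)
print("best model:", best, " misfit:", f(best))
```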

  18. Digital tomosynthesis (DTS) with a Circular X-ray tube: Its image reconstruction based on total-variation minimization and the image characteristics

    NASA Astrophysics Data System (ADS)

    Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.

    2013-09-01

    In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploited the sparsity of the image with substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using some figures of merit, including the universal-quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.

  19. GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging

    DOE PAGES

    Pryor, Alan; Yang, Yongsoo; Rana, Arjun; ...

    2017-09-05

    Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.

  20. GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pryor, Alan; Yang, Yongsoo; Rana, Arjun

    Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.

  1. Validation Studies of the Accuracy of Various SO2 Gas Retrievals in the Thermal InfraRed (8-14 μm)

    NASA Astrophysics Data System (ADS)

    Gabrieli, A.; Wright, R.; Lucey, P. G.; Porter, J. N.; Honniball, C.; Garbeil, H.; Wood, M.

    2016-12-01

    Quantifying hazardous SO2 in the atmosphere and in volcanic plumes is important for public health and volcanic eruption prediction. Remote sensing measurements of the spectral radiance of plumes contain information on the abundance of SO2. However, in order to convert such measurements into SO2 path-concentrations, reliable inversion algorithms are needed. Various techniques can be employed to derive SO2 path-concentrations. The first approach employs a Partial Least Squares Regression model trained using MODTRAN5 simulations for a variety of plume and atmospheric conditions; radiances at many spectral wavelengths (8-14 μm) are used in the algorithm. The second algorithm uses measurements inside and outside the SO2 plume. Measurements in the plume-free region (background sky) make it possible to remove background atmospheric conditions and any instrumental effects. After atmospheric and instrumental effects are removed, MODTRAN5 is used to fit the SO2 spectral feature and obtain SO2 path-concentrations. The two inversion algorithms described above can be compared with the inversion algorithm for SO2 retrievals developed by Prata and Bernardo (2014). Their approach employs three wavelengths to characterize the plume temperature, the atmospheric background, and the SO2 path-concentration. The accuracy of these various techniques requires further investigation in terms of the effects of different atmospheric background conditions. Validating these inversion algorithms is challenging because ground-truth measurements are very difficult to obtain. However, if the three separate inversion algorithms provide similar SO2 path-concentrations for actual measurements with various background conditions, then this increases confidence in the results. Measurements of sky radiance when looking through SO2-filled gas cells were collected with a Thermal Hyperspectral Imager (THI) under various atmospheric background conditions. These data were processed using the three inversion approaches, which were tested for convergence on the known SO2 gas-cell path-concentrations. For this study, the inversion algorithms were modified to account for the gas-cell configuration. Results from these studies will be presented, as well as results from SO2 gas plume measurements at Kīlauea volcano, Hawai'i.
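
    The first approach can be sketched with an off-the-shelf PLS regression; the training spectra below are random stand-ins, not MODTRAN5 runs, and the band shape, noise, and concentration range are invented.

```python
# Sketch of the first approach with random stand-ins for the MODTRAN5
# training set: PLS regression from 8-14 um radiance spectra to SO2
# path-concentration. Band shape, noise, and units are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_train, n_bands = 500, 120
conc = rng.uniform(0.0, 2000.0, n_train)            # SO2 path-concentration [ppm m]
band = np.exp(-np.linspace(0.0, 3.0, n_bands))      # toy SO2 absorption shape
radiance = np.outer(conc, band) * 1e-4 + 0.05 * rng.standard_normal((n_train, n_bands))

pls = PLSRegression(n_components=5)
pls.fit(radiance, conc)                             # train on simulated spectra
print(pls.predict(radiance[:3]).ravel(), conc[:3])  # sanity check on training rows
```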

  2. Full-Wave Tomographic and Moment Tensor Inversion Based on 3D Multigrid Strain Green’s Tensor Databases

    DTIC Science & Technology

    2014-04-30

    …high-grade metamorphic rocks on the southern slope of the Himalaya is imaged as a band of high-velocity anomaly… velocity structures closely follow the geological features. As an indication of resolution, the ductile extrusion of high-grade metamorphic rocks on…

  3. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period); the cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
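
    For reference, the conventional three-parameter fit that IG fitting generalizes models the signal as S(TI) = A - B exp(-TI/T1*), with the Look-Locker correction T1 = T1*(B/A - 1). A sketch with assumed inversion times and tissue values:

```python
# The conventional three-parameter model for comparison (a sketch with
# assumed inversion times and tissue values): S(TI) = A - B exp(-TI/T1*),
# then the Look-Locker correction T1 = T1* (B/A - 1).
import numpy as np
from scipy.optimize import curve_fit

def molli(TI, A, B, T1star):
    return A - B * np.exp(-TI / T1star)

TI = np.array([100., 180., 260., 1100., 1180., 1260., 2100., 3100., 4100.])  # ms
A_t, B_t, T1_t = 1.0, 1.9, 1200.0                    # assumed tissue values
signal = molli(TI, A_t, B_t, T1_t / (B_t / A_t - 1.0))
signal += 0.01 * np.random.randn(TI.size)            # measurement noise

(A, B, T1star), _ = curve_fit(molli, TI, signal, p0=(1.0, 2.0, 800.0))
print("T1 =", T1star * (B / A - 1.0), "ms")          # ~1200 ms
```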

  4. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  5. 3D Cosmic Ray Muon Tomography from an Underground Tunnel

    DOE PAGES

    Guardincerri, Elena; Rowe, Charlotte Anne; Schultz-Fellenz, Emily S.; ...

    2017-03-31

    Here, we present an underground cosmic ray muon tomographic experiment imaging 3D density of overburden, part of a joint study with differential gravity. Muon data were acquired at four locations within a tunnel beneath Los Alamos, New Mexico, and used in a 3D tomographic inversion to recover the spatial variation in the overlying rock–air interface, and compared with a priori knowledge of the topography. Densities obtained exhibit good agreement with preliminary results of the gravity modeling, which will be presented elsewhere, and are compatible with values reported in the literature. The modeled rock–air interface matches that obtained from LIDAR within 4 m, our resolution, over much of the model volume. This experiment demonstrates the power of cosmic ray muons to image shallow geological targets using underground detectors, whose development as borehole devices will be an important new direction of passive geophysical imaging.

  6. 3D Cosmic Ray Muon Tomography from an Underground Tunnel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guardincerri, Elena; Rowe, Charlotte Anne; Schultz-Fellenz, Emily S.

    Here, we present an underground cosmic ray muon tomographic experiment imaging 3D density of overburden, part of a joint study with differential gravity. Muon data were acquired at four locations within a tunnel beneath Los Alamos, New Mexico, and used in a 3D tomographic inversion to recover the spatial variation in the overlying rock–air interface, and compared with a priori knowledge of the topography. Densities obtained exhibit good agreement with preliminary results of the gravity modeling, which will be presented elsewhere, and are compatible with values reported in the literature. The modeled rock–air interface matches that obtained from LIDAR within 4 m, our resolution, over much of the model volume. This experiment demonstrates the power of cosmic ray muons to image shallow geological targets using underground detectors, whose development as borehole devices will be an important new direction of passive geophysical imaging.

  7. 3D Cosmic Ray Muon Tomography from an Underground Tunnel

    NASA Astrophysics Data System (ADS)

    Guardincerri, Elena; Rowe, Charlotte; Schultz-Fellenz, Emily; Roy, Mousumi; George, Nicolas; Morris, Christopher; Bacon, Jeffrey; Durham, Matthew; Morley, Deborah; Plaud-Ramos, Kenie; Poulson, Daniel; Baker, Diane; Bonneville, Alain; Kouzes, Richard

    2017-05-01

    We present an underground cosmic ray muon tomographic experiment imaging 3D density of overburden, part of a joint study with differential gravity. Muon data were acquired at four locations within a tunnel beneath Los Alamos, New Mexico, and used in a 3D tomographic inversion to recover the spatial variation in the overlying rock-air interface, and compared with a priori knowledge of the topography. Densities obtained exhibit good agreement with preliminary results of the gravity modeling, which will be presented elsewhere, and are compatible with values reported in the literature. The modeled rock-air interface matches that obtained from LIDAR within 4 m, our resolution, over much of the model volume. This experiment demonstrates the power of cosmic ray muons to image shallow geological targets using underground detectors, whose development as borehole devices will be an important new direction of passive geophysical imaging.

  8. Applications of Collisional Radiative Modeling of Helium and Deuterium for Image Tomography Diagnostic of Te, Ne, and ND in the DIII-D Tokamak

    NASA Astrophysics Data System (ADS)

    Munoz Burgos, J. M.; Brooks, N. H.; Fenstermacher, M. E.; Meyer, W. H.; Unterberg, E. A.; Schmitz, O.; Loch, S. D.; Balance, C. P.

    2011-10-01

    We apply new atomic modeling techniques to helium and deuterium for diagnostics in the divertor and scrape-off layer regions. Analysis of tomographically inverted images is useful for validating detachment prediction models and power balances in the divertor. We apply tomographic image inversion from fast tangential cameras of helium and Dα emission at the divertor in order to obtain 2D profiles of Te, Ne, and ND (neutral deuterium density profiles). The accuracy of the atomic models for He I will be cross-checked against Thomson scattering measurements of Te and Ne. This work summarizes several current developments and applications of atomic modeling for diagnostics at the DIII-D tokamak. Supported in part by the US DOE under DE-AC05-06OR23100, DE-FC02-04ER54698, DE-AC52-07NA27344, and DE-AC05-00OR22725.

  9. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
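
    For orientation, a minimal numpy sketch of the classic ML-EM update that these scoring alternatives seek to improve upon is given below; the system matrix and data are toy values, not clinical projection data.

        import numpy as np

        # Minimal ML-EM iteration for emission tomography (toy sizes). The
        # multiplicative update x <- x * A^T(y / Ax) / A^T(1) is the baseline
        # that Fisher-scoring (FS) alternatives aim to accelerate.
        rng = np.random.default_rng(0)
        A = rng.random((64, 32))           # toy projection (system) matrix
        x_true = rng.random(32) + 0.1
        y = rng.poisson(A @ x_true)        # Poisson-distributed projection data

        x = np.ones(32)                    # nonnegative initial image
        sens = A.T @ np.ones(64)           # sensitivity image A^T(1)
        for _ in range(50):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / sens      # multiplicative EM update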

  10. Fast Nonlinear Generalized Inversion of Gravity Data with Application to the Three-Dimensional Crustal Density Structure of Sichuan Basin, Southwest China

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Meng, Xiaohong; Li, Fang

    2017-11-01

    Generalized inversion is one of the important steps in the quantitative interpretation of gravity data. With an appropriate algorithm and parameters, it gives a view of the subsurface that characterizes different geological bodies. However, generalized inversion of gravity data is time consuming because of the large number of data points and model cells adopted, and incorporating various kinds of prior information as constraints makes this situation worse. In the work discussed in this paper, a method for fast nonlinear generalized inversion of gravity data is proposed. The fast multipole method is employed for forward modelling. The inversion objective function is established with a weighted data misfit function along with a model objective function, and the total objective function is solved by a data-space algorithm. Moreover, a depth weighting factor is used to improve the depth resolution of the result, and a bound constraint is incorporated by a transfer function to limit the model parameters to a reliable range. The matrix inversion is accomplished by a preconditioned conjugate gradient method. With the above algorithm, equivalent density vectors can be obtained, and interpolation is performed to obtain the final density model on the fine mesh in the model domain. Testing on synthetic gravity data demonstrated that the proposed method is faster than the conventional generalized inversion algorithm at producing an acceptable solution for the gravity inversion problem. The newly developed inversion method was also applied to invert the gravity data collected over Sichuan basin, southwest China. The established density structure helps in understanding the crustal structure of Sichuan basin and provides a reference for further oil and gas exploration in this area.
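
    As a concrete illustration of two of the ingredients named above, the sketch below applies a depth weighting factor of the common form w(z) = (z + z0)^(-beta/2) and a crude clipping bound constraint to a toy regularized least-squares gravity inversion; the fast multipole forward modelling and the data-space solver of the paper are not reproduced, and all numbers are illustrative.

        import numpy as np

        # Toy depth-weighted, bound-constrained gravity inversion sketch.
        rng = np.random.default_rng(1)
        n_data, n_cells = 40, 100
        G = rng.random((n_data, n_cells)) / n_cells   # toy forward (kernel) matrix
        d_obs = G @ rng.random(n_cells)               # toy observed gravity data

        z = np.linspace(10.0, 1000.0, n_cells)        # cell depths [m]
        w = (z + 100.0) ** -1.5                       # depth weighting, beta = 3
        w /= w.max()                                  # normalize for conditioning
        W = np.diag(w)

        lam = 1e-3                                    # regularization weight
        # minimize ||G m - d||^2 + lam ||W m||^2 via the normal equations
        m = np.linalg.solve(G.T @ G + lam * W.T @ W, G.T @ d_obs)
        m = np.clip(m, 0.0, 1.0)  # crude stand-in for the transfer-function bound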

  11. Electrical Tomography for seismic hazard monitoring: state-of-the-art and future challenges.

    NASA Astrophysics Data System (ADS)

    Lapenna, Vincenzo; Piscitelli, Sabatino

    2010-05-01

    The Self-Potential (passive) and DC resistivity (active) methods were long considered ancillary and/or secondary tools in geophysical exploration; simplified procedures for data processing and purely qualitative techniques for data inversion were the main drawbacks. Recently, innovative algorithms for tomographic data inversion, new models for describing the electrokinetic phenomena associated with subsurface fluid migration, and modern technologies for field surveying have rapidly transformed these geoelectrical methods into powerful tools for geo-hazard monitoring. These technological and methodological improvements open the way for a wide spectrum of interesting and challenging applications: mapping of the water content in landslide bodies, identification of fluid and gas emissions in volcanic areas, and the search for earthquake precursors. In this work we briefly summarize the current state-of-the-art and analyse new applications of Electrical Tomography in seismic hazard monitoring. An overview of the most interesting results obtained in different areas worldwide (i.e., the Mediterranean Basin, California, Japan) is presented and discussed. To date, by combining novel techniques for data inversion and new strategies for field data acquisition, it is possible to obtain high-resolution electrical images of complex geological structures. One of the key challenges for the near future will be the integration of active (DC resistivity) and passive (Self-Potential) measurements to obtain 2D, 3D and 4D electrical tomographies able to follow the spatial and temporal dynamics of electrical parameters (i.e., resistivity, self-potential). This approach could reduce the ambiguities related to the interpretation of anomalous SP signals in seismically active areas and their applicability to short-term earthquake prediction. Resistivity imaging can be applied to illuminate fault geometry, while SP imaging is the key instrument for capturing the fingerprints of the electrokinetic phenomena potentially generated in focal regions.

  12. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  13. A gradient based algorithm to solve inverse plane bimodular problems of identification

    NASA Astrophysics Data System (ADS)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

    This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  14. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  15. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of a model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.

  16. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth weighting matrix, and other methods. To address the problems posed by large data volumes in exploration, we present a parallel algorithm, and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and on real data from Vinton Dome, we obtain improved results, and the improved inversion algorithm is shown to be effective and feasible. The performance of the parallel algorithm we designed is better than that of other CUDA-based implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.
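
    The two scaling metrics named in the abstract are simple ratios; a sketch with made-up timings might look like:

        # Multi-GPU scaling metrics: speedup S_p = T_serial / T_parallel and
        # parallel efficiency E_p = S_p / p for p GPUs.
        def speedup(t_serial: float, t_parallel: float) -> float:
            return t_serial / t_parallel

        def efficiency(t_serial: float, t_parallel: float, n_gpus: int) -> float:
            return speedup(t_serial, t_parallel) / n_gpus

        # e.g. hypothetical timings: 400 s serial vs 2 s on 4 GPUs
        print(speedup(400.0, 2.0))        # 200.0
        print(efficiency(400.0, 2.0, 4))  # 50.0 relative to the serial baseline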

  17. Comparison of trend analyses for Umkehr data using new and previous inversion algorithms

    NASA Technical Reports Server (NTRS)

    Reinsel, Gregory C.; Tam, Wing-Kuen; Ying, Lisa H.

    1994-01-01

    Ozone vertical profile Umkehr data for layers 3-9 obtained from 12 stations, using both previous and new inversion algorithms, were analyzed for trends. The trends estimated for the Umkehr data from the two algorithms were compared using two data periods, 1968-1991 and 1977-1991. Both nonseasonal and seasonal trend models were fitted. The overall annual trends are found to be significantly negative, of the order of -5% per decade, for layers 7 and 8 using both inversion algorithms. The largest negative trends occur in these layers under the new algorithm, whereas in the previous algorithm the most negative trend occurs in layer 9. The trend estimates, both annual and seasonal, are substantially different between the two algorithms mainly for layers 3, 4, and 9, where trends from the new algorithm data are about 2% per decade less negative, with less appreciable differences in layers 7 and 8. The trend results from the two data periods are similar, except for layer 3 where trends become more negative, by about -2% per decade, for 1977-1991.

  18. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

    Electrical (DC) and transient electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished with individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface. Vertical electrical sounding (VES) is good at marking resistive structures, while TEM sounding is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed to exploit the best of both methods. The program uses the CRS (Controlled Random Search) algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (the cities of Bebedouro and Pirassununga), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of VES and TEM data is similar to the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
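
    For readers unfamiliar with CRS, the sketch below implements the basic controlled random search loop (after Price) on a toy two-parameter misfit; in the paper the objective would be the combined VES + TEM data misfit, which is not reproduced here.

        import numpy as np

        # Minimal Controlled Random Search (CRS): keep a pool of random models,
        # repeatedly reflect a random point through the centroid of other pool
        # members, and replace the worst model when the trial is better.
        def crs_minimize(f, lo, hi, n_pop=50, iters=2000, seed=0):
            rng = np.random.default_rng(seed)
            dim = lo.size
            pop = lo + (hi - lo) * rng.random((n_pop, dim))
            cost = np.array([f(p) for p in pop])
            for _ in range(iters):
                idx = rng.choice(n_pop, size=dim + 1, replace=False)
                centroid = pop[idx[:-1]].mean(axis=0)
                trial = 2.0 * centroid - pop[idx[-1]]      # reflection step
                if np.all(trial >= lo) and np.all(trial <= hi):
                    c = f(trial)
                    worst = np.argmax(cost)
                    if c < cost[worst]:                    # controlled replacement
                        pop[worst], cost[worst] = trial, c
            best = np.argmin(cost)
            return pop[best], cost[best]

        # toy usage: recover two "layer" parameters from a quadratic misfit
        target = np.array([100.0, 5.0])
        p_best, c_best = crs_minimize(lambda p: np.sum((p - target) ** 2),
                                      lo=np.array([1.0, 0.1]),
                                      hi=np.array([500.0, 50.0]))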

  19. Accommodating Chromosome Inversions in Linkage Analysis

    PubMed Central

    Chen, Gary K.; Slaten, Erin; Ophoff, Roel A.; Lange, Kenneth

    2006-01-01

    This work develops a population-genetics model for polymorphic chromosome inversions. The model precisely describes how an inversion changes the nature of and approach to linkage equilibrium. The work also describes algorithms and software for allele-frequency estimation and linkage analysis in the presence of an inversion. The linkage algorithms implemented in the software package Mendel estimate recombination parameters and calculate the posterior probability that each pedigree member carries the inversion. Application of Mendel to eight Centre d'Étude du Polymorphisme Humain pedigrees in a region containing a common inversion on 8p23 illustrates its potential for providing more-precise estimates of the location of an unmapped marker or trait gene. Our expanded cytogenetic analysis of these families further identifies inversion carriers and increases the evidence of linkage. PMID:16826515

  20. First results from a full-waveform inversion of the African continent using Salvus

    NASA Astrophysics Data System (ADS)

    van Herwaarden, D. P.; Afanasiev, M.; Krischer, L.; Trampert, J.; Fichtner, A.

    2017-12-01

    We present the initial results from an elastic full-waveform inversion (FWI) of the African continent which is melded together within the framework of the Collaborative Seismic Earth Model (CSEM) project. The continent of Africa is one of the most geophysically interesting regions on the planet. More specifically, Africa contains the Afar Depression, which is the only place on Earth where incipient seafloor spreading is sub-aerially exposed, along with other anomalous features such as the topography in the south, and several smaller surface expressions such as the Cameroon Volcanic Line and Congo Basin. Despite its significance, relatively few tomographic images exist of Africa, and, as a result, the debate on the geophysical origins of Africa's anomalies is rich and ongoing. Tomographic images of Africa present unique challenges due to uneven station coverage: while tectonically active areas such as the Afar rift are well sampled, much of the continent exhibits a severe lack of seismic stations. And, while Africa is mostly surrounded by tectonically active spreading plate boundaries, the interior of the continent is seismically quiet. To mitigate such issues, our simulation domain is extended to include earthquakes occurring in the South Atlantic and along the western edge of South America. Waveform modelling and inversion is performed using Salvus, a flexible and high-performance software suite based on the spectral-element method. Recently acquired recordings from the AfricaArray and NARS seismic networks are used to complement data obtained from global networks. We hope that this new model presents a fresh high-resolution image of African geodynamic structure, and helps advance the debate regarding the causative mechanisms of its surface anomalies.

  1. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. In the fast processing and interpretation of large-scale high-precision data, the use of the graphics processing unit (GPU) and of preconditioning methods is very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique with the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied to the inversion of noise-contaminated synthetic data to prove its suitability for the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times that of a serial program using a 2.0 GHz central processing unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
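
    A CPU-only sketch of the SSOR-preconditioned conjugate gradient step is given below for orientation; the incomplete Cholesky part and the GPU parallelization reported in the paper are not reproduced, and the matrix is a toy SPD operator.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        main = 4.0 * np.ones(n)
        off = -np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csr")  # toy SPD matrix
        b = np.ones(n)

        omega = 1.2
        D = sp.diags(A.diagonal())
        L = sp.tril(A, k=-1, format="csr")
        lower = (D / omega + L).tocsr()          # D/w + L
        upper = lower.T.tocsr()                  # D/w + U (A is symmetric)
        scale = (2.0 - omega) / omega * A.diagonal()

        def ssor_apply(r):
            # z = (D/w + U)^-1 * ((2-w)/w) D * (D/w + L)^-1 r
            u = spla.spsolve_triangular(lower, r, lower=True)
            return spla.spsolve_triangular(upper, scale * u, lower=False)

        M = spla.LinearOperator((n, n), matvec=ssor_apply)
        x, info = spla.cg(A, b, M=M)             # SSOR-preconditioned CG solve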

  2. Tomographic Imaging of the Sun's Interior

    NASA Technical Reports Server (NTRS)

    Kosovichev, A. G.

    1996-01-01

    A new method is presented for determining the three-dimensional sound-speed structure and flow velocities in the solar convection zone by inversion of the acoustic travel-time data recently obtained by Duvall and coworkers. The initial inversion results reveal large-scale subsurface structures and flows related to the active regions, and are important for understanding the physics of solar activity and large-scale convection. The results provide evidence of a zonal structure below the surface in the low-latitude area of the magnetic activity. Strong converging downflows, up to 1.2 km/s, and a substantial excess of the sound speed are found beneath growing active regions. In a decaying active region, there is evidence for lower than average sound speed and for upwelling of plasma.

  3. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in digital processing of two-dimensional computed tomography images is to identify the contour of component elements. This paper deals with the collective work of specialists in medicine and in applied mathematics and computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.

  4. Natural pixel decomposition for computational tomographic reconstruction from interferometric projection: algorithms and comparison

    NASA Astrophysics Data System (ADS)

    Cha, Don J.; Cha, Soyoung S.

    1995-09-01

    A computational tomographic technique, termed the variable grid method (VGM), has been developed for improving interferometric reconstruction of flow fields under ill-posed data conditions of restricted scanning and incomplete projection. The technique is based on natural pixel decomposition, that is, division of a field into variable grid elements. The performances of two algorithms, the original and revised versions, are compared to investigate the effects of the data redundancy criteria and seed element forming schemes. Tests of the VGMs are conducted through computer simulation of experiments and reconstruction of fields with a limited view angle of 90 degrees. The temperature fields at two horizontal sections of a thermal plume of two interacting isothermal cubes, produced by a finite numerical code, are analyzed as test fields. The computer simulation demonstrates the superiority of the revised VGM over both the conventional fixed grid method and the original VGM. Both the maximum and average reconstruction errors are reduced appreciably. The reconstruction shows substantial improvement in the regions densely scanned by probing rays. These regions are usually of interest in engineering applications.

  5. A New Inversion-Based Algorithm for Retrieval of Over-Water Rain Rate from SSM/I Multichannel Imagery

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Stettner, David R.

    1994-01-01

    This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.

  6. VLSI architectures for computing multiplications and inverses in GF(2^m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.

    1985-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  7. VLSI architectures for computing multiplications and inverses in GF(2^m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.

    1983-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  8. VLSI architectures for computing multiplications and inverses in GF(2^m).

    PubMed

    Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S

    1985-08-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.
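
    The inversion architecture in these three records rests on the identity a^(-1) = a^(2^m - 2), so an inverter needs only squarings (a cyclic shift in a normal basis) plus Massey-Omura multiplications. The sketch below demonstrates the same square-and-multiply idea in software, using a polynomial basis of GF(2^8) with the AES modulus purely as an illustrative choice; it is not the normal-basis hardware design of the papers.

        # GF(2^8) arithmetic with inversion by repeated squaring (Fermat).
        M_BITS, MOD = 8, 0b1_0001_1011          # modulus x^8 + x^4 + x^3 + x + 1

        def gf_mul(a: int, b: int) -> int:
            """Carry-less multiply with reduction modulo MOD."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if a >> M_BITS:
                    a ^= MOD
                b >>= 1
            return r

        def gf_inv(a: int) -> int:
            """a^(2^m - 2) by square-and-multiply (requires a != 0)."""
            result, base, e = 1, a, (1 << M_BITS) - 2
            while e:
                if e & 1:
                    result = gf_mul(result, base)
                base = gf_mul(base, base)  # squaring: a cyclic shift in normal basis
                e >>= 1
            return result

        assert gf_mul(0x53, gf_inv(0x53)) == 1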

  9. Acoustic Inversion in Optoacoustic Tomography: A Review

    PubMed Central

    Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel

    2013-01-01

    Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
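
    As a structural illustration of the first algorithm class mentioned (time-domain back-projection), the sketch below performs a bare delay-and-sum reconstruction for a 2-D linear detection geometry; it omits the solid-angle weighting of the published universal back-projection formulae, and the sinogram here is synthetic noise, used only to show the indexing.

        import numpy as np

        # Delay-and-sum: each image point accumulates every detector signal
        # sampled at the corresponding acoustic time of flight.
        c, fs = 1500.0, 40e6                       # sound speed [m/s], sampling [Hz]
        det_y = np.linspace(-0.01, 0.01, 64)       # detector positions on x = 0
        sig = np.random.default_rng(7).normal(size=(64, 2048))  # toy sinogram

        xs = np.linspace(0.005, 0.02, 100)[:, None]
        ys = np.linspace(-0.01, 0.01, 100)[None, :]
        img = np.zeros((100, 100))
        for i, dy in enumerate(det_y):
            dist = np.sqrt(xs ** 2 + (ys - dy) ** 2)
            idx = np.clip((dist / c * fs).astype(int), 0, sig.shape[1] - 1)
            img += sig[i, idx]                     # sum channel i at its delays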

  10. Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution

    DTIC Science & Technology

    1989-11-01

    to image 3 larger scenes, this problem becomes more important. A byproduct of this investigation is a duality theorem which is a generalization of the ... well-known Projection-Slice Theorem. The second problem proposed is that of imaging a rapidly-spinning object, for example in inverse SAR mode ... slices is absent. There is a possible connection of the word to the Projection-Slice Theorem, but, as seen in Chapter 4, even this is absent in the

  11. Seismic Calibration of Group 1 IMS Stations in Eastern Asia for Improved IDC Event Location

    DTIC Science & Technology

    2006-04-01

    database has been assembled and delivered to the SMR (formerly CMR) Research and Development Support Services (RDSS) data archive. This database ... Data used in these tomographic inversions have been collected into a uniform database and delivered to the RDSS at the SMR. Extensive testing of these ... complex 3-D velocity models is based on a finite difference approximation to the eikonal equation developed by Podvin and Lecomte (1991) and

  12. Calculation method of water injection forward modeling and inversion process in oilfield water injection network

    NASA Astrophysics Data System (ADS)

    Liu, Long; Liu, Wei

    2018-04-01

    A forward modeling and inversion algorithm is adopted to determine the water injection plan in an oilfield water injection network. The main idea of the algorithm is as follows: first, the oilfield water injection network is inversely calculated and the pumping station demand flow is obtained. Then, forward modeling is carried out to judge whether all water injection wells meet the requirements of injection allocation. If all water injection wells meet the requirements, the calculation stops; otherwise, the demanded injection allocation flow rate is reduced by a certain step size for those wells that do not meet the requirements, and the next iteration is started. The algorithm does not need to be embedded into the overall water injection network system algorithm and can be realized easily. An iterative method is used, which is suitable for computer programming. Experimental results show that the algorithm is fast and accurate.
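
    A schematic of this loop on a toy network is sketched below; the real network hydraulics are replaced by a hypothetical pro-rata capacity-sharing forward model, and all names and numbers are illustrative only.

        import numpy as np

        def forward_delivered(request, capacity):
            """Toy forward model: if total demand exceeds the pump capacity,
            flows are delivered pro rata; otherwise each well gets its request."""
            total = request.sum()
            return request if total <= capacity else request * capacity / total

        target = np.array([120.0, 80.0, 60.0])  # injection allocation targets [m^3/h]
        request = target.copy()                 # demanded flows, to be reduced
        capacity, step, tol = 230.0, 2.0, 1e-6

        while True:
            demand = request.sum()              # inverse step: station demand flow
            delivered = forward_delivered(request, capacity)
            unmet = delivered < request - tol   # wells not meeting their allocation
            if not unmet.any():
                break                           # all wells satisfied: plan found
            request[unmet] = np.maximum(request[unmet] - step, 0.0)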

  13. Preliminary result of P-wave speed tomography beneath North Sumatera region

    NASA Astrophysics Data System (ADS)

    Jatnika, Jajat; Nugraha, Andri Dian; Wandono

    2015-04-01

    The structure of P-wave speed beneath the North Sumatra region was determined using P-wave arrival times compiled by MCGA for the period January 2009 to December 2012, combined with PASSCAL data for February to May 1995. In total, there are 2,246 local earthquake events with 10,666 P-wave phases from 63 seismic stations around the study area. Ray tracing to estimate travel times from source to receiver was performed by applying the pseudo-bending method, while the damped LSQR method was used for the tomographic inversion. Based on an assessment of ray coverage and of the earthquake and station distributions, horizontal grid nodes were set up with 30×30 km² spacing inside the study area and 80×80 km² spacing outside it. The tomographic inversion results show low Vp anomalies beneath the Toba caldera complex region and around the Sumatra Fault Zone (SFZ). These features are consistent with previous studies. The low Vp anomalies beneath the Toba caldera complex are observed around Mt. Pusuk Bukit at depths of 5 km down to 100 km. Our interpretation is that these anomalies may be associated with ascending hot materials from subduction processes at depths of 80 km down to 100 km. The Vp structure obtained from local tomography provides valuable information for enhancing the understanding of tectonics and volcanism in the study area.
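
    The damped LSQR step used here is available directly in SciPy; a toy sketch (with a random sparse ray-path matrix standing in for the pseudo-bending ray tracer output) is:

        import numpy as np
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(2)
        n_rays, n_cells = 300, 120
        # toy ray-path matrix: rows hold ray lengths per grid cell
        G = rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.1)
        ds_true = rng.normal(0.0, 0.05, n_cells)             # slowness perturbation
        t_res = G @ ds_true + rng.normal(0.0, 0.01, n_rays)  # travel-time residuals

        # damp > 0 solves min ||G ds - t||^2 + damp^2 ||ds||^2 (damped LSQR)
        ds_est = lsqr(G, t_res, damp=0.5)[0]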

  14. Preliminary result of P-wave speed tomography beneath North Sumatera region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jatnika, Jajat; Indonesian Meteorological, Climatological and Geophysical Agency; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    2015-04-24

    The structure of P-wave speed beneath the North Sumatra region was determined using P-wave arrival times compiled by MCGA for the period January 2009 to December 2012, combined with PASSCAL data for February to May 1995. In total, there are 2,246 local earthquake events with 10,666 P-wave phases from 63 seismic stations around the study area. Ray tracing to estimate travel times from source to receiver was performed by applying the pseudo-bending method, while the damped LSQR method was used for the tomographic inversion. Based on an assessment of ray coverage and of the earthquake and station distributions, horizontal grid nodes were set up with 30×30 km² spacing inside the study area and 80×80 km² spacing outside it. The tomographic inversion results show low Vp anomalies beneath the Toba caldera complex region and around the Sumatra Fault Zone (SFZ). These features are consistent with previous studies. The low Vp anomalies beneath the Toba caldera complex are observed around Mt. Pusuk Bukit at depths of 5 km down to 100 km. Our interpretation is that these anomalies may be associated with ascending hot materials from subduction processes at depths of 80 km down to 100 km. The Vp structure obtained from local tomography provides valuable information for enhancing the understanding of tectonics and volcanism in the study area.

  15. Evidence for the contemporary magmatic system beneath Long Valley Caldera from local earthquake tomography and receiver function analysis

    USGS Publications Warehouse

    Seccia, D.; Chiarabba, C.; De Gori, P.; Bianchi, I.; Hill, D.P.

    2011-01-01

    We present a new P wave and S wave velocity model for the upper crust beneath Long Valley Caldera obtained using local earthquake tomography and receiver function analysis. We computed the tomographic model using both a graded inversion scheme and a traditional approach. We complement the tomographic Vp model with a teleseismic receiver function model based on data from broadband seismic stations (MLAC and MKV) located on the SE and SW margins of the resurgent dome inside the caldera. The inversions resolve (1) a shallow, high-velocity P wave anomaly associated with the structural uplift of the resurgent dome; (2) an elongated, WNW striking low-velocity anomaly (8%–10% reduction in Vp) at a depth of 6 km (4 km below mean sea level) beneath the southern section of the resurgent dome; and (3) a broad, low-velocity volume (–5% reduction in Vp and as much as 40% reduction in Vs) in the depth interval 8–14 km (6–12 km below mean sea level) beneath the central section of the caldera. The two low-velocity volumes partially overlap the geodetically inferred inflation sources that drove uplift of the resurgent dome associated with caldera unrest between 1980 and 2000, and they likely reflect the ascent path for magma or magmatic fluids into the upper crust beneath the caldera.

  16. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    NASA Astrophysics Data System (ADS)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.
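
    The two-level structure (an outer float-encoded genetic loop over zone parameters wrapped around an inner linearized fit) can be sketched as below; both "models" are toys, and the paper's tool-response equations are not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 1.0, 20)
        data_obs = 2.5 * x + 0.7                     # synthetic observed log

        def inner_misfit(zone):
            # stand-in for the linearized depth-by-depth inversion: here just
            # the misfit of a linear response with the candidate zone parameters
            return np.sum((zone[0] * x + zone[1] - data_obs) ** 2)

        pop = rng.uniform([0, 0], [10, 5], size=(30, 2))  # float-encoded chromosomes
        for _ in range(100):
            fit = np.array([inner_misfit(z) for z in pop])
            elite = pop[np.argsort(fit)[:10]]             # keep the 10 best
            pa = elite[rng.integers(0, 10, 20)]
            pb = elite[rng.integers(0, 10, 20)]
            w = rng.random((20, 1))
            children = w * pa + (1 - w) * pb + rng.normal(0, 0.05, (20, 2))
            pop = np.vstack([elite, children])            # blend crossover + mutation
        best = pop[np.argmin([inner_misfit(z) for z in pop])]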

  17. Ultrasonic guided wave tomography of pipes: A development of new techniques for the nondestructive evaluation of cylindrical geometries and guided wave multi-mode analysis

    NASA Astrophysics Data System (ADS)

    Leonard, Kevin Raymond

    This dissertation concentrates on the development of two new tomographic techniques that enable wide-area inspection of pipe-like structures. By envisioning a pipe as a plate wrapped around upon itself, the previous Lamb Wave Tomography (LWT) techniques are adapted to cylindrical structures. Helical Ultrasound Tomography (HUT) uses Lamb-like guided wave modes transmitted and received by two circumferential arrays in a single crosshole geometry. Meridional Ultrasound Tomography (MUT) creates the same crosshole geometry with a linear array of transducers along the axis of the cylinder. However, even though these new scanning geometries are similar to plates, additional complexities arise because they are cylindrical structures. First, because it is a single crosshole geometry, the wave vector coverage is poorer than in the full LWT system. Second, since waves can travel in both directions around the circumference of the pipe, modes can also constructively and destructively interfere with each other. These complexities necessitate improved signal processing algorithms to produce accurate and unambiguous tomographic reconstructions. Consequently, this work also describes a new algorithm for improving the extraction of multi-mode arrivals from guided wave signals. Previous work has relied solely on the first arriving mode for the time-of-flight measurements. In order to improve the LWT, HUT and MUT systems reconstructions, improved signal processing methods are needed to extract information about the arrival times of the later arriving modes. Because each mode has different through-thickness displacement values, they are sensitive to different types of flaws, and the information gained from the multi-mode analysis improves understanding of the structural integrity of the inspected material. Both tomographic frequency compounding and mode sorting algorithms are introduced. It is also shown that each of these methods improve the reconstructed images both qualitatively and quantitatively.

  18. Adaptive Inverse Control for Rotorcraft Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1985-01-01

    This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (N cycles per rotor revolution, with N the number of blades) helicopter fuselage vibration by means of adaptive inverse control. A frequency domain locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to relaxation selection, and permits faster and more stable vibration reduction than choosing the stability gain matrix large and the control relaxation term small. It is shown that the best selection of the stability gain matrix elements and the amount of control relaxation is basically a compromise between slow, stable convergence and fast convergence with an increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration unless the signal averaging method presented is incorporated into the algorithm.
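
    A drastically simplified numerical sketch of the two phases described (LMS identification of the harmonic transfer matrix, then relaxed inverse control) is given below; the plant is a random toy matrix, not rotor data, and the frequency-domain bookkeeping of the thesis is omitted.

        import numpy as np

        rng = np.random.default_rng(4)
        T_true = rng.normal(size=(4, 4))        # unknown transfer matrix
        v0 = rng.normal(size=4)                 # uncontrolled N/Rev vibration

        # learning phase: probe with random inputs and LMS-update the estimate
        T_hat, mu = np.zeros((4, 4)), 0.05      # mu plays the stability-gain role
        for _ in range(2000):
            u = rng.normal(size=4)
            dv = T_true @ u                     # measured change in vibration
            e = dv - T_hat @ u                  # prediction error
            T_hat += mu * np.outer(e, u)        # LMS update

        # control phase: relaxed adaptive inverse regulator
        u, relax = np.zeros(4), 0.5             # relax = control relaxation
        for _ in range(50):
            v = T_true @ u + v0                 # measured vibration
            u -= relax * np.linalg.solve(T_hat, v)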

  19. Mathematical modeling of tomographic scanning of cylindrically shaped test objects

    NASA Astrophysics Data System (ADS)

    Kapranov, B. I.; Vavilova, G. V.; Volchkova, A. V.; Kuznetsova, I. S.

    2018-05-01

    The paper formulates mathematical relationships that describe the length of the radiation absorption band in the test object for the first generation tomographic scan scheme. A cylindrically shaped test object containing an arbitrary number of standard circular irregularities is used to perform mathematical modeling. The obtained mathematical relationships are corrected with respect to chemical composition and density of the test object material. The equations are derived to calculate the resulting attenuation radiation from cobalt-60 isotope when passing through the test object. An algorithm to calculate the radiation flux intensity is provided. The presented graphs describe the dependence of the change in the γ-quantum flux intensity on the change in the radiation source position and the scanning angle of the test object.
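
    The core of such a model is the exponential attenuation law applied along each ray, with chord lengths through the circular irregularities computed from the scan geometry. A minimal sketch for one horizontal ray and one inclusion (toy dimensions and attenuation coefficients, not the cobalt-60 values of the paper) is:

        import numpy as np

        def chord_length(y_ray, yc, r):
            """Chord of a horizontal ray through a circle (center yc, radius r)."""
            h = abs(y_ray - yc)
            return 2.0 * np.sqrt(r * r - h * h) if h < r else 0.0

        I0, R = 1.0, 50.0              # source intensity, object radius [mm]
        mu_obj, mu_inc = 0.02, 0.08    # attenuation coefficients [1/mm]
        y = 5.0                        # ray offset from the object axis
        L = chord_length(y, 0.0, R)    # path length inside the test object
        l = chord_length(y, 10.0, 8.0) # path inside one circular irregularity
        # Beer-Lambert law, assuming the inclusion lies wholly inside the object
        I = I0 * np.exp(-(mu_obj * (L - l) + mu_inc * l))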

  20. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one respect: it requires quite exhaustive computations, which prohibits its use in huge inverse problems like global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make large inverse tasks such as those mentioned above manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetry of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  1. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
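
    The core idea (replace the dense QR/SVD solve of each damped Levenberg-Marquardt step with a Krylov-subspace solve) can be sketched on a small curve-fitting problem as below; the paper's subspace recycling across damping parameters is not reproduced.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        x = np.linspace(0.1, 2.0, 50)
        m_true = np.array([0.5, 1.5])
        d = np.exp(-np.outer(x, m_true)).sum(axis=1)      # toy observations

        def residual(m):
            return np.exp(-np.outer(x, m)).sum(axis=1) - d

        def jacobian(m):
            return -x[:, None] * np.exp(-np.outer(x, m))  # dr_i/dm_j

        m, lam = np.array([1.0, 1.0]), 1e-2
        for _ in range(20):
            r, J = residual(m), jacobian(m)
            # LM step: min ||J dm + r||^2 + lam ||dm||^2, solved by Krylov (LSQR)
            dm = lsqr(J, -r, damp=np.sqrt(lam))[0]
            if np.linalg.norm(residual(m + dm)) < np.linalg.norm(r):
                m, lam = m + dm, lam * 0.5    # accept step, relax damping
            else:
                lam *= 10.0                   # reject step, increase damping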

  2. An Improved 3D Joint Inversion Method of Potential Field Data Using Cross-Gradient Constraint and LSQR Method

    NASA Astrophysics Data System (ADS)

    Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz

    2018-06-01

    The joint interpretation of two sets of geophysical data related to the same source is an appropriate way to decrease the non-uniqueness of the resulting models during the inversion process. Among the available methods, an approach that combines the two datasets using the cross-gradient constraint is efficient. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the location and extent of the anomaly of interest. In this paper, the first aim is to speed up the required calculations by substituting the least-squares QR (LSQR) method for singular value decomposition, so that the large-scale kernel matrix of the 3D inversion is solved more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of the initial parameters, is used in the proposed inversion algorithm. The algorithm was developed in the Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset from the Jalalabad mine in southeast Iran was tested. The results obtained by the improved joint 3D inversion with the cross-gradient and compactness constraints showed a mineralized zone in the depth interval of about 110-300 m, which is in good agreement with the available drilling data. This further confirms the accuracy and improvement of the developed inversion algorithm.
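
    The cross-gradient constraint itself requires t = ∇m1 × ∇m2 = 0, i.e., the two models' gradients must be parallel (or one must vanish) everywhere. A discrete 2-D check is straightforward:

        import numpy as np

        def cross_gradient(m1, m2, dx=1.0, dz=1.0):
            """Out-of-plane component of grad(m1) x grad(m2) on a common grid
            (axis 0 taken as z, axis 1 as x)."""
            g1z, g1x = np.gradient(m1, dz, dx)
            g2z, g2x = np.gradient(m2, dz, dx)
            return g1x * g2z - g1z * g2x

        m1 = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 2, 32))
        m2 = 3.0 * m1 + 1.0                # structurally identical model
        t = cross_gradient(m1, m2)         # ~0 everywhere: constraint satisfied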

  3. Seismic imaging: From classical to adjoint tomography

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Gu, Y. J.

    2012-09-01

    Seismic tomography has been a vital tool in probing the Earth's internal structure and enhancing our knowledge of dynamical processes in the Earth's crust and mantle. While various tomographic techniques differ in data types utilized (e.g., body vs. surface waves), data sensitivity (ray vs. finite-frequency approximations), and choices of model parameterization and regularization, most global mantle tomographic models agree well at long wavelengths, owing to the presence and typical dimensions of cold subducted oceanic lithospheres and hot, ascending mantle plumes (e.g., in central Pacific and Africa). Structures at relatively small length scales remain controversial, though, as will be discussed in this paper, they are becoming increasingly resolvable with the fast expanding global and regional seismic networks and improved forward modeling and inversion techniques. This review paper aims to provide an overview of classical tomography methods, key debates pertaining to the resolution of mantle tomographic models, as well as to highlight recent theoretical and computational advances in forward-modeling methods that spearheaded the developments in accurate computation of sensitivity kernels and adjoint tomography. The first part of the paper is devoted to traditional traveltime and waveform tomography. While these approaches established a firm foundation for global and regional seismic tomography, data coverage and the use of approximate sensitivity kernels remained as key limiting factors in the resolution of the targeted structures. In comparison to classical tomography, adjoint tomography takes advantage of full 3D numerical simulations in forward modeling and, in many ways, revolutionizes the seismic imaging of heterogeneous structures with strong velocity contrasts. For this reason, this review provides details of the implementation, resolution and potential challenges of adjoint tomography. Further discussions of techniques that are presently popular in seismic array analysis, such as noise correlation functions, receiver functions, inverse scattering imaging, and the adaptation of adjoint tomography to these different datasets highlight the promising future of seismic tomography.

  4. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom as well as a head CT. The outcome is compared against filtered backprojection and total variation reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
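
    Structurally, the method iterates x_{k+1} = x_k + Λ_θ(∇ data-fit, ∇ regulariser), with Λ_θ a trained convolutional network. The unlearned analogue below replaces Λ_θ with fixed scalar weights, which shows the iteration skeleton only; the learned component and the tomographic forward operator are not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.random((80, 64))                  # toy linear forward operator
        y = A @ rng.random(64) + rng.normal(0, 0.01, 80)

        x = np.zeros(64)
        alpha, beta = 1e-3, 1e-2
        for _ in range(200):
            grad_data = A.T @ (A @ x - y)         # gradient of data discrepancy
            grad_reg = x                          # gradient of Tikhonov regulariser
            # the paper learns this combination with a CNN; fixed weights here
            x = x - alpha * (grad_data + beta * grad_reg)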

  5. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  6. Wide angle reflection effects on the uncertainty in layered models travel times tomography

    NASA Astrophysics Data System (ADS)

    Majdanski, Mariusz; Bialas, Sebastian; Trzeciak, Maciej; Gaczyński, Edward; Maksym, Andrzej

    2015-04-01

    Traveltime tomography inversions of multi-phase layered models can be realised in several ways depending on the inversion path. Inverting the shape of the boundaries based on reflection data and the velocity field based on refractions can be done jointly or sequentially. We analyse an optimal inversion path based on uncertainty analysis of the final models. Additionally, we propose to use post-critical wide-angle reflections in tomographic inversions for more reliable results, especially in the deeper parts of each layer. We focus on the effects of using hard-to-pick post-critical reflections on the final model uncertainty. Our study is performed using data collected during a standard vibroseis and explosive-source seismic reflection experiment focused on shale gas reservoir characterisation, carried out by the Polish Oil and Gas Company. Our data were gathered by standalone single-component stations deployed along the whole length of the 20 km long profile, resulting in significantly longer offsets. These piggyback recordings yielded good quality wide-angle refraction and reflection data clearly observable up to offsets of 12 km.

  7. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  8. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view, because the product of a matrix and its proposed inverse is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices obtained from BIJTs, they can be applied in areas such as 3GPP physical-layer permutation matrix design for ultra mobile broadband, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
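
    The defining Jacket property, that the inverse is the element-wise reciprocal transpose scaled by the order, is easy to verify numerically. The sketch below builds an order-2^k example from Kronecker products of the order-2 Hadamard matrix (a simple Jacket matrix); it is illustrative only and does not reproduce the paper's 3^k, 5^k or 6^k constructions.

    ```python
    import numpy as np

    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])   # order-2 Hadamard matrix, a simple Jacket matrix

    def jacket(k):
        """Order-2^k matrix via Kronecker products of the order-2 basis matrix."""
        J = H2
        for _ in range(k - 1):
            J = np.kron(H2, J)
        return J

    J = jacket(3)                       # order N = 8
    N = J.shape[0]
    J_inv = (1.0 / J).T / N             # element-wise reciprocal, transposed, scaled by 1/N
    print(np.allclose(J @ J_inv, np.eye(N)))   # True: the defining Jacket property holds
    ```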

  9. A Fast Hermite Transform

    PubMed Central

    Leibon, Gregory; Rockmore, Daniel N.; Park, Wooram; Taintor, Robert; Chirikjian, Gregory S.

    2008-01-01

    We present algorithms for fast and stable approximation of the Hermite transform of a compactly supported function on the real line, attainable via an application of a fast algebraic algorithm for computing sums associated with a three-term relation. Trade-offs between approximation in bandlimit (in the Hermite sense) and size of the support region are addressed. Numerical experiments are presented that show the feasibility and utility of our approach. Generalizations to any family of orthogonal polynomials are outlined. Applications to various problems in tomographic reconstruction, including the determination of protein structure, are discussed. PMID:20027202
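
    A naive quadrature version of the Hermite transform, built directly on the three-term recurrence for the orthonormal Hermite functions, serves as a reference against which a fast algorithm would be measured. The sketch below is O(N*M) and therefore precisely what the paper's fast algorithm avoids; the signal and truncation order are arbitrary choices.

    ```python
    import numpy as np

    def hermite_functions(x, n_max):
        """Orthonormal Hermite functions psi_0..psi_n_max via the three-term recurrence."""
        psi = np.empty((n_max + 1, x.size))
        psi[0] = np.pi ** -0.25 * np.exp(-0.5 * x**2)
        if n_max >= 1:
            psi[1] = np.sqrt(2.0) * x * psi[0]
        for n in range(1, n_max):
            psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * psi[n]
                          - np.sqrt(n / (n + 1.0)) * psi[n - 1])
        return psi

    # Naive O(N*M) Hermite transform of a compactly supported function.
    x = np.linspace(-8.0, 8.0, 2001)
    f = np.exp(-x**2) * (np.abs(x) < 4)        # toy compactly supported signal
    psi = hermite_functions(x, 32)
    coeffs = psi @ f * (x[1] - x[0])           # quadrature for c_n = <f, psi_n>
    recon = coeffs @ psi                       # truncated Hermite expansion
    print(float(np.max(np.abs(recon - f))))    # small reconstruction error
    ```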

  10. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with zero-order Tikhonov regularization has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave, and that of the refracted shear wave to the total wave transmitted into bone, in order to calculate the absorbed power field and then reconstruct the temperature distribution in the muscle and bone regions from a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrade the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results also demonstrate an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on singular-value decomposition analysis, the optimal sensor position for a case utilizing only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.
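
    Zero-order Tikhonov regularization as used here reduces to a damped normal-equations solve. A minimal sketch follows, with a random matrix standing in for the sensor-to-power-field sensitivities and an invented noise level.

    ```python
    import numpy as np

    def tikhonov_zero_order(A, b, lam):
        """Minimize ||A x - b||^2 + lam^2 ||x||^2 (zero-order Tikhonov)."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

    # Few sensors (rows), many unknowns (columns), as in the hyperthermia setting.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((8, 40))     # invented sensitivity matrix
    x_true = rng.standard_normal(40)
    b = A @ x_true + 0.05 * rng.standard_normal(8)   # noisy "temperature" data

    for lam in (1e-3, 1e-1, 1.0):        # the paper tunes this regularization parameter
        x_hat = tikhonov_zero_order(A, b, lam)
        print(lam, float(np.linalg.norm(A @ x_hat - b)))
    ```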

  11. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    NASA Astrophysics Data System (ADS)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration, and joint inversion of the two datasets can enhance inversion accuracy. In this paper, we describe an improved multi-objective genetic algorithm (NSGA-SBX) and apply it to two numerical tests to verify its advantages. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and that in the case of inconsistent discontinuities, joint inversion retains the advantages of the individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin shows low S-wave velocity and high conductivity in the shallow crust, probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low-velocity layers and a high-conductivity layer are observed in the middle to lower crust, probably indicating mid-crustal channel flow. Along the Longmenshan fault zone, a high-conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by elevated fluid content in the fault zone.
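
    The multi-objective machinery underlying NSGA-type algorithms rests on Pareto dominance between the two data misfits. Below is a minimal sketch of extracting the non-dominated set from a population, with random numbers standing in for the magnetotelluric and surface-wave misfits.

    ```python
    import numpy as np

    def pareto_front(costs):
        """Indices of non-dominated rows of an (n_models, 2) misfit array (both minimized)."""
        n = costs.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            if keep[i]:
                dominated = (np.all(costs >= costs[i], axis=1)
                             & np.any(costs > costs[i], axis=1))
                keep[dominated] = False
        return np.flatnonzero(keep)

    rng = np.random.default_rng(3)
    misfits = rng.random((200, 2))       # columns: MT misfit, surface-wave misfit (invented)
    front = pareto_front(misfits)
    print(len(front), "non-dominated models")
    ```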

  12. Tomographic reconstruction of tracer gas concentration profiles in a room with the use of a single OP-FTIR and two iterative algorithms: ART and PWLS.

    PubMed

    Park, D Y; Fessler, J A; Yost, M G; Levine, S P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 x 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (195 pixels) exceeds the number of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, the penalized weighted least-squares (PWLS) method, was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of the highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded in some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, make the system applicable to both environmental and industrial settings.
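
    ART is a sequence of Kaczmarz projections onto the hyperplane defined by each ray, while PWLS instead minimizes a weighted least-squares data term plus a roughness penalty. A minimal ART sketch at the dimensions quoted above (56 rays, 195 pixels) follows, with a random matrix standing in for the actual beam geometry.

    ```python
    import numpy as np

    def art(A, b, n_sweeps=50, relax=0.5):
        """Algebraic reconstruction technique: cyclic Kaczmarz projections.

        A: (n_rays, n_pixels) path-length matrix; b: path-integral concentrations.
        Each update projects the image onto the hyperplane defined by one ray.
        """
        x = np.zeros(A.shape[1])
        row_norm2 = np.einsum('ij,ij->i', A, A)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norm2[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
        return x

    # 56 rays, 195 pixels as quoted above: a heavily underdetermined system.
    rng = np.random.default_rng(4)
    A = rng.random((56, 195))            # invented stand-in for the beam geometry
    x_true = rng.random(195)
    x_hat = art(A, A @ x_true)
    print(float(np.linalg.norm(A @ x_hat - A @ x_true)))   # data are fit; the image is non-unique
    ```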

  14. Optical tomography by means of regularized MLEM

    NASA Astrophysics Data System (ADS)

    Majer, Charles L.; Urbanek, Tina; Peter, Jörg

    2015-09-01

    To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior; hence, the strategy is very robust against measurement noise and also avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT data of the phantom were acquired to provide structural context. The phantom was fitted with various fluorochrome inclusions (Cy5.5), for each of which optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data and matched with the optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother than those from classical MLEM without regularization. Once the floating default prior is included, this smoothing bias is significantly reduced.
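
    The core of the approach is the multiplicative Richardson-Lucy (MLEM) update. The sketch below adds a crude Gaussian-smoothed "floating default" blend in place of the paper's entropic regularization; the system matrix, blend weight and dimensions are invented, so this illustrates the iteration pattern rather than the authors' scheme.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def richardson_lucy(A, y, n_iter=50, prior_sigma=2.0, blend=0.1):
        """Multiplicative Richardson-Lucy (MLEM) updates with a smoothed floating default.

        A: (n_meas, n_vox) nonnegative system matrix; y: nonnegative measurements.
        The Gaussian-filtered blend is a crude stand-in for the entropic prior.
        """
        x = np.full(A.shape[1], y.mean())
        sens = A.sum(axis=0)                         # back-projection of ones
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
            x = (1 - blend) * x + blend * gaussian_filter(x, prior_sigma)
        return x

    rng = np.random.default_rng(5)
    A = rng.random((120, 60))
    x_true = np.zeros(60)
    x_true[25:30] = 5.0                              # a bright "inclusion"
    y = rng.poisson(A @ x_true).astype(float)        # Poisson-noisy data
    print(richardson_lucy(A, y)[20:35].round(2))
    ```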

  15. Seismic Tomography Of The Caucasus Region

    NASA Astrophysics Data System (ADS)

    Javakhishvili, Z.; Godoladze, T.; Gok, R.; Elashvili, M.

    2007-12-01

    The Caucasus is one of the most active segments of the Alpine-Himalayan collision belt. We used catalog data from the Georgian Seismic Network to calculate reference 1-D and 3-D P-velocity models of the Caucasus region. The analog recording period in Georgia was quite long, with about 17,000 events reported in the catalog between 1956 and 1990. We carefully eliminated arrivals affected by picking ambiguities in the analog data and applied station time corrections. We chose arrivals with comparably low residuals between observed and calculated travel times (<1 s) and limited our data to a minimum of 10 P arrivals and a maximum azimuthal gap of 180 degrees. Finally, 475 events with magnitude greater than 1.5, recorded by 84 stations, were selected. We obtained good resolution down to 70 km. First, we used a coupled 1-D inversion algorithm (VELEST) to calculate the velocity model and the relocations. The same model convergence is observed for the mid and lower crust, whereas the upper layer (0-10 km) is sensitive to the starting model; we therefore used vertical seismic profiling data from boreholes in Georgia to fix the upper-layer velocities. We relocated all events in the region using the new reference 1-D velocity model. The 3-D coupled inversion algorithm (SIMULPS14) was then applied using the 1-D reference model as a starting model. We observed very large horizontal shifts (up to 50 km) and clusters of events that correlate well with quarry blasts from the Tkibuli mining area. We applied a resolution test to estimate the spatial resolution of the tomographic images; the results indicate that the input model is well reconstructed for all depth slices except the shallowest layer (depth = 5 km). The Moho geometry beneath the Caucasus has been determined reliably by previous geophysical studies, which show a relatively large depth variation in this region, from 28 to 61 km, and our tomography result for the uppermost mantle (50 km) reflects this depth variation of the Moho discontinuity.

  16. A general rough-surface inversion algorithm: Theory and application to SAR data

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed: what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which is discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are compared in terms of their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over bare soil and agricultural fields; results are shown and compared to ground-truth measurements from these areas. The strength of this general approach is that it can easily be modified for use with any scattering model without changing any of the inversion steps; for the same reason, it is not limited to rough surfaces and can be applied to any parameterized scattering process.
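
    The estimation step is an ordinary nonlinear least-squares fit of a few surface parameters to multifrequency backscatter. The sketch below uses scipy's trust-region least_squares with a made-up closed-form forward model in place of the actual SPM expressions; the model, parameter names and values are all assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def forward(params, freqs):
        """Invented closed-form stand-in for an SPM-like forward model: maps
        (rms height s, correlation length l, dielectric eps) to backscatter in dB.
        These are NOT the actual SPM equations."""
        s, l, eps = params
        return 10.0 * np.log10(eps * (s * freqs) ** 2 / (1.0 + (freqs * l) ** 2))

    freqs = np.array([1.25, 5.3, 9.6])          # roughly L-, C-, X-band (GHz)
    p_true = np.array([0.02, 0.10, 12.0])
    rng = np.random.default_rng(6)
    data = forward(p_true, freqs) + 0.1 * rng.standard_normal(freqs.size)

    res = least_squares(lambda p: forward(p, freqs) - data,
                        x0=[0.01, 0.05, 5.0],
                        bounds=([1e-3, 1e-3, 1.0], [1.0, 1.0, 40.0]))
    print(res.x)                                 # estimated surface parameters
    ```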

  17. Global-scale Joint Body and Surface Wave Tomography with Vertical Transverse Isotropy for Seismic Monitoring Applications

    NASA Astrophysics Data System (ADS)

    Simmons, Nathan; Myers, Steve

    2017-04-01

    We continue to develop more advanced models of Earth's global seismic structure, with specific focus on improving predictive capabilities for future seismic events. The most recent version of the model combines high-quality P and S body-wave travel times with surface-wave group and phase velocities in a joint (simultaneous) inversion to tomographically image Earth's crust and mantle. The new model adds anisotropy (vertical transverse isotropy), which is necessitated by the addition of surface waves to the tomographic data set. Like previous versions, the new model consists of 59 surfaces and 1.6 million model nodes from the surface to the core-mantle boundary, overlying a 1-D outer and inner core model. The model architecture is aspherical, and we directly incorporate Earth's expected hydrostatic shape (ellipticity and mantle stretching). We also explicitly honor surface undulations, including the Moho, several internal crustal units, and the upper-mantle transition zone, as determined by previous studies. The explicit Earth model design allows for accurate travel-time computation using our unique 3-D ray-tracing algorithms, capable of tracing more than 20 distinct seismic phases, including crustal, regional, teleseismic, and core phases. Thus, we can now incorporate certain secondary (and sometimes exotic) phases into source location determination and other analyses. New work on model uncertainty quantification assesses the error covariance of the model; when completed, this will enable calculation of path-specific uncertainty estimates for travel times computed using our previous model (LLNL-G3D-JPS), which is available to the monitoring and broader research community, and we encourage external evaluation and validation. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  18. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

    The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually non-linear and high-dimensional, with a complex search space that may be riddled with many local minima and irregular objective functions. We investigate here the performance and application of a genetic algorithm to the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform crossover and low mutation rates are examined. The optimal parameter settings and performance were determined from the convergence of the testing error with generation number. The fitness function is the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The crossover probability is 0.9-0.95, and mutation was tested at a probability of 0.01. Application of the genetic algorithm to synthetic data shows that it efficiently recovers the acoustic impedance section. Keywords: seismic inversion, acoustic impedance, genetic algorithm, fitness function, crossover, mutation.
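
    The fitness evaluation at the heart of such a GA is cheap: map a candidate impedance profile to normal-incidence reflectivity, convolve with a wavelet, and take the L2 misfit against the observed trace. A minimal sketch follows, with an invented Ricker wavelet and a two-block impedance model.

    ```python
    import numpy as np

    def synthetic_trace(impedance, wavelet):
        """Impedance -> normal-incidence reflectivity -> convolved synthetic trace."""
        z = impedance
        refl = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
        return np.convolve(refl, wavelet, mode='same')

    def fitness(candidate, observed, wavelet):
        """L2 sample-to-sample misfit, the GA objective (lower is better)."""
        return float(np.linalg.norm(synthetic_trace(candidate, wavelet) - observed))

    # Ricker wavelet and a toy two-block impedance model (values invented).
    t = np.arange(-0.05, 0.05, 0.002)
    fm = 30.0
    ricker = (1 - 2 * (np.pi * fm * t) ** 2) * np.exp(-(np.pi * fm * t) ** 2)
    z_true = np.concatenate([np.full(50, 4.0e6), np.full(50, 6.5e6)])
    obs = synthetic_trace(z_true, ricker)
    print(fitness(z_true, obs, ricker))   # 0.0 for the true model
    ```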

  19. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem that becomes more complex with cooperative tasks, because these introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, i.e., a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of the mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed on several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm obtains better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; and (2) inversion mutation performs best among the tested mutation operators on problems without cooperative tasks, while the swap-inversion combination performs best on problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
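
    The four basic mutation operators compared in the paper act on a permutation encoding of the task sequence. A minimal sketch of each follows; the task list and random seed are arbitrary.

    ```python
    import random

    def swap(seq):
        s = seq[:]; i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    def insertion(seq):
        s = seq[:]; i, j = random.sample(range(len(s)), 2)
        s.insert(j, s.pop(i))
        return s

    def inversion(seq):
        s = seq[:]; i, j = sorted(random.sample(range(len(s)), 2))
        s[i:j + 1] = reversed(s[i:j + 1])
        return s

    def displacement(seq):
        s = seq[:]; i, j = sorted(random.sample(range(len(s)), 2))
        block = s[i:j + 1]
        del s[i:j + 1]
        k = random.randrange(len(s) + 1)          # reinsert the block elsewhere
        return s[:k] + block + s[k:]

    random.seed(0)
    tour = list(range(10))                         # a task sequence for one robot
    for op in (swap, insertion, inversion, displacement):
        print(op.__name__, op(tour))
    ```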

  20. Looking for the Signal: A guide to iterative noise and artefact removal in X-ray tomographic reconstructions of porous geomaterials

    NASA Astrophysics Data System (ADS)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-07-01

    X-ray micro- and nanotomography has evolved into a quantitative analysis tool, rather than a mere qualitative visualization technique, for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed for random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, however, the noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random; it is correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms that perform well on random noise are not guaranteed to give satisfactory results on X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise-level estimate with iterative nonlocal means denoising. This splits the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of the use of this iterative denoising approach; the validity and quality of the image enhancement were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise-constant signals.
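
    A stripped-down version of the iterative idea, several weak nonlocal-means passes each scaled to a fresh noise-level estimate, can be put together from scikit-image primitives. The sketch below uses an invented piecewise-constant image and filter settings; it illustrates the weak-subtask pattern, not the authors' full framework with correlated-noise handling.

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    rng = np.random.default_rng(7)
    clean = np.zeros((128, 128))
    clean[32:96, 32:96] = 1.0                        # untextured "grain", invented
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)

    # Several weak nonlocal-means passes instead of one aggressive pass; each pass
    # re-estimates the remaining noise level and removes only part of it.
    x = noisy
    for _ in range(4):
        sigma = float(estimate_sigma(x))
        x = denoise_nl_means(x, patch_size=5, patch_distance=6,
                             h=0.6 * sigma, sigma=sigma, fast_mode=True)

    print(float(np.abs(noisy - clean).mean()), "->", float(np.abs(x - clean).mean()))
    ```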

  1. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768

  2. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
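
    A minimal mutation-plus-tournament GA core consistent with the recommendations above (size-2 tournaments, mutation rate near half the inverse of the population size, a small population) might look as follows; the toy objective and gene encoding are invented.

    ```python
    import random

    def tournament_ga(fitness, pop, n_gen=200, k=2, p_mut=None):
        """Mutation-plus-tournament GA core; p_mut defaults to 1/(2*popsize)."""
        n, length = len(pop), len(pop[0])
        p_mut = 1.0 / (2 * n) if p_mut is None else p_mut
        for _ in range(n_gen):
            nxt = []
            for _ in range(n):
                winner = min(random.sample(pop, k), key=fitness)[:]   # size-k tournament
                for i in range(length):
                    if random.random() < p_mut:
                        winner[i] = random.uniform(-1.0, 1.0)         # gene reset mutation
                nxt.append(winner)
            pop = nxt
        return min(pop, key=fitness)

    random.seed(1)
    target = [0.3, -0.7, 0.1]
    sphere = lambda ind: sum((g - t) ** 2 for g, t in zip(ind, target))
    pop0 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
    print([round(g, 2) for g in tournament_ga(sphere, pop0)])
    ```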

  4. Can we go From Tomographically Determined Seismic Velocities to Composition? Amplitude Resolution Issues in Local Earthquake Tomography

    NASA Astrophysics Data System (ADS)

    Wagner, L.

    2007-12-01

    There have been a number of recent papers (e.g. Lee (2003), James et al. (2004), Hacker and Abers (2004), Schutt and Lesher (2006)) that calculate predicted velocities for xenolith compositions at mantle pressures and temperatures. It is tempting, therefore, to attempt to go the other way: to use tomographically determined absolute velocities to constrain mantle composition. However, to do this it is vital that one can accurately constrain not only the polarity of the velocity deviations (i.e. fast vs. slow) but also their amplitude relative to the starting model, if absolute velocities are to be so closely analyzed. While much attention has been given to issues of spatial resolution in seismic tomography (what areas are fast, what areas are slow), little attention has been directed at amplitude resolution (how fast, how slow). Velocity deviation amplitudes in seismic tomography are heavily influenced by the amount of regularization used and the number of iterations performed, and determining these two parameters is a difficult and little-discussed problem. I explore the effect of these two parameters on the amplitudes obtained from the tomographic inversion of the Chile Argentina Geophysical Experiment (CHARGE) dataset, and attempt to determine a reasonable solution space for the low-Vp, high-Vs, low-Vp/Vs anomaly found above the flat slab in central Chile. I then compare this solution space to the range of experimentally determined velocities for peridotite end-members to evaluate our ability to constrain composition using tomographically determined seismic velocities. I find that, in general, it will be difficult to constrain the compositions of normal mantle peridotites using tomographically determined velocities, but that in the unusual case of the anomaly above the flat slab, the observed velocity structure has an anomalously high S-wave velocity and low Vp/Vs ratio that is most consistent with enstatite and inconsistent with the predicted velocities of known mantle xenoliths.

  5. Which test for CAD should be used in patients with left bundle branch block?

    PubMed

    Xu, Bo; Cremer, Paul; Jaber, Wael; Moir, Stuart; Harb, Serge C; Rodriguez, L Leonardo

    2018-03-01

    Exercise stress electrocardiography is unreliable as a test for obstructive coronary artery disease (CAD) if the patient has left bundle branch block. The authors provide an algorithm for using alternative tests: exercise stress echocardiography, dobutamine echocardiography, computed tomographic (CT) angiography, and nuclear myocardial perfusion imaging. Copyright © 2018 Cleveland Clinic.

  6. Surface wave tomography applied to the North American upper mantle

    NASA Astrophysics Data System (ADS)

    van der Lee, Suzan; Frederiksen, Andrew

    Tomographic techniques that invert seismic surface waves for 3-D Earth structure differ in their definitions of the data and the forward problem, as well as in the parameterization of the tomographic model. All such techniques have in common, however, that the tomographic inverse problem involves solving a large and mixed-determined set of linear equations. Consequently, these inverse problems have multiple solutions and inherently undefinable accuracy. Smoother and rougher tomographic models are found with rougher (confined to the great-circle path) and smoother (finite-width) sensitivity kernels, respectively. A powerful, well-tested method of surface wave tomography (Partitioned Waveform Inversion) is based on inverting the waveforms of wave trains comprising regional S and surface waves from at least hundreds of seismograms for 3-D variations in S-wave velocity. We apply this method to nearly 1400 seismograms recorded by digital broadband seismic stations in North America. The new 3-D S-velocity model, NA04, is consistent with previous findings based on separate, overlapping data sets. Merging US and Canadian data sets, adding Canadian recordings of Mexican earthquakes, and combining fundamental-mode with higher-mode waveforms provides superior resolution, in particular in the US-Canada border region and the deep upper mantle. NA04 shows that 1) the Atlantic upper mantle is seismically faster than the Pacific upper mantle, 2) the uppermost mantle beneath Precambrian North America could be one and a half times as rigid as the upper mantle beneath Meso- and Cenozoic North America, with the upper mantle beneath Paleozoic North America intermediate in seismic rigidity, 3) upper-mantle structure varies laterally within these geologic-age domains, and 4) the distribution of high-velocity anomalies in the deep upper mantle aligns with lower-mantle images of the subducted Farallon and Kula plates and indicates that trailing fragments of these subducted oceanic plates still reside in the transition zone. The high-velocity layer beneath Precambrian North America is estimated to be 250±70 km thick. On a smaller scale, NA04 shows 1) high velocities associated with subduction of the Pacific plate beneath the Aleutian arc, 2) the absence of expected high velocities in the upper mantle beneath the Wyoming craton, 3) a V-shaped dent below 150 km in the high-velocity cratonic lithosphere beneath New England, 4) the cratonic lithosphere beneath Precambrian North America confined southwest of Baffin Bay, west of the Appalachians, north of the Ouachitas, east of the Rocky Mountains, and south of the Arctic Ocean, 5) higher S velocities in the cratonic lithosphere beneath the Canadian shield than beneath Precambrian basement covered with Phanerozoic sediments, and 6) the lowest S velocities concentrated beneath the Gulf of California, northern Mexico, and the Basin and Range Province.

  7. Key Generation for Fast Inversion of the Paillier Encryption Function

    NASA Astrophysics Data System (ADS)

    Hirano, Takato; Tanaka, Keisuke

    We study fast inversion of the Paillier encryption function. In particular, we focus only on key generation and do not modify the Paillier encryption function itself. We propose three key generation algorithms based on speeding-up techniques for the RSA encryption function. With our algorithms, the size of the private CRT exponent is half that of Paillier-CRT. The first algorithm employs the extended Euclidean algorithm. The second employs factoring algorithms and can construct a private CRT exponent with low Hamming weight. The third is a variant of the second, with some advantages such as compression of the private CRT exponent and no requirement for factoring algorithms. We also propose parameter settings for these algorithms and analyze the security of the Paillier encryption function under these key generation algorithms against known attacks. Finally, we give experimental results for our algorithms.
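
    For orientation, textbook Paillier with g = n + 1 is short enough to write out: decryption hinges on L(c^lambda mod n^2) * mu mod n, and the CRT speedups studied in the paper replace that single large exponentiation with smaller ones modulo p^2 and q^2. The sketch below uses toy key sizes and a deliberately weak primality test, and implements only the baseline scheme, not the proposed key generation algorithms.

    ```python
    import math
    import random

    def keygen(bits=256):
        """Textbook Paillier keys with g = n + 1; toy sizes and a weak primality test."""
        def toy_prime(b):
            while True:
                p = random.getrandbits(b) | (1 << (b - 1)) | 1
                if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11)):  # Fermat checks only
                    return p
        p, q = toy_prime(bits // 2), toy_prime(bits // 2)
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)        # valid when gcd(lam, n) = 1, true for random primes
        return n, (n, lam, mu)

    def encrypt(n, m):
        r = random.randrange(1, n)
        return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

    def decrypt(priv, c):
        n, lam, mu = priv
        L = (pow(c, lam, n * n) - 1) // n        # L(x) = (x - 1) / n
        return L * mu % n

    random.seed(2)
    n, priv = keygen()
    c1, c2 = encrypt(n, 17), encrypt(n, 25)
    print(decrypt(priv, c1 * c2 % (n * n)))      # 42: ciphertexts add plaintexts
    ```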

  8. Inverse problem of radiofrequency sounding of ionosphere

    NASA Astrophysics Data System (ADS)

    Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system, based on our algorithm, for processing and analysis of spectrograms from vertical ionosphere sounding is described. The algorithm is shown to be quite efficient, as supported by the data obtained at ionospheric stations of the so-called "AIS-M" type.

  9. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D Earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the method to invert for a random transmissivity field; the algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and 45 in a single-core computational environment. Our new inverse modeling method is therefore a powerful tool for large-scale applications.
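
    The recycling trick works because the Krylov subspace generated by J^T J and the gradient is invariant under the diagonal shift lambda*I, so one basis serves every damping parameter. A minimal dense sketch follows (random Jacobian, Arnoldi with modified Gram-Schmidt, invented sizes); it illustrates the subspace reuse, not the MADS implementation.

    ```python
    import numpy as np

    def krylov_basis(mul, g, m):
        """Orthonormal basis of K_m(JtJ, g) via Arnoldi with modified Gram-Schmidt."""
        Q = [g / np.linalg.norm(g)]
        for _ in range(m - 1):
            w = mul(Q[-1])
            for q in Q:
                w = w - (q @ w) * q
            nw = np.linalg.norm(w)
            if nw < 1e-12:
                break
            Q.append(w / nw)
        return np.column_stack(Q)

    rng = np.random.default_rng(8)
    J = rng.standard_normal((2000, 400))      # invented dense Jacobian
    r = rng.standard_normal(2000)             # current residual
    g = J.T @ r                               # gradient J^T r
    mul = lambda v: J.T @ (J @ v)

    Q = krylov_basis(mul, g, m=30)            # built once ...
    H = Q.T @ np.column_stack([mul(Q[:, i]) for i in range(Q.shape[1])])
    for lam in (1e2, 1e1, 1e0):               # ... recycled for every damping value
        y = np.linalg.solve(H + lam * np.eye(H.shape[0]), Q.T @ g)
        step = Q @ y                          # approximate LM step in the subspace
        print(lam, float(np.linalg.norm(J @ step - r)))
    ```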

  11. DenInv3D: a geophysical software for three-dimensional density inversion of gravity field data

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Ke, Xiaoping; Wang, Yong

    2018-04-01

    This paper presents three-dimensional density inversion software, DenInv3D, that operates on gravity and gravity gradient data. The software performs inversion modelling, kernel function calculation, and inversion calculations using an improved preconditioned conjugate gradient (PCG) algorithm. Because empirical parameters such as the Lagrange multiplier are uncertain, we take the regularisation parameter from the inflection point of the L-curve. The software can construct unequally spaced grids and perform inversions on them, which makes it possible to change the resolution of the inversion results at different depths. Through inversion of airborne gradiometry data from the Australian Kauring test site, we discovered that anomalous blocks of different sizes are present within the study area in addition to the central anomalies. DenInv3D can be downloaded from http://159.226.162.30.
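
    A PCG core for a damped normal-equations system, with a simple Jacobi preconditioner standing in for whatever preconditioning DenInv3D uses internally, can be sketched as follows; the kernel matrix and damping value are invented.

    ```python
    import numpy as np

    def pcg(apply_A, b, apply_Minv, n_iter=200, tol=1e-8):
        """Preconditioned conjugate gradients for A x = b with A symmetric positive definite."""
        x = np.zeros_like(b)
        r = b - apply_A(x)
        z = apply_Minv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(n_iter):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = apply_Minv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Damped normal equations (G^T G + alpha*I) m = G^T d with a Jacobi preconditioner;
    # alpha plays the role of the L-curve-selected regularisation parameter.
    rng = np.random.default_rng(9)
    G = rng.standard_normal((300, 120))       # invented gravity kernel
    d = G @ rng.standard_normal(120)
    alpha = 1.0
    apply_A = lambda v: G.T @ (G @ v) + alpha * v
    diag = np.einsum('ij,ij->j', G, G) + alpha
    m = pcg(apply_A, G.T @ d, lambda r: r / diag)
    print(float(np.linalg.norm(apply_A(m) - G.T @ d)))
    ```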

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  14. The exploration technology and application of sea surface wave

    NASA Astrophysics Data System (ADS)

    Wang, Y.

    2016-12-01

    In order to investigate the seismic velocity structure of the shallow sediments in the Bohai Sea of China, we conduct a shear-wave velocity inversion of the surface wave dispersion data from a survey of 12 ocean bottom seismometers (OBS) and 377 shots of a 9000 in³ air gun. With an OBS station spacing of 5 km and an air gun shot spacing of 190 m, high-quality Rayleigh wave data were recorded by the OBSs within 0.4-5 km offset. Rayleigh wave phase velocity dispersion for the fundamental mode and first overtone in the frequency band of 0.9-3.0 Hz was retrieved with the phase-shift method and inverted for the shear-wave velocity structure of the shallow sediments with a damped iterative least-squares algorithm. Pseudo-2-D shear-wave velocity profiles down to 400 m depth show coherent features of relatively weak lateral velocity variation. The uncertainty in shear-wave velocity structure was also estimated based on the pseudo-2-D profiles from 6 trial inversions with different initial models, which suggest a velocity uncertainty < 30 m/s for most parts of the 2-D profiles. The layered structure with little lateral variation may be attributable to the continuous sedimentary environment of the Cenozoic Bohai Bay basin. The shear-wave velocity of 200-300 m/s in the top 100 m of the Bohai Sea floor may provide important information for offshore site response studies in earthquake engineering. Furthermore, the very low shear-wave velocity structure (200-700 m/s) down to 400 m depth could produce a significant travel time delay of 1 s in the S wave arrivals, which needs to be considered to avoid serious bias in S wave traveltime tomographic models.

  15. The State of Stress Beyond the Borehole

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Coblentz, D. D.; Maceira, M.; Delorey, A. A.; Guyer, R. A.

    2015-12-01

    The state of stress controls all in-situ reservoir activities, and yet we lack the quantitative means to measure it. This problem is important in light of the fact that the subsurface provides more than 80 percent of the energy used in the United States and serves as a reservoir for geological carbon sequestration, used fuel disposition, and nuclear waste storage. Adaptive control of subsurface fractures and fluid flow is a crosscutting challenge being addressed by the new Department of Energy SubTER Initiative, which has the potential to transform subsurface energy production and waste storage strategies. Our methodology is based on a novel Advanced Multi-Physics Tomographic (AMT) approach for determining the state of stress, thereby facilitating our ability to monitor and control subsurface geomechanical processes. We developed the AMT algorithm for deriving the state of stress from integrated density and seismic velocity models, and we demonstrate its feasibility by applying it to synthetic data sets to assess the accuracy and resolution of the method as a function of the quality and type of geophysical data. With this method we can produce regional- to basin-scale maps of the background state of stress and identify regions where stresses are changing. Our approach builds on our advances in the joint inversion of gravity and seismic data to obtain the elastic properties of the subsurface, and then couples the output of this joint inversion with a theoretical model such that strain (and subsequently stress) can be computed. Ultimately, we will obtain the differential state of stress over time to identify and monitor critically stressed faults and evolving regions within the reservoir, and relate them to anthropogenic activities such as fluid/gas injection.

  16. Transdimensional Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Bodin, T.; Sambridge, M.

    2009-12-01

    In seismic imaging, the degree of model complexity is usually determined by manually tuning damping parameters within a fixed parameterization chosen in advance. Here we present an alternative methodology for seismic travel time tomography in which the model complexity is controlled automatically by the data. In particular, we use a variable parameterization consisting of Voronoi cells with mobile geometry, shape and number, all treated as unknowns in the inversion. The reversible jump algorithm is used to sample the transdimensional model space within a Bayesian framework, which avoids global damping procedures and the need to tune regularisation parameters. The method is an ensemble inference approach: many potential solutions are generated with variable numbers of cells, and information is extracted from the ensemble as a whole by performing Monte Carlo integration to produce the expected Earth model. The ensemble of models can also be used to produce velocity uncertainty estimates, and experiments with synthetic data suggest these represent actual uncertainty surprisingly well. In a transdimensional approach, the level of data uncertainty directly determines the model complexity needed to satisfy the data. Intriguingly, the Bayesian formulation can be extended to the case where the data uncertainty is itself uncertain. Experiments show that it is possible to recover an estimate of the data noise while at the same time controlling model complexity in an automated fashion. The method is tested on synthetic data in a 2-D application and compared with a more standard matrix-based inversion scheme. It has also been applied to real data obtained from cross-correlation of ambient noise, where little is known about the size of the errors associated with the travel times. As an example, a tomographic image of Rayleigh wave group velocity for the Australian continent is constructed for 5 s data, together with uncertainty estimates.
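
    The variable parameterization is cheap to evaluate: a model is a set of Voronoi nuclei plus one velocity per cell, and querying it is a nearest-neighbour lookup. The sketch below shows the lookup and the shape of a "birth" proposal; the acceptance ratio of the reversible jump step is deliberately omitted, and all numbers are invented.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(10)

    # A model: Voronoi nuclei positions plus one velocity per cell (values invented).
    nuclei = rng.uniform(0.0, 100.0, size=(12, 2))
    vel = rng.uniform(2.5, 4.5, size=12)

    def evaluate(points, nuclei, vel):
        """Velocity at arbitrary points = value attached to the nearest nucleus."""
        _, idx = cKDTree(nuclei).query(points)
        return vel[idx]

    grid = np.stack(np.meshgrid(np.arange(100.0), np.arange(100.0)), -1).reshape(-1, 2)
    model = evaluate(grid, nuclei, vel)
    print(model.shape)

    # A 'birth' proposal from the reversible jump move set: add one cell.  The
    # accept/reject step (omitted) weighs data fit against the dimension change.
    new_nuclei = np.vstack([nuclei, rng.uniform(0.0, 100.0, 2)])
    new_vel = np.append(vel, rng.uniform(2.5, 4.5))
    print(nuclei.shape[0], "->", new_nuclei.shape[0], "cells")
    ```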

  17. Three-dimensional body-wave model of Nepal using finite difference tomography

    NASA Astrophysics Data System (ADS)

    Ho, T. M.; Priestley, K.; Roecker, S. W.

    2017-12-01

    The processes occurring during continent-continent collision are still poorly understood. Ascertaining the seismic properties of the crust and uppermost mantle in such settings provides insight into continental rheology and geodynamics. The most active present-day continent-continent collision is that of India with Eurasia, which has created the Himalayas and the Tibetan Plateau, and Nepal provides an ideal laboratory for imaging the crustal processes resulting from this collision. We build body-wave models using local body-wave arrivals picked at stations in Nepal deployed by the Department of Mining and Geology of Nepal. We use the tomographic inversion method of Roecker et al. [2006], the key feature of which is that travel times are generated using a finite difference solution to the eikonal equation; the advantage of this technique is increased accuracy in the highly heterogeneous medium expected for the Himalayas. Travel times are calculated on a 3D Cartesian grid with a grid spacing of 6 km, and intragrid times are estimated by trilinear interpolation. The gridded area spans 80-90° longitude and 25-30° latitude. For a starting velocity model we use IASP91, and the inversion is performed with the LSQR algorithm. Since the damping parameter can have a significant effect on the final solution, we tested a range of damping parameters to fully explore its effect. Much of the seismicity is clustered west of Kathmandu at depths < 30 km. Small areas of markedly fast wavespeeds exist in the centre of the region in the upper 30 km of the crust, while at depths of 40-50 km, large areas of slow wavespeeds are present that track along the plate boundary.
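
    The forward step, first-arrival times from a gridded eikonal solve, can be reproduced in miniature with the scikit-fmm package (assumed installed), which uses fast marching rather than the specific finite-difference scheme of Roecker et al.; the grid, velocities and source position below are invented.

    ```python
    import numpy as np
    import skfmm   # scikit-fmm: fast-marching eikonal solver

    nx, nz, dx = 200, 120, 1.0                # grid and spacing, invented
    speed = np.full((nz, nx), 5.0)            # 5 km/s background
    speed[60:, :] = 6.5                       # faster lower layer
    speed[30:50, 80:120] = 4.0                # slow crustal anomaly

    phi = np.ones((nz, nx))
    phi[5, 20] = -1.0                         # zero level set encloses the source node
    tt = skfmm.travel_time(phi, speed, dx)    # first-arrival times over the whole grid

    print(float(tt[0, 180]))                  # predicted pick at a surface "station"
    ```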

  18. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distributions (PSDs) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated with the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested on actual extinction measurements of real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking nearly the same CPU time. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSDs.
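
    The pattern search half of the hybrid is the simpler piece: poll fixed steps along each coordinate, keep improvements, and shrink the step when a full poll fails. A minimal compass-search sketch on a standard test function follows; the ABC global stage is omitted.

    ```python
    import numpy as np

    def compass_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
        """Poll +/- step along each axis; keep improvements; shrink step on a failed poll."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        while step > tol and max_iter > 0:
            improved = False
            for i in range(x.size):
                for s in (+step, -step):
                    trial = x.copy()
                    trial[i] += s
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= shrink
            max_iter -= 1
        return x, fx

    rosen = lambda v: (1 - v[0]) ** 2 + 100.0 * (v[1] - v[0] ** 2) ** 2
    x, fx = compass_search(rosen, [-1.2, 1.0])
    print(x.round(3), float(fx))              # polished toward the minimum at (1, 1)
    ```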

  19. A 3D gantry single photon emission tomograph with hemispherical coverage for dedicated breast imaging

    NASA Astrophysics Data System (ADS)

    Tornai, Martin P.; Bowsher, James E.; Archer, Caryl N.; Peter, Jörg; Jaszczak, Ronald J.; MacDonald, Lawrence R.; Patt, Bradley E.; Iwanczyk, Jan S.

    2003-01-01

    A novel tomographic gantry was designed, built and initially evaluated for single photon emission imaging of metabolically active lesions in the pendant breast and near chest wall. Initial emission imaging measurements with breast lesions of various uptake ratios are presented. Methods: A prototype tomograph was constructed utilizing a compact gamma camera having a field-of-view of <13×13 cm² with arrays of 2×2×6 mm³ quantized NaI(Tl) scintillators coupled to position sensitive PMTs. The camera was mounted on a radially oriented support with 6 cm variable radius-of-rotation. This unit is further mounted on a goniometric cradle providing polar motion, and in turn mounted on an azimuthal rotation stage capable of indefinite vertical axis-of-rotation about the central rotation axis (RA). Initial measurements with isotopic Tc-99m (140 keV) to evaluate the system include acquisitions with various polar tilt angles about the RA. Tomographic measurements were made of a frequency and resolution cold-rod phantom filled with aqueous Tc-99m. Tomographic and planar measurements of 0.6 and 1.0 cm diameter fillable spheres in an available ~950 ml hemi-ellipsoidal (uncompressed) breast phantom attached to a life-size anthropomorphic torso phantom with lesion:breast-and-body:cardiac-and-liver activity concentration ratios of 11:1:19 were compared. Various photopeak energy windows with 10-30% widths were obtained, along with a 35% scatter window below a 15% photopeak window from the list mode data. Projections with all photopeak window and camera tilt conditions were reconstructed with an ordered subsets expectation maximization (OSEM) algorithm capable of reconstructing arbitrary tomographic orbits. Results: As iteration number increased for the tomographically measured data at all polar angles, contrasts increased while signal-to-noise ratios (SNRs) decreased in the expected way with OSEM reconstruction. The rollover between contrast improvement and SNR degradation of the lesion occurred at two to three iterations. The reconstructed tomographic data yielded SNRs with or without scatter correction that were >9 times better than the planar scans. There was up to a factor of ~2.5 increase in total primary and scatter contamination in the photopeak window with increasing tilt angle from 15° to 45°, consistent with more direct line-of-sight of myocardial and liver activity with increased camera polar angle. Conclusion: This new, ultra-compact, dedicated tomographic imaging system has the potential of providing valuable, fully 3D functional information about small, otherwise indeterminate breast lesions as an adjunct to diagnostic mammography.
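
    For readers unfamiliar with OSEM, a minimal sketch of the ordered-subsets EM update for a generic nonnegative system matrix follows; this is an illustrative toy, not the gantry's reconstruction code, and the subset count and iteration number are arbitrary.

    ```python
    import numpy as np

    def osem(A, y, n_iter=3, n_subsets=4, eps=1e-12):
        """Ordered-subsets EM for y ~ Poisson(A @ x), with A >= 0."""
        n_rays, n_vox = A.shape
        x = np.ones(n_vox)
        subsets = np.array_split(np.arange(n_rays), n_subsets)
        for _ in range(n_iter):
            for rows in subsets:                 # one EM update per subset
                As = A[rows]
                ratio = y[rows] / np.maximum(As @ x, eps)
                x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
        return x

    rng = np.random.default_rng(0)
    A = rng.random((64, 16))                     # toy system matrix
    x_true = rng.random(16)
    x_rec = osem(A, A @ x_true)                  # noiseless projections
    print("relative error:",
          np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```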

  20. Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes

    USGS Publications Warehouse

    Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.

    2004-01-01

    We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are ≤50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.

  1. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces between hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much fewer data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R² from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
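
    The paper solves the l1-regularized problem with a primal-dual interior point method; the sketch below uses the simpler iterative soft-thresholding (ISTA) to show the same ingredients, DCT-domain sparsity with an l1 penalty, on a toy underdetermined system. The matrix sizes, penalty weight, and step size are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(0)
    m, n = 40, 128                        # far fewer data than unknowns
    A = rng.standard_normal((m, n)) / np.sqrt(m)

    c_true = np.zeros(n)                  # model sparse in the DCT domain
    c_true[[2, 7, 20]] = [3.0, -2.0, 1.5]
    x_true = idct(c_true, norm="ortho")
    b = A @ x_true + 0.01 * rng.standard_normal(m)

    lam = 0.05
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(500):                  # ISTA: gradient step + DCT shrinkage
        z = x + (A.T @ (b - A @ x)) / L
        c = dct(z, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)
        x = idct(c, norm="ortho")
    print("relative recovery error:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```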

  2. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction in electrical resistivity tomography (ERT) is a highly non-linear, sparse, and ill-posed problem. The inverse problem becomes much more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been shown to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transformation (DCT) function was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to reconstruct the sub-surface image effectively at less computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
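
    The "two-step IST" evaluated here follows the TwIST scheme of Bioucas-Dias and Figueiredo (2007), whose update mixes the two previous iterates. Below is a compact sketch with a DCT soft-threshold as the denoising step; the alpha, beta, and tau values and the problem sizes are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    def soft_dct(z, tau):
        """Denoiser: soft-threshold in the orthonormal DCT domain."""
        c = dct(z, norm="ortho")
        return idct(np.sign(c) * np.maximum(np.abs(c) - tau, 0.0), norm="ortho")

    def twist(A, b, tau=0.005, alpha=1.8, beta=1.0, n_iter=300):
        """Two-step IST: the new iterate mixes the two previous ones."""
        x_old = np.zeros(A.shape[1])
        x = soft_dct(A.T @ b, tau)
        for _ in range(n_iter):
            grad_step = x + A.T @ (b - A @ x)
            x_new = ((1.0 - alpha) * x_old + (alpha - beta) * x
                     + beta * soft_dct(grad_step, tau))
            x_old, x = x, x_new
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 128)) / 20.0    # scaled so ||A|| < 1
    c_true = np.zeros(128)
    c_true[[3, 30]] = [2.0, -1.0]
    x_true = idct(c_true, norm="ortho")
    x_rec = twist(A, A @ x_true)
    print("relative error:",
          np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```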

  3. Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy

    NASA Astrophysics Data System (ADS)

    Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard

    2017-05-01

    A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle³ proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle³ treatment planning system showed the algorithm could achieve equivalent or superior results.
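
    Byrne's CQ-algorithm itself is only a few lines: a gradient-like step followed by projections onto the two constraint sets C and Q. A toy sketch follows, with simple box constraints standing in for the paper's dose-volume and sparsity sets; the matrix, the sets, and the step size are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((30, 50))                 # toy beamlet-to-voxel dose matrix

    def proj_C(x):                           # C: nonnegative beamlet weights
        return np.clip(x, 0.0, None)

    def proj_Q(d):                           # Q: per-voxel dose bounds (toy)
        return np.clip(d, 0.2, 1.0)

    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step size in (0, 2 / ||A||^2)
    x = np.zeros(50)
    for _ in range(500):                     # x <- P_C(x + g A^T (P_Q(Ax) - Ax))
        Ax = A @ x
        x = proj_C(x + gamma * (A.T @ (proj_Q(Ax) - Ax)))
    print("max dose-bound violation:", np.max(np.abs(A @ x - proj_Q(A @ x))))
    ```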

  4. Some practical aspects of prestack waveform inversion using a genetic algorithm: An example from the east Texas Woodbine gas sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallick, S.

    1999-03-01

    In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo-type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information on the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.
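
    The GA machinery itself is compact; a toy real-coded selection/crossover/mutation loop follows (a hedged sketch: the fitness function stands in for the prestack forward-modelling misfit, and the population size and mutation scale are arbitrary).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_obs = np.array([2.0, 0.3, 1.1])           # "observed" attributes (toy)

    def fitness(model):
        """Stand-in for the synthetic-vs-observed waveform misfit."""
        return -np.sum((model - d_obs) ** 2)

    pop = rng.uniform(0.0, 3.0, size=(50, 3))   # random initial population
    for generation in range(100):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][:25]]   # keep the fitter half
        mates = parents[rng.permutation(25)]
        w = rng.random((25, 1))
        children = w * parents + (1 - w) * mates                 # crossover
        children += 0.05 * rng.standard_normal(children.shape)   # mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(m) for m in pop])]
    print("best model:", best)
    ```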

  5. RNA inverse folding using Monte Carlo tree search.

    PubMed

    Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji

    2017-11-06

    Artificially synthesized RNA molecules provide important ways for creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences of specific GC content that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has recently shown exceptional performance in computer Go, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm has the ability to control the GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA shows considerable promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA.
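
    At the heart of any MCTS variant is a bandit-style rule for descending the tree; a sketch of the standard UCB1 selection follows (an illustrative assumption: MCTS-RNA's actual policy and exploration constant may differ).

    ```python
    import math

    def ucb1_select(children, c=1.4):
        """Pick the child maximizing mean reward plus an exploration bonus."""
        n_parent = sum(ch["visits"] for ch in children)
        best, best_score = None, -math.inf
        for ch in children:
            if ch["visits"] == 0:
                return ch                      # expand unvisited children first
            score = (ch["total_reward"] / ch["visits"]
                     + c * math.sqrt(math.log(n_parent) / ch["visits"]))
            if score > best_score:
                best, best_score = ch, score
        return best

    children = [{"visits": 3, "total_reward": 2.1},
                {"visits": 1, "total_reward": 0.9}]
    print(ucb1_select(children))
    ```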

  6. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  7. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N³) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
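
    The local-block idea is easy to demonstrate serially: for each column of the overlap matrix, invert only the principal submatrix around it and read off the needed entries of the inverse. The banded toy matrix and block half-width below are illustrative; the actual algorithm distributes these local solves with nearest-neighbor communication.

    ```python
    import numpy as np

    n, half = 40, 6                        # orbitals; half-width of local block

    # Banded, diagonally dominant "overlap" matrix: its inverse decays
    # away from the diagonal, which is what makes local blocks accurate.
    S = np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))

    approx = np.zeros((n, n))
    for j in range(n):
        lo, hi = max(0, j - half), min(n, j + half + 1)
        local_inv = np.linalg.inv(S[lo:hi, lo:hi])  # invert principal submatrix
        approx[lo:hi, j] = local_inv[:, j - lo]     # selected entries, column j

    print("max entrywise error vs exact inverse:",
          np.max(np.abs(approx - np.linalg.inv(S))))
    ```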

  8. A new probe using hybrid virus-dye nanoparticles for near-infrared fluorescence tomography

    NASA Astrophysics Data System (ADS)

    Wu, Changfeng; Barnhill, Hannah; Liang, Xiaoping; Wang, Qian; Jiang, Huabei

    2005-11-01

    A fluorescent probe based on the bionanoparticle cowpea mosaic virus has been developed for near-infrared fluorescence tomography. A unique advantage of this probe is that over 30 dye molecules can be loaded onto each viral nanoparticle with an average diameter of 30 nm, making a high local dye concentration (∼1.8 mM) possible without significant fluorescence quenching. This high local dye loading considerably increases the signal-to-noise ratio, and thus the detection sensitivity. We demonstrate successful tomographic fluorescence imaging of a target containing the virus-dye nanoparticles embedded in a tissue-like phantom. Tomographic fluorescence data were obtained through a multi-channel frequency-domain system and the spatial maps of fluorescence quantum yield were recovered with a finite-element-based reconstruction algorithm.

  9. Synthetic aperture tomographic phase microscopy for 3D imaging of live cells in translational motion

    PubMed Central

    Lue, Niyom; Choi, Wonshik; Popescu, Gabriel; Badizadegan, Kamran; Dasari, Ramachandra R.; Feld, Michael S.

    2009-01-01

    We present a technique for 3D imaging of live cells in translational motion without the need for axial scanning of the objective lens. A set of transmitted electric field images of cells at successive points of transverse translation is taken with focused beam illumination. Based on Huygens' principle, angular plane waves are synthesized from the E-field images of the focused beam. For a set of synthesized angular plane waves, we apply a filtered back-projection algorithm and obtain 3D maps of the refractive index of live cells. This technique, which we refer to as synthetic aperture tomographic phase microscopy, can potentially be combined with flow cytometry or microfluidic devices, and will enable high throughput acquisition of quantitative refractive index data from large numbers of cells. PMID:18825263
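
    Once the angular plane waves are synthesized, the reconstruction step is standard filtered back-projection. A generic sketch using scikit-image's Radon/inverse-Radon pair on a built-in phantom follows; the microscopy-specific field synthesis and phase processing are omitted.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.25)         # small test object
    theta = np.linspace(0.0, 180.0, 90, endpoint=False)  # projection angles

    sinogram = radon(image, theta=theta)                 # forward projections
    recon = iradon(sinogram, theta=theta, filter_name="ramp")

    print("FBP reconstruction RMSE:", np.sqrt(np.mean((recon - image) ** 2)))
    ```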

  10. A new apparatus for electron tomography in the scanning electron microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morandi, V., E-mail: morandi@bo.imm.cnr.it; Maccagnani, P.; Masini, L.

    2015-06-23

    The three-dimensional reconstruction of a microscopic specimen has been obtained by applying a tomographic algorithm to a set of images acquired in a Scanning Electron Microscope. This result was achieved starting from a series of projections obtained by stepwise rotating the sample under the beam raster. The Scanning Electron Microscope was operated in the scanning-transmission imaging mode, where the intensity of the transmitted electron beam is a monotonic function of the local mass-density and thickness of the specimen. The detection strategy has been implemented and tailored in order to maintain the projection requirement over the large tilt range, as required by the tomographic workflow. A Si-based electron detector and an eucentric-rotation specimen holder have been specifically developed for the purpose.

  11. Three-dimensional Image Reconstruction in J-PET Using Filtered Back-projection Method

    NASA Astrophysics Data System (ADS)

    Shopa, R. Y.; Klimaszewski, K.; Kowalski, P.; Krzemień, W.; Raczyński, L.; Wiślicki, W.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    We present a method and preliminary results of the image reconstruction in the Jagiellonian PET tomograph. Using GATE (Geant4 Application for Tomographic Emission), interactions of the 511 keV photons with a cylindrical detector were generated. Pairs of such photons, flying back-to-back, originate from e+e- annihilations inside a 1-mm spherical source. Spatial and temporal coordinates of hits were smeared using experimental resolutions of the detector. We incorporated the algorithm of 3D Filtered Back Projection, implemented in the STIR and TomoPy software packages, which differ in approximation methods. Consistent results for the Point Spread Functions of ~5/7 mm and ~9/20 mm were obtained, using STIR, for the transverse and longitudinal directions, respectively, with no time of flight information included.
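
    For orientation, a minimal TomoPy filtered back-projection run looks roughly like the sketch below (a generic usage example based on TomoPy's documented API, not the J-PET pipeline; the GATE simulation, detector smearing, and STIR comparison are omitted).

    ```python
    import tomopy

    obj = tomopy.shepp2d()                  # built-in 2-D phantom
    ang = tomopy.angles(180)                # 180 projection angles over pi
    sim = tomopy.project(obj, ang)          # simulate the sinogram
    rec = tomopy.recon(sim, ang, algorithm="fbp")
    print(rec.shape)
    ```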

  12. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with any desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information-theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
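
    Gradient-based network inversion holds the trained weights fixed and descends on the input instead. A self-contained toy follows, using central finite differences for the input gradient on a small fixed network; HYPINV's hyperplane rule extraction is not shown, and all sizes and rates are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
    W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

    def net(x):
        """Tiny fixed 'trained' network: 2 inputs -> 1 sigmoid output."""
        h = np.tanh(W1 @ x + b1)
        z = (W2 @ h + b2)[0]
        return 1.0 / (1.0 + np.exp(-z))

    target = 0.9
    loss = lambda z: (net(z) - target) ** 2

    x = np.zeros(2)
    for _ in range(2000):                  # gradient descent on the INPUT
        g = np.array([(loss(x + 1e-5 * e) - loss(x - 1e-5 * e)) / 2e-5
                      for e in np.eye(2)])
        x -= 0.5 * g
    print("inverted input:", x, " network output:", net(x))
    ```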

  13. Inverse algorithms for 2D shallow water equations in presence of wet dry fronts: Application to flood plain dynamics

    NASA Astrophysics Data System (ADS)

    Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.

    2016-11-01

    The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows, data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to breakthroughs in modelling shallow wet-dry front dynamics. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, and some corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the efficiency of the algorithms.

  14. RF tomography of metallic objects in free space: preliminary results

    NASA Astrophysics Data System (ADS)

    Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher

    2015-05-01

    RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Laboratory. To develop an RF tomographic imaging system for the facility, preliminary experiments have been performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomographic reconstruction. The traditional matched filter algorithm and the truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and scatter points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of the object shape, which is not achievable with the matched filter and truncated SVD algorithms.
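
    Truncated SVD is the simpler of the two baselines named above: keep only the k largest singular values when inverting an ill-conditioned system. A toy sketch, with an artificial matrix and an illustrative truncation level:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 60))
    A[:, -10:] *= 1e-6                      # make the system ill-conditioned
    x_true = rng.standard_normal(60)
    y = A @ x_true + 1e-3 * rng.standard_normal(60)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 50                                  # truncation level (regularization)
    x_tsvd = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

    print("naive solve error:", np.linalg.norm(np.linalg.solve(A, y) - x_true))
    print("truncated SVD error:", np.linalg.norm(x_tsvd - x_true))
    ```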

  15. Algorithmic structural segmentation of defective particle systems: a lithium-ion battery study.

    PubMed

    Westhoff, D; Finegan, D P; Shearing, P R; Schmidt, V

    2018-04-01

    We describe a segmentation algorithm that is able to identify defects (cracks, holes and breakages) in particle systems. This information is used to segment image data into individual particles, where each particle and its defects are identified accordingly. We apply the method to particle systems that appear in Li-ion battery electrodes. First, the algorithm is validated using simulated data from a stochastic 3D microstructure model, where we have full information about defects. This allows us to quantify the accuracy of the segmentation result. Then we show that the algorithm can successfully be applied to tomographic image data from real battery anodes and cathodes, which are composed of particle systems with very different morphological properties. Finally, we show how the results of the segmentation algorithm can be used for structural analysis. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  16. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models could avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, decomposed frequency components from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.

  17. Tomographic reconstruction of heat release rate perturbations induced by helical modes in turbulent swirl flames

    NASA Astrophysics Data System (ADS)

    Moeck, Jonas P.; Bourgouin, Jean-François; Durox, Daniel; Schuller, Thierry; Candel, Sébastien

    2013-04-01

    Swirl flows with vortex breakdown are widely used in industrial combustion systems for flame stabilization. This type of flow is known to sustain a hydrodynamic instability with a rotating helical structure, one common manifestation of it being the precessing vortex core. The role of this unsteady flow mode in combustion is not well understood, and its interaction with combustion instabilities and flame stabilization remains unclear. It is therefore important to assess the structure of the perturbation in the flame that is induced by this helical mode. Based on principles of tomographic reconstruction, a method is presented to determine the 3-D distribution of the heat release rate perturbation associated with the helical mode. Since this flow instability is rotating, a phase-resolved sequence of projection images of light emitted from the flame is identical to the Radon transform of the light intensity distribution in the combustor volume and thus can be used for tomographic reconstruction. This is achieved with one stationary camera only, a vast reduction in experimental and hardware requirements compared to a multi-camera setup or camera repositioning, which is typically required for tomographic reconstruction. Different approaches to extract the coherent part of the oscillation from the images are discussed. Two novel tomographic reconstruction algorithms specifically tailored to the structure of the heat release rate perturbations related to the helical mode are derived. The reconstruction techniques are first applied to an artificial field to illustrate the accuracy. High-speed imaging data acquired in a turbulent swirl-stabilized combustor setup with strong helical mode oscillations are then used to reconstruct the 3-D structure of the associated perturbation in the flame.

  18. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  19. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
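
    For comparison with the adaptive scheme, the classic model-based baseline is the damped least-squares (Jacobian) iteration, sketched here for a planar two-link arm. This is not the paper's MRAC algorithm, which requires no kinematic model; the link lengths, damping factor, and target are illustrative.

    ```python
    import numpy as np

    L1, L2 = 1.0, 0.8                       # link lengths (assumed)

    def fk(q):
        """Forward kinematics of a planar 2-link arm."""
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def jacobian(q):
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [ L1 * c1 + L2 * c12,  L2 * c12]])

    target = np.array([1.2, 0.6])
    q, damping = np.array([0.3, 0.3]), 0.01
    for _ in range(100):                    # dq = J^T (J J^T + lam I)^-1 e
        e = target - fk(q)
        J = jacobian(q)
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), e)
    print("joint angles:", q, " end-effector:", fk(q))
    ```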

  20. SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1986-01-01

    The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of adaptive arrays is reduced. Thus, the weights are dictated by relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.
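
    A sketch of the weight computation: conventional SMI inverts the sample covariance matrix, while the modification described above redefines the covariance so that thermal noise contributes less to the weights. The noise-power subtraction below is an assumed stand-in for the paper's exact redefinition, and the scenario parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_elem, n_snap = 8, 200
    k = np.pi * np.arange(n_elem)                  # half-wavelength line array
    steer = np.exp(1j * k * np.sin(0.0))           # desired signal at broadside

    # Snapshots: a weak interferer (20 dB below noise) plus unit thermal noise.
    a_int = np.exp(1j * k * np.sin(0.5))
    X = (0.1 * a_int[:, None] * rng.standard_normal(n_snap)
         + (rng.standard_normal((n_elem, n_snap))
            + 1j * rng.standard_normal((n_elem, n_snap))) / np.sqrt(2))

    R = (X @ X.conj().T) / n_snap                  # sample covariance matrix
    w_smi = np.linalg.solve(R, steer)              # conventional SMI weights

    sigma2 = 0.9                                   # assumed noise-power estimate
    w_mod = np.linalg.solve(R - sigma2 * np.eye(n_elem), steer)

    def null_depth(w):                             # interferer gain per unit norm
        return abs(w.conj() @ a_int) / np.linalg.norm(w)

    print("interferer response, SMI vs modified:",
          null_depth(w_smi), null_depth(w_mod))
    ```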

  1. Fast inversion of gravity data using the symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei

    2017-02-01

    The subsurface three-dimensional (3D) model of density distribution is obtained by solving an under-determined linear equation that is established by gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The gravity inversion method introduces a stabilizing model norm with a depth weighting function to produce smooth models. The depth weighting function is combined with the model norm to counteract the skin effect of the gravity potential field. As the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data, the number of unknowns greatly exceeds the number of observations. Solving the full set of gravity inversion equations is very time-consuming, and applying a new algorithm can significantly reduce the number of iterations and the computational time. In this paper, a new symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient (CG) method is shown to be an appropriate algorithm for minimizing this Tikhonov cost function (the gravity inversion equation). The new, faster method is applied to Gaussian noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study that includes ground-based measurements of residual Bouguer gravity anomalies over the Humble salt dome near Houston, in the Gulf Coast Basin. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method. In the test model, the density values in the constructed model coincide with the known location and depth of the salt dome.
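
    A sketch of CG with an SSOR preconditioner, applied through SciPy's LinearOperator interface. The test system is a toy SPD matrix, and omega = 1 (symmetric Gauss-Seidel) is an illustrative choice; the preconditioner is applied via two triangular solves.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

    n = 200                                 # toy SPD system: 1-D Laplacian
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    D = sp.diags(A.diagonal())
    DL = sp.tril(A, 0).tocsr()              # D + L (lower triangle with diagonal)
    DU = sp.triu(A, 0).tocsr()              # D + U = (D + L)^T for symmetric A

    def ssor_solve(r):
        """Apply M^-1 for M = (D+L) D^-1 (D+U), i.e. SSOR with omega = 1."""
        u = spsolve_triangular(DL, r, lower=True)
        return spsolve_triangular(DU, D @ u, lower=False)

    M = LinearOperator((n, n), matvec=ssor_solve)
    x, info = cg(A, b, M=M)
    print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
    ```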

  2. Classification of JET Neutron and Gamma Emissivity Profiles

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Kiptily, V.; Vega, J.; Contributors, JET

    2016-05-01

    In thermonuclear plasmas, emission tomography uses integrated measurements along lines of sight (LOS) to determine the two-dimensional (2-D) spatial distribution of the volume emission intensity. Due to the availability of only a limited number of views and to the coarse sampling of the LOS, the tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET. In specific experimental conditions the availability of LOSs is restricted to a single view. In this case an explicit reconstruction of the emissivity profile is no longer possible. However, machine learning classification methods can be used to derive the type of the distribution. In the present approach the classification is developed using the theory of belief functions, which provides the support to fuse the results of independent clustering and supervised classification. The method makes it possible to represent the uncertainty of the results provided by different independent techniques, to combine them, and to manage possible conflicts.

  3. Probing the Detailed Seismic Velocity Structure of Subduction Zones Using Advanced Seismic Tomography Methods

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Thurber, C. H.

    2005-12-01

    Subduction zones are one of the most important components of the Earth's plate tectonic system. Knowing the detailed seismic velocity structure within and around subducting slabs is vital to understanding the constitution of the slab, the cause of intermediate depth earthquakes inside the slab, the fluid distribution and recycling, and tremor occurrence [Hacker et al., 2001; Obara, 2002]. Thanks to the ability of double-difference tomography [Zhang and Thurber, 2003] to resolve the fine-scale structure near the source region and the favorable seismicity distribution inside many subducting slabs, it is now possible to characterize the fine details of the velocity structure and earthquake locations inside the slab, as shown in the study of the Japan subduction zone [Zhang et al., 2004]. We further develop the double-difference tomography method in two aspects: the first improvement is to use an adaptive inversion mesh rather than a regular inversion grid, and the second improvement is to determine a reliable Vp/Vs structure using various strategies rather than directly from Vp and Vs [see our abstract "Strategies to solve for a better Vp/Vs model using P and S arrival time" at Session T29]. The adaptive mesh seismic tomography method is based on tetrahedral diagrams and can automatically adjust the inversion mesh according to the ray distribution, so that the inversion mesh nodes are denser where there are more rays and vice versa [Zhang and Thurber, 2005]. As a result, the number of inversion mesh nodes is greatly reduced compared to a regular inversion grid with comparable spatial resolution, and the tomographic system is more stable and better conditioned. This improvement is quite valuable for characterizing the fine structure of the subduction zone considering the highly uneven distribution of earthquakes within and around the subducting slab. The second improvement, to determine a reliable Vp/Vs model, lies in jointly inverting Vp, Vs, and Vp/Vs using P, S, and S-P times in a manner similar to double-difference tomography. Obtaining a reliable Vp/Vs model of the subduction zone is particularly helpful for understanding its mechanical and petrologic properties. Our applications of the original version of double-difference tomography to several subduction zones beneath northern Honshu, Japan, the Wellington region, New Zealand, and Alaska, United States, have shown evident velocity variations within and around the subducting slab, which likely is evidence of dehydration reactions of various hydrous minerals that are hypothesized to be responsible for intermediate depth earthquakes. We will show the new velocity models for these subduction zones by applying our advanced tomographic methods.

  4. Iterative reconstruction of volumetric particle distribution

    NASA Astrophysics Data System (ADS)

    Wieneke, Bernhard

    2013-02-01

    For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, like 3D particle tracking velocimetry (3D-PTV), recording images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART) followed by cross-correlation of sub-volumes, computing instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, similar to MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, which may differ for different positions in the volume and for each camera. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with accuracy similar to that of Tomo-PIV. Finally, the method is validated with experimental data.

  5. A refraction-corrected tomographic algorithm for immersion laser-ultrasonic imaging of solids with piecewise linear surface profile

    NASA Astrophysics Data System (ADS)

    Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.

    2018-05-01

    In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.
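
    The refraction correction amounts to tracing each ray through the liquid-solid interface according to Fermat's principle; a minimal sketch finds the interface crossing point that minimizes the two-segment travel time for one sensor/image-point pair (flat interface at z = 0; the velocities and geometry are illustrative).

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    v_liq, v_sol = 1.48, 2.70            # mm/us: water and a PMMA-like solid
    sensor = np.array([0.0, 5.0])        # sensor in the liquid (x, z), mm
    point = np.array([3.0, -4.0])        # image point inside the solid, mm

    def travel_time(xc):
        """Time along the two-segment path refracting at (xc, 0)."""
        t_liq = np.hypot(xc - sensor[0], sensor[1]) / v_liq
        t_sol = np.hypot(point[0] - xc, point[1]) / v_sol
        return t_liq + t_sol

    # Fermat's principle: the true ray minimizes travel time, which
    # automatically enforces Snell's law at the interface.
    res = minimize_scalar(travel_time, bounds=(-10.0, 10.0), method="bounded")
    print("crossing at x =", res.x, " travel time =", res.fun)
    ```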

  6. The upper mantle structure of the central Rio Grande rift region from teleseismic P and S wave travel time delays and attenuation

    USGS Publications Warehouse

    Slack, P.D.; Davis, P.M.; Baldridge, W.S.; Olsen, K.H.; Glahn, A.; Achauer, U.; Spence, W.

    1996-01-01

    The lithosphere beneath a continental rift should be significantly modified due to extension. To image the lithosphere beneath the Rio Grande rift (RGR), we analyzed teleseismic travel time delays of both P and S wave arrivals and solved for the attenuation of P and S waves for four seismic experiments spanning the Rio Grande rift. Two tomographic inversions of the P wave travel time data are given: an Aki-Christofferson-Husebye (ACH) block model inversion and a downward projection inversion. The tomographic inversions reveal a NE-SW to NNE-SSW trending feature at depths of 35 to 145 km with a velocity reduction of 7 to 8% relative to mantle velocities beneath the Great Plains. This region correlates with the transition zone between the Colorado Plateau and the Rio Grande rift and is bounded on the NW by the Jemez lineament, a N52°E trending zone of late Miocene to Holocene volcanism. S wave delays plotted against P wave delays are fit with a straight line giving a slope of 3.0±0.4. This correlation and the absolute velocity reduction imply that temperatures in the lithosphere are close to the solidus, consistent with, but not requiring, the presence of partial melt in the mantle beneath the Rio Grande rift. The attenuation data could imply the presence of partial melt. We compare our results with other geophysical and geologic data. We propose that any north-south trending thermal (velocity) anomaly that may have existed in the upper mantle during earlier (Oligocene to late Miocene) phases of rifting and that may have correlated with the axis of the rift has diminished with time and has been overprinted with more recent structure. The anomalously low-velocity body presently underlying the transition zone between the core of the Colorado Plateau and the rift may reflect processes resulting from the modern (Pliocene to present) regional stress field (oriented WNW-ESE), possibly heralding future extension across the Jemez lineament and transition zone.

  7. Crustal and Upper Mantle Investigations Using Receiver Functions and Tomographic Inversion in the Southern Puna Plateau Region of the Central Andes

    NASA Astrophysics Data System (ADS)

    Heit, B.; Yuan, X.; Bianchi, M.; Jakovlev, A.; Kumar, P.; Kay, S. M.; Sandvol, E. A.; Alonso, R.; Coira, B.; Comte, D.; Brown, L. D.; Kind, R.

    2011-12-01

    We present here the results obtained using the data from our passive seismic array in the southern Puna plateau between 25°S and 28°S latitude in Argentina and Chile. First, we calculated P and S receiver functions in order to investigate the Moho thickness and other seismic discontinuities in the study area. The RF data show that the northern Puna plateau has a thicker crust and that the Moho topography is more irregular along strike. The seismic structure and thickness of the continental crust and the lithospheric mantle beneath the southern Puna plateau reveal that the LAB is deeper to the north of the array, suggesting lithospheric removal towards the south. We then performed a joint inversion of teleseismic and regional tomographic data in order to study the distribution of velocity anomalies that could help us to better understand the evolution of the Andean elevated plateau and the role of lithosphere-asthenosphere interactions in this region. Low velocities are observed in correlation with young volcanic centers (e.g. Ojos del Salado, Cerro Blanco, Galan) and agree very well with the position of crustal lineaments in the region. This suggests a close relationship between magmatism and lithospheric structures at crustal scale, coinciding with the presence of hot asthenospheric material at the base of the crust, probably induced by lithospheric foundering.

  8. Code for Calculating Regional Seismic Travel Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
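
    The location application reduces to nonlinear least squares on travel-time residuals. A self-contained sketch follows, with straight-ray times in a homogeneous medium standing in for the RSTT predictor (the velocity, station geometry, and noise level are illustrative; a real locator would also solve for origin time and depth).

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    v = 6.0                                          # assumed velocity, km/s
    stations = np.array([[0.0, 0.0], [50.0, 10.0],
                         [20.0, 60.0], [80.0, 40.0]])
    src_true = np.array([35.0, 25.0])

    def travel_times(src):                           # straight-ray predictor
        return np.linalg.norm(stations - src, axis=1) / v

    rng = np.random.default_rng(0)
    t_obs = travel_times(src_true) + 0.05 * rng.standard_normal(len(stations))

    # Systematically update the location estimate to minimize the residuals.
    fit = least_squares(lambda s: travel_times(s) - t_obs, x0=[0.0, 0.0])
    print("estimated epicenter:", fit.x)
    ```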

  9. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.

  10. Statistical reconstruction for cosmic ray muon tomography.

    PubMed

    Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J

    2007-08-01

    Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.

  11. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction.

  12. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. Benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).

  13. A hydraulic tomography approach coupling travel time inversion with steady shape analysis based on aquifer analogue study in coarsely clastic fluvial glacial deposit

    NASA Astrophysics Data System (ADS)

    Hu, R.; Brauchler, R.; Herold, M.; Bayer, P.; Sauter, M.

    2009-04-01

    Information obtained from boreholes rarely supports significant conclusions about the geometry and properties of subsurface geological structures, since soil exploration is representative only of the position from which the sample is taken. Conventional aquifer investigation methods such as pumping tests can characterize the hydraulic properties of a larger area; however, they yield only integral information. This information is insufficient for developing groundwater models, especially contaminant transport models, which require the spatial distribution of the hydraulic properties of the subsurface. Hydraulic tomography is an innovative method with the potential to spatially resolve three-dimensional structures of natural aquifer bodies. The method employs short-term hydraulic tests performed between two or more wells, whereby the pumped intervals (sources) and the observation points (receivers) are separated by double packer systems. In order to optimize the computationally intensive tomographic inversion of transient hydraulic data, we couple two inversion approaches: (a) hydraulic travel time inversion and (b) steady shape inversion. (a) Hydraulic travel time inversion is based on the solution of the travel time integral, which relates the travel time of the maximum variation of a transient hydraulic signal to the diffusivity between source and receiver. The travel time inversion is computationally very efficient and robust; however, it is limited to the determination of diffusivity. To overcome this shortcoming, we use the estimated diffusivity distribution as the starting model for the steady shape inversion, with the goal of separating the estimated diffusivity distribution into its components, hydraulic conductivity and specific storage. (b) The steady shape inversion exploits the fact that under steady shape conditions drawdown varies with time but the hydraulic gradient does not. In this way, transient data can be analyzed with the computational efficiency of a steady state model, which runs hundreds of times faster than transient models. Finally, a specific storage distribution can be calculated from the diffusivity and hydraulic conductivity reconstructions derived from the travel time and steady shape inversions. The groundwork of this study is the aquifer-analogue study of Bayer (1999), in which six parallel profiles of a natural sedimentary body with a size of 16 m x 10 m x 7 m were mapped at high resolution with respect to structural and hydraulic parameters. Based on these results and using geostatistical interpolation methods, Maji (2005) designed a three-dimensional hydraulic model with a resolution of 5 cm x 5 cm x 5 cm. This hydraulic model was used to simulate a large number of short-term pumping tests in a tomographic array. The high-resolution parameter reconstructions obtained from the inversion of the simulated pumping test data demonstrate that the proposed inversion scheme reconstructs the individual architectural elements and their hydraulic properties at higher resolution than conventional hydraulic and geological investigation methods. Bayer P (1999) Aquifer-Analog-Studium in grobklastischen braided river Ablagerungen: Sedimentäre/hydrogeologische Wandkartierung und Kalibrierung von Georadarmessungen, Diplomkartierung am Lehrstuhl für Angewandte Geologie, Universität Tübingen, 25 pp. Maji, R. (2005) Conditional Stochastic Modelling of DNAPL Migration and Dissolution in a High-resolution Aquifer Analog, Ph.D. thesis at the University of Waterloo, 187 pp.
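
    For step (a), the travel time integral has a compact closed form for a Dirac source that is worth keeping in mind: the peak travel time obeys t_peak = (1/6)(∫ ds / √D(s))², so √(6 t_peak) is a line integral of 1/√D and can be inverted exactly like a seismic ray tomography problem. The Python sketch below illustrates this forward relation; the values are purely illustrative and the code is not the authors' implementation.

    ```python
    import numpy as np

    def peak_travel_time(diffusivity, ds):
        """Peak travel time (s) of a Dirac pressure pulse along one ray.

        diffusivity: samples of D(s) along the ray [m^2/s], spaced ds [m];
        uses t_peak = (1/6) * (integral of ds / sqrt(D))**2.
        """
        slowness = 1.0 / np.sqrt(diffusivity)      # "hydraulic slowness"
        return (np.sum(slowness) * ds) ** 2 / 6.0

    # Homogeneous D = 1 m^2/s along a 10 m source-receiver ray:
    D = np.full(100, 1.0)
    print(peak_travel_time(D, ds=0.1))             # -> 16.67 s
    ```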

  14. Joint refraction and reflection travel-time tomography of multichannel and wide-angle seismic data

    NASA Astrophysics Data System (ADS)

    Begovic, Slaven; Meléndez, Adrià; Ranero, César; Sallarès, Valentí

    2017-04-01

    Both near-vertical multichannel (MCS) and wide-angle seismic (WAS) data are sensitive to the same properties of the sampled medium, but they are commonly interpreted and modeled using different approaches. Traditional MCS images provide good information on the position and geometry of reflectors, especially in shallow, commonly sedimentary, layers, but contain few or no refracted waves, which severely hampers the retrieval of velocity information. Compared to MCS data, conventional WAS travel-time tomography uses sparse data (stations are generally spaced several kilometers apart). While it includes refractions that allow velocity information to be retrieved, the data sparsity makes it difficult to define the velocity field and the geometry of geologic boundaries (reflectors) with appropriate resolution, especially at the shallowest crustal levels. A well-known strategy to overcome these limitations is to combine MCS and WAS data in a common inversion; however, the number of available codes that can jointly invert both data types is limited. We have adapted the well-known and widely used joint refraction and reflection travel-time tomography code tomo2d (Korenaga et al., 2000), and its 3D version tomo3d (Meléndez et al., 2015), to handle streamer data and multichannel acquisition geometries. This allows joint travel-time tomographic inversion based on refracted and reflected phases from both WAS and MCS data sets. With a series of synthetic tests following a layer-stripping strategy, we show that combining the two data sets in a joint travel-time tomographic inversion notably reduces the drawbacks of each. First, we tested a traditional travel-time inversion scheme using only WAS data (refracted and reflected phases) with a typical acquisition geometry of one ocean bottom seismometer (OBS) every 10 km. Second, we jointly inverted WAS refracted and reflected phases with streamer (MCS) reflection travel-times only. Finally, we performed a joint inversion of the combined refracted and reflected phases from both data sets. The synthetic MCS data set was produced for an 8 km-long streamer, and the refracted phases used for the streamer were downward continued (projected onto the seafloor). Taking advantage of the high redundancy of the MCS data, the definition of the reflector geometry and of the velocity of the uppermost layers is much improved. Additionally, long-offset wide-angle refracted phases minimize the velocity-depth trade-off of reflection travel-time inversion. As a result, the obtained models have increased accuracy in both velocity and reflector geometry compared to the independent inversion of each data set. This is further corroborated by a statistical parameter uncertainty analysis that explores the effects of the unknown initial model and of data noise in the linearized inversion scheme.

  15. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method for reconstructing physical parameters of the Earth's interior at scales ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e., the solution of the frequency-domain 2D P-SV elastodynamic equations) is based on a low-order discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography, for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e., minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from low frequencies to higher ones defines a multiresolution imaging strategy that helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and of quasi-Newton optimization algorithms (conjugate gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed with METIS, allows most of the inversion to run in parallel. We present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.
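
    A minimal sketch of the multiscale frequency-continuation loop described above, using SciPy's L-BFGS in place of the authors' preconditioned quasi-Newton solver; `misfit_and_gradient` and `synthetic_data` are toy stand-ins (assumed names, trivial physics) for the DG forward solve and the adjoint-state gradient.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def synthetic_data(freq, n=200):
        # Toy stand-in for the observed data at one frequency.
        x = np.linspace(0.0, 1.0, n)
        return np.sin(2.0 * np.pi * freq * x)

    def misfit_and_gradient(model, freq):
        # Toy stand-in for the forward solve plus adjoint-state gradient:
        # least-squares misfit between "modeled" and "observed" data.
        residual = model - synthetic_data(freq)
        return 0.5 * residual @ residual, residual

    model = np.zeros(200)                      # starting model
    for freq in (2.0, 4.0, 8.0):               # invert low frequencies first
        result = minimize(misfit_and_gradient, model, args=(freq,),
                          jac=True, method="L-BFGS-B",
                          options={"maxiter": 50})
        model = result.x                       # warm start for the next band
    ```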

  16. An overview of a highly versatile forward and stable inverse algorithm for airborne, ground-based and borehole electromagnetic and electric data

    NASA Astrophysics Data System (ADS)

    Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper; Fiandaca, Gianluca; Schamper, Cyril; Behroozmand, Ahmad Ali; Binley, Andrew; Nielsen, Emil; Effersø, Flemming; Christensen, Niels Bøie; Sørensen, Kurt; Foged, Nikolaj; Vignoli, Giulio

    2015-07-01

    We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for the electromagnetic and magnetic resonance sounding (MRS) responses, while the geoelectric responses are both 1D and 2D, and the sheet response models a 3D conductive sheet in a conductive host with an overburden of varying thickness and resistivity. In all cases, the focus is placed on delivering full-system forward modelling across all supported data types. Our implementation is modular, meaning that the bulk of the algorithm is independent of data type, making it easy to add support for new types. Once forward response routines and file I/O have been implemented for a given data type, it gains access to a robust and general inversion engine. This engine includes support for mixed data types, arbitrary model parameter constraints, integration of prior information, and calculation of both model parameter sensitivities and depth of investigation. We review our implementation and methodology and show four examples illustrating the versatility of the algorithm. The first example is a laterally constrained joint inversion (LCI) of surface time-domain induced polarisation (TDIP) data and borehole TDIP data. The second example shows a spatially constrained inversion (SCI) of airborne transient electromagnetic (AEM) data. The third example is an inversion and sensitivity analysis of MRS data, where the electrical structure is constrained with AEM data. The fourth example is an inversion of AEM data, where the model is described by a 3D sheet in a layered conductive host.

  17. Improvement of Forest Height Retrieval By Integration of Dual-Baseline PolInSAR Data And External DEM Data

    NASA Astrophysics Data System (ADS)

    Xie, Q.; Wang, C.; Zhu, J.; Fu, H.; Wang, C.

    2015-06-01

    In recent years, many studies have shown that polarimetric synthetic aperture radar interferometry (PolInSAR) is a powerful technique for forest height mapping and monitoring. However, few studies address the effect of terrain slope, which is one of the major limitations for forest height inversion in mountainous forest areas. In this paper, we present a novel forest height retrieval algorithm that integrates dual-baseline PolInSAR data and external DEM data. For the first time, we expand the S-RVoG (Sloped Random Volume over Ground) model for forest parameter inversion to the dual-baseline PolInSAR configuration. The proposed method not only corrects the effect of terrain slope variation efficiently, but also involves more observations, improving the accuracy of the parameter inversion. To demonstrate the performance of the inversion algorithm, a set of quad-pol images acquired at P-band in interferometric repeat-pass mode by the German Aerospace Center (DLR) with the Experimental SAR (E-SAR) system, in the framework of the BioSAR 2008 campaign, has been used to retrieve forest height over the Krycklan boreal forest in northern Sweden. In addition, a high-accuracy external DEM of the experimental area has been used to compute terrain slope information, which is subsequently used as an input parameter of the S-RVoG model. Finally, stand-level in-situ ground-truth heights have been collected to validate the inversion result. The preliminary results show that the proposed inversion algorithm promises to provide much more accurate estimates of forest height than traditional dual-baseline inversion algorithms.
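
    For reference, the volume coherence at the heart of RVoG-type height inversion has a simple closed form. The sketch below implements it, with the terrain slope entering through a slope-corrected incidence angle and vertical wavenumber; this correction is a simplified illustration, not the paper's exact S-RVoG formulation, and all numerical values are assumptions.

    ```python
    import numpy as np

    def volume_coherence(hv, sigma, kz, theta=np.deg2rad(45.0), slope=0.0):
        """Complex volume-only coherence of a random volume over ground.

        hv: vegetation height [m]; sigma: extinction [1/m];
        kz: vertical wavenumber [rad/m]; slope: range terrain slope [rad].
        """
        theta_eff = theta - slope              # slope-corrected incidence
        kz_eff = kz * np.cos(slope)            # simplified slope correction
        z = np.linspace(0.0, hv, 512)
        w = np.exp(2.0 * sigma * z / np.cos(theta_eff))   # extinction weight
        return np.trapz(w * np.exp(1j * kz_eff * z), z) / np.trapz(w, z)

    # Height retrieval then amounts to finding (hv, sigma) whose modeled
    # coherences best match the observed coherences of both baselines.
    gamma = volume_coherence(hv=18.0, sigma=0.05, kz=0.1,
                             slope=np.deg2rad(10.0))
    ```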

  18. Mantle P wave travel time tomography of Eastern and Southern Africa: New images of mantle upwellings

    NASA Astrophysics Data System (ADS)

    Benoit, M. H.; Li, C.; van der Hilst, R.

    2006-12-01

    Much of Eastern Africa, including Ethiopia, Kenya, and Tanzania, has undergone extensive tectonism, including rifting, uplift, and volcanism during the Cenozoic. The cause of this tectonism is often attributed to the presence of one or more mantle upwellings, including starting thermal plumes and superplumes. Previous regional seismic studies and global tomographic models show conflicting results regarding the spatial and thermal characteristics of these upwellings. Additionally, there are questions concerning the extent to which the Archean and Proterozoic lithosphere has been altered by possible thermal upwellings in the mantle. To further constrain the mantle structure beneath Southern and Eastern Africa and to investigate the origin of the tectonism in Eastern Africa, we present preliminary results of a large-scale P wave travel time tomographic study of the region. We invert travel time measurements from the EHB database together with travel time measurements from regional PASSCAL datasets, including the Ethiopia Broadband Seismic Experiment (2000-2002), the Kenya Broadband Seismic Experiment (2000-2002), the Southern Africa Seismic Experiment (1997-1999), the Tanzania Broadband Seismic Experiment (1995-1997), and the Saudi Arabia PASSCAL Experiment (1995-1997). The tomographic inversion uses 3-D sensitivity kernels to combine the different datasets and is parameterized with an irregular grid so that high spatial resolution can be obtained in areas of dense data coverage. The inversion is solved in an adaptive least-squares framework using the LSQR method with norm and gradient damping.
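
    The damped least-squares step can be sketched with SciPy's LSQR: norm damping maps to LSQR's built-in `damp` argument, while gradient damping can be imposed by augmenting the system with a scaled first-difference (roughness) operator. The sparse matrix G below is a random placeholder for the sensitivity-kernel matrix, not the study's data.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    n_rays, n_cells = 1000, 500
    G = sp.random(n_rays, n_cells, density=0.02, format="csr")  # toy kernels
    d = np.random.randn(n_rays)                                 # toy residuals

    # Gradient damping: first-difference operator appended to the system.
    L = sp.diags([-1, 1], [0, 1], shape=(n_cells - 1, n_cells))
    beta = 0.5                                 # gradient-damping weight
    A = sp.vstack([G, beta * L])
    b = np.concatenate([d, np.zeros(n_cells - 1)])

    m = lsqr(A, b, damp=0.1)[0]                # damp = norm damping
    ```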

  19. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions (for instance, wavefield diffusivity and equipartitioning, or zero attenuation) that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the noise distribution, 3D heterogeneous Earth structure and the full physics of seismic wave propagation, in order to improve the current resolution of tomographic images of the Earth. As an initial step towards full waveform ambient noise inversion, we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capability of different misfit functionals to image wave-speed anomalies and the source distribution, and (2) possible source-structure trade-offs, especially the extent to which unresolvable structure could be mapped into the inverted noise source distribution and vice versa.

  20. Subtalar joint stress imaging with tomosynthesis.

    PubMed

    Teramoto, Atsushi; Watanabe, Kota; Takashima, Hiroyuki; Yamashita, Toshihiko

    2014-06-01

    The purpose of this study was to perform stress imaging of hindfoot inversion and eversion using tomosynthesis and to assess the subtalar joint range of motion (ROM) of healthy subjects. The subjects were 15 healthy volunteers with a mean age of 29.1 years. Coronal tomosynthesis stress imaging of the subtalar joint was performed in a total of 30 left and right ankles. A Telos stress device was used to apply the stress load, which was 150 N for both inversion and eversion. Tomographic images in which the posterior talocalcaneal joint could be identified on the neutral-position images were used for the measurements. The angle formed by a line through the lateral articular facet of the posterior talocalcaneal joint and a line through the surface of the trochlea of the talus was measured. The mean change in the angle of the calcaneus with respect to the talus was 10.3 ± 4.8° under inversion stress and 5.0 ± 3.8° under eversion stress, relative to the neutral position. The result was a clearer depiction of the subtalar joint, and the inversion and eversion ROM of the subtalar joint was shown to be about 15° in healthy subjects. Level of evidence: Diagnostic, Level IV.

  1. Construction of the seismic wave-speed model by adjoint tomography beneath the Japanese metropolitan area

    NASA Astrophysics Data System (ADS)

    Miyoshi, Takayuki

    2017-04-01

    The Japanese metropolitan area faces high earthquake and volcanic risks associated with convergent tectonic plates. It is important to clarify the detailed three-dimensional structure for understanding tectonics and predicting strong ground motion. Classical tomographic studies based on ray theory have revealed the seismotectonics and volcanic tectonics of the region; however, it is unknown whether their models reproduce observed seismograms. In the present study, we construct a new seismic wave-speed model by waveform inversion, using adjoint tomography and the spectral element method (SEM) (e.g., Tape et al. 2009; Peter et al. 2011). We used broadband seismograms recorded at NIED F-net stations for 140 earthquakes that occurred beneath the Kanto district. We selected four frequency bands between 5 and 30 s and proceeded from the longer-period bands to the shorter ones in the inversion. Iterations were conducted until the misfit between data and synthetics was minimized. Our SEM model has 16 million grid points covering the metropolitan area of the Kanto district. The model parameters were the Vp and Vs at the grid points; density and attenuation were updated in each iteration according to the new Vs. The initial model was the ray-theoretical tomographic model of Matsubara and Obara (2011). The source parameters were taken from the F-net catalog, while the centroid times were inferred by comparing data and synthetics. We simulated the forward and adjoint wavefields of each event and obtained Vp and Vs misfit kernels from their interaction. The large-scale computation was conducted on the K computer at RIKEN. We obtained the final model (m16) after 16 iterations. In terms of waveform fit, m16 is clearly better than the initial model; the seismograms improved especially in the frequency bands longer than 8 s and for events deeper than 30 km. We found distinct low wave-speed patterns in the S-wave structure. One of the patterns extends in the E-W direction around a depth of 40 km; this zone has been interpreted as serpentinized mantle above the Philippine Sea slab (e.g., Kamiya and Kobayashi 2000). We also found a low wave-speed zone around a depth of 5 km, which appears to extend along the Median Tectonic Line and corresponds to the sedimentary layer. We thank NIED for providing seismic data, and we also thank the researchers who provide the SPECFEM Cartesian program package.

  2. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

    A recursive algorithm for the inverse factorization S^-1 = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A.M.N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
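
    A dense NumPy sketch of the iterative refinement at the core of the method (the recursive decomposition and sparse linear algebra that give linear scaling are omitted): starting from a crude inverse factor, each step multiplies Z by a low-order polynomial in the error δ = I − Z*SZ.

    ```python
    import numpy as np

    def refine_inverse_factor(S, Z, tol=1e-12, max_iter=50):
        """Refine Z so that Z* S Z ~= I, i.e. S^-1 ~= Z Z*."""
        I = np.eye(S.shape[0])
        for _ in range(max_iter):
            delta = I - Z.conj().T @ S @ Z     # departure from identity
            if np.linalg.norm(delta) < tol:
                break
            Z = Z @ (I + 0.5 * delta)          # first-order refinement step
        return Z

    S = np.array([[4.0, 1.0],
                  [1.0, 3.0]])                 # toy Hermitian PD matrix
    Z0 = np.eye(2) / np.sqrt(np.trace(S))      # crude starting guess
    Z = refine_inverse_factor(S, Z0)
    assert np.allclose(Z.conj().T @ S @ Z, np.eye(2), atol=1e-8)
    ```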

  3. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    NASA Astrophysics Data System (ADS)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local-search optimization methods, which find an optimal model in the vicinity of a user-given starting model; the final solution may therefore depend critically on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail, determine the optimal model independently of the starting model, and can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) for the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. Its fundamental principle is inspired by nature: the algorithm mimics the behavior of a flock of birds searching for food. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal models that explain the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e., the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, however, the solution has to satisfy multiple optimization objectives, at least one per data set, so a unique determination of the most successful particle is not possible; only statements about the Pareto optimality of the found solutions can be made. Identifying the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we instead use a decision rule under uncertainty, treating the different objectives as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows robust and cheap identification of the currently leading particle, as sketched below. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, the solution density is expected to be highest in the region that best compromises all objectives, i.e., the region of highest curvature.
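
    A stripped-down sketch of that leader-selection idea: the maximin fitness of each particle is computed from pairwise objective differences, and the particle with the smallest maximin value (the least dominated one) leads the swarm. The two toy objectives and all PSO parameters are illustrative assumptions, and the personal-best memory of standard PSO is omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f1(x): return np.sum((x - 1.0) ** 2, axis=-1)   # toy objective 1
    def f2(x): return np.sum((x + 1.0) ** 2, axis=-1)   # toy objective 2

    def maximin(F):
        """F: (n_particles, n_objectives); smaller -> closer to Pareto front."""
        n = F.shape[0]
        diff = F[:, None, :] - F[None, :, :]   # pairwise objective differences
        mins = diff.min(axis=2)                # min over objectives
        mins[np.arange(n), np.arange(n)] = -np.inf   # exclude self-comparison
        return mins.max(axis=1)                # max over the other particles

    x = rng.uniform(-3.0, 3.0, (30, 2))        # particle positions
    v = np.zeros_like(x)                       # particle velocities
    for _ in range(200):
        F = np.stack([f1(x), f2(x)], axis=1)
        leader = x[np.argmin(maximin(F))]      # decision rule picks the leader
        v = 0.7 * v + 1.5 * rng.random(x.shape) * (leader - x)
        x = x + v
    ```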

  4. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    The solution of inverse problems is a central task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains topical. ANNIT contributes to this effort as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and its response by a vector of data d. The forward problem is usually represented by a unique mapping F(p) = d. The inverse problem is much more complex: the inverse mapping p = G(d) is available in analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M are sought within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are found by generating suitable populations of individuals (models) covering the data and model spaces. The inverse mapping is approximated by three methods: (a) linear regression, (b) the radial basis function network technique, and (c) linear prediction (also known as kriging). ANNIT also has a built-in archive of already evaluated models; archived models are re-used so that the number of forward evaluations is minimized. ANNIT is implemented in both MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and the documentation are available on the Internet for anyone to download. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
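
    Steps (ii)-(iii) can be illustrated in a few lines with SciPy's RBF interpolator: a population of models covering the model space is pushed through the forward mapping, a numerical approximation of G is fitted in data space, and a candidate solution is predicted for the observed data. The forward mapping below is a toy, smoothly invertible stand-in, not one of ANNIT's test problems.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def forward(p):
        """Toy forward mapping F(p) = d (invertible on the sampled domain)."""
        return np.column_stack([np.exp(p[:, 0]) + p[:, 1],
                                p[:, 0] - p[:, 1]])

    rng = np.random.default_rng(1)
    population = rng.uniform(-2.0, 2.0, (200, 2))   # models covering M
    data = forward(population)                      # their responses in D

    G = RBFInterpolator(data, population, smoothing=1e-8)  # numerical G ~ F^-1
    d_obs = forward(np.array([[1.0, 0.5]]))
    p_candidate = G(d_obs)                          # predicted candidate model
    ```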

  5. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both the accuracy and efficiency of iterative algorithms for DVF inversion, and at advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on a rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, which is fed back into the next iterate. The control is designed adaptively to the input DVF with the objective of enlarging the convergence area and expediting convergence. Three particular settings of feedback control are introduced: a constant value over the domain throughout the iteration; values alternating between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations attain a larger convergence region at a faster pace than previous non-adaptive DVF inversion algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control, and spatially variant control renders residuals and errors smaller by at least an order of magnitude than the other schemes, in no more than 10 steps. The inversion results also show remarkable quantitative agreement with the analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
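
    A one-dimensional sketch of the fixed-point iteration with a constant feedback control value (the paper's alternating and spatially variant controls modulate this value per step or per voxel; the DVF below is a toy example): the inverse estimate v must satisfy the inverse-consistency condition v(x) + u(x + v(x)) = 0, and the residual of that condition is fed back into the next iterate. The classical fixed-point iteration is recovered with mu = 1.

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 256)
    u = 0.05 * np.sin(2.0 * np.pi * x)         # forward DVF (toy example)

    def apply_dvf(field, dvf, x):
        """Evaluate a sampled field at the deformed positions x + dvf."""
        return np.interp(x + dvf, x, field)

    v = np.zeros_like(x)                       # inverse DVF estimate
    mu = 0.8                                   # constant feedback control
    for _ in range(50):
        ic_residual = v + apply_dvf(u, v, x)   # inverse-consistency residual
        v = v - mu * ic_residual               # controlled feedback update

    print(np.abs(v + apply_dvf(u, v, x)).max())    # ~0 at convergence
    ```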

  6. Tomographic diagnostic of the hydrogen beam from a negative ion source

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Brombin, M.; Serianni, G.; Pasqualotto, R.

    2011-10-01

    In this paper the tomographic diagnostic developed to characterize the 2D density distribution of a particle beam from a negative ion source is described. In particular, the reliability of this diagnostic has been tested on the geometry of the source for the production of ions of deuterium extracted from an RF plasma (SPIDER). SPIDER is a low-energy prototype negative ion source for the International Thermonuclear Experimental Reactor (ITER) neutral beam injector, aimed at demonstrating the capability to create and extract a current of D- (H-) ions of up to 50 A (60 A), accelerated at 100 kV. The ions are extracted over a wide surface (1.52 × 0.56 m²) with a uniform plasma density that is prescribed to remain within 10% of the mean value. The main goal of the tomographic diagnostic is to measure the beam uniformity with sufficient spatial resolution, and its evolution throughout the pulse duration. To reach this goal, a tomographic algorithm based on the simultaneous algebraic reconstruction technique is developed, and the geometry of the lines of sight is optimized so as to cover the whole area of the beam. Phantoms reproducing different experimental beam configurations are simulated and reconstructed, and the role of noise in the signals is studied. The simulated phantoms are correctly reconstructed, and their two-dimensional spatial nonuniformity is correctly estimated, up to a noise level of 10% with respect to the signal.
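
    A generic sketch of the simultaneous algebraic reconstruction technique (SART) update on which the diagnostic is based; A is a placeholder geometry matrix whose rows discretize the lines of sight and b the corresponding line-integrated signals, and the relaxation factor and nonnegativity clipping are common but illustrative choices.

    ```python
    import numpy as np

    def sart(A, b, n_iter=50, relax=0.5):
        """Nonnegative SART reconstruction of x from b ~= A @ x."""
        row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
        col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            residual = (b - A @ x) / row_sums         # normalized ray residuals
            x += relax * (A.T @ residual) / col_sums  # simultaneous update
            x = np.clip(x, 0.0, None)                 # emissivity >= 0
        return x

    rng = np.random.default_rng(0)
    A = rng.random((40, 25))                   # toy: 40 lines of sight, 5x5 grid
    x_true = rng.random(25)
    x_rec = sart(A, A @ x_true)
    ```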

  7. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  8. Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc

    2007-03-01

    Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck, due to the high operation count and the high demand placed on the memory subsystem. In the past, solving this problem has led to the implementation of special-purpose architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed buses. More recently, there have also been attempts to use Graphics Processing Units (GPUs) to perform the backprojection step. Originally aimed at the gaming market, the Cell Broadband Engine (CBE) processor introduced by IBM, Toshiba and Sony is often considered a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm implemented on a standard PC and on the Cell processor, and compare these results to the performance achievable with FPGA-based boards and high-end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024 × 1024 pixels into a volume of 512 × 512 × 512 voxels. This took 3.2 minutes on the PC (single CPU) and only 13.6 seconds on the Cell.
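
    A compact voxel-driven sketch of a single perspective (cone-beam) backprojection view, the kernel whose arithmetic and memory traffic dominate the timings above. Geometry handling is reduced to nearest-neighbour detector lookups, and all names and conventions are illustrative, not those of the paper's implementation.

    ```python
    import numpy as np

    def backproject_view(volume, proj, src_dist, det_dist, det_size, angle):
        """Accumulate one projection into `volume` (nearest-neighbour)."""
        nz, ny, nx = volume.shape
        c, s = np.cos(angle), np.sin(angle)
        zs, ys, xs = np.meshgrid(np.arange(nz) - nz / 2,
                                 np.arange(ny) - ny / 2,
                                 np.arange(nx) - nx / 2, indexing="ij")
        xr = c * xs + s * ys                   # rotate into the source frame
        yr = -s * xs + c * ys
        mag = (src_dist + det_dist) / (src_dist - yr)   # perspective factor
        u = np.clip((xr * mag + det_size / 2).astype(int), 0, proj.shape[1] - 1)
        v = np.clip((zs * mag + det_size / 2).astype(int), 0, proj.shape[0] - 1)
        volume += proj[v, u] * mag ** 2        # distance weighting
        return volume

    vol = np.zeros((64, 64, 64))
    backproject_view(vol, np.ones((128, 128)), src_dist=200.0,
                     det_dist=100.0, det_size=128, angle=0.0)
    ```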

  9. Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware

    NASA Astrophysics Data System (ADS)

    Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc

    2007-02-01

    Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are the forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general-purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and tuned the code for the CBE processor [1-3]. In addition, we implemented an optimized perspective forwardprojection on the CBE, which allows us to perform statistical image reconstructions like the ordered subset convex (OSC) algorithm [4]. Performance was measured using simulated data with 512 projections per rotation and 512² detector elements. The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.

  10. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits the principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses the radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
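
    The idea can be sketched in a few lines: an ensemble of simulated radiance spectra defines a truncated principal-component basis, channel radiances are compressed to a handful of scores, and the inversion then fits those scores. The low-rank random ensemble below merely mimics the strong inter-channel correlation of real hyperspectral radiances; it is not radiative-transfer output.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_chan, n_samp, rank = 2000, 500, 30
    R = rng.standard_normal((n_chan, rank)) @ rng.standard_normal((rank, n_samp))
    R += 0.01 * rng.standard_normal((n_chan, n_samp))  # small noise floor

    mean = R.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(R - mean, full_matrices=False)
    k = 50
    basis = U[:, :k]                           # leading principal components

    def compress(radiance):
        """Channel radiances -> k principal-component scores."""
        return basis.T @ (radiance - mean[:, 0])

    def reconstruct(scores):
        """k principal-component scores -> channel radiances."""
        return basis @ scores + mean[:, 0]

    y = R[:, 0]
    err = np.linalg.norm(reconstruct(compress(y)) - y) / np.linalg.norm(y)
    print(f"relative reconstruction error with {k} PCs: {err:.1e}")
    ```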

  11. Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, S.; Lindtjorn, O.

    2017-08-15

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.

  12. Particle Swarm Optimization algorithms for geophysical inversion, practical hints

    NASA Astrophysics Data System (ADS)

    Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.

    2008-12-01

    PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo, 2008). Based on this analogy, we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to a vertical electrical sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given for selecting the correct algorithm and tuning the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.

  13. 4D inversion of time-lapse magnetotelluric data sets for monitoring geothermal reservoir

    NASA Astrophysics Data System (ADS)

    Nam, Myung Jin; Song, Yoonho; Jang, Hannuree; Kim, Bitnarae

    2017-06-01

    The productivity of a geothermal reservoir, which is a function of its pore space and fluid-flow paths, varies as the properties of the reservoir change during production. Because these variations cause changes in electrical resistivity, time-lapse (TL) three-dimensional (3D) magnetotelluric (MT) surveys can be used to monitor the productivity of a geothermal reservoir, thanks not only to their sensitivity to electrical resistivity but also to their large depth of penetration. For an accurate interpretation of TL MT data sets, a four-dimensional (4D) MT inversion algorithm has been developed that simultaneously inverts all vintages while accounting for the time coupling between them. However, the changes in the electrical resistivity of deep geothermal reservoirs are usually small, generating only minimal variation in the TL MT responses. Maximizing the sensitivity of the inversion to the changes in resistivity is therefore critical to the success of 4D MT inversion. Thus, we further developed a focused 4D MT inversion method that considers not only the location of the reservoir but also the distribution of fractures newly generated during production. We evaluated our 4D inversion algorithms using synthetic TL MT data sets.

  14. Magnetic particle imaging: from proof of principle to preclinical applications

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Gdaniec, N.; Möddel, M.

    2017-07-01

    Tomographic imaging has become a mandatory tool for the diagnosis of a majority of diseases in clinical routine. Since each method has its pros and cons, a variety of them is regularly used in clinics to satisfy all application needs. Magnetic particle imaging (MPI) is a relatively new tomographic imaging technique that images magnetic nanoparticles with a high spatiotemporal resolution in a quantitative way, and in turn is highly suited for vascular and targeted imaging. MPI was introduced in 2005 and now enters the preclinical research phase, where medical researchers get access to this new technology and exploit its potential under physiological conditions. Within this paper, we review the development of MPI since its introduction in 2005. Besides an in-depth description of the basic principles, we provide detailed discussions on imaging sequences, reconstruction algorithms, scanner instrumentation and potential medical applications.

  15. Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2017-07-01

    Guided wave tomography is a promising tool for accurately determining the remaining wall thickness of corrosion damage, which is among the major concerns of many industries. The Full Waveform Inversion (FWI) algorithm is an attractive guided wave tomography method: it uses a numerical forward model to predict the waveform of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures using both simulations and experiments. The algorithm was shown to achieve a resolution of around 0.7 wavelengths for defects with smooth depth variations from acoustic modeling data, and about 1.5-2 wavelengths from elastic modeling data. Further analysis showed that the reconstruction accuracy also depends on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, compared to conventional algorithms based on the Born approximation.

  16. Stress wave velocity patterns in the longitudinal-radial plane of trees for defect diagnosis

    Treesearch

    Guanghui Li; Xiang Weng; Xiaocheng Du; Xiping Wang; Hailin Feng

    2016-01-01

    Acoustic tomography for urban tree inspection typically uses stress wave data to reconstruct tomographic images of the trunk cross section using an interpolation algorithm. This traditional technique does not take into account the stress wave velocity patterns along the tree height. In this study, we proposed an analytical model for the wave velocity in the longitudinal–...

  17. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured electron energy distribution functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek silicon drift detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform the current leading x-ray inversion algorithms when the error due to counting statistics is high.
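
    The paper's Poisson-regularized algorithm is novel, but the underlying forward model, counts c = K f with Poisson statistics, also admits the classical expectation-maximization (Richardson-Lucy) solution, sketched below as a baseline. The triangular kernel is a crude stand-in for a thick-target Bremsstrahlung response (photon energies uniform below the electron energy); it is an assumption for illustration only.

    ```python
    import numpy as np

    def poisson_em(counts, K, n_iter=200):
        """Maximum-likelihood EEDF estimate for Poisson counts ~ K @ f."""
        f = np.full(K.shape[1], counts.sum() / K.shape[1])  # flat start
        norm = K.sum(axis=0)
        for _ in range(n_iter):
            pred = K @ f
            pred[pred == 0] = 1e-30
            f *= (K.T @ (counts / pred)) / norm  # multiplicative EM update
        return f

    n = 100                                     # energy bins
    K = np.tril(np.ones((n, n))) / np.arange(1, n + 1)[None, :]
    f_true = np.exp(-np.arange(n) / 20.0)       # toy Maxwellian-like EEDF
    rng = np.random.default_rng(3)
    counts = rng.poisson(K @ f_true * 1e4).astype(float)
    f_est = poisson_em(counts, K)
    ```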

  18. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot is kept arbitrary, which reduces the error when solving ill-conditioned systems. Computation of the determinant is made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] is incorporated for computing the matrix inverse, saving unnecessary calculations. These algorithms are highly classroom-oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may largely avoid the introduction of fractions. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.

  19. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

    The complex-resistivity behavior of rocks and ores has long been recognized, and the Cole-Cole model (CCM) is generally used to describe it. It has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from the CCM parameters: DC resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore important to obtain the complex parameters of a geologic body. Complex structures and terrain are difficult to approximate with traditional rectangular grids, so to enhance the numerical accuracy and rationality of the modeling and inversion we use an adaptive finite-element algorithm for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm for its inversion. An adaptive finite element method is applied to solve the 2.5D complex resistivity forward problem of a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex-resistivity electromagnetic fields. Next, a pseudo delta function is used to distribute the electric dipole source. The electromagnetic fields are then expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by anomalous-conductivity inhomogeneities. Finally, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models on unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not need to form the sensitivity matrix explicitly but only to compute products of the sensitivity matrix, or its transpose, with vectors. In addition, the inversion target zones are meshed with fine grids and the background zones with coarse grids, which reduces the number of inversion cells and considerably improves computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The theoretical calculations indicate that modeling and inversion of 2.5D complex resistivity on unstructured grids are feasible. Unstructured grids improve the accuracy of the modeling, but inversion with a large number of cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We acknowledge the support of the National Natural Science Foundation of China (41304094).
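
    The CCM referred to above is a one-line formula, transcribed directly below in its standard (Pelton-type) resistivity form; the parameter values in the example are arbitrary.

    ```python
    import numpy as np

    def cole_cole(omega, rho0, m, tau, c):
        """Complex resistivity rho(omega) of the Cole-Cole model:
        rho = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))."""
        return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

    freqs = np.logspace(-2, 4, 7)              # Hz
    rho = cole_cole(2.0 * np.pi * freqs, rho0=100.0, m=0.3, tau=0.01, c=0.5)
    print(np.abs(rho))                         # amplitude falls with frequency
    ```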

  20. Slab seismicity in the Western Hellenic Subduction Zone: Constraints from tomography and double-difference relocation

    NASA Astrophysics Data System (ADS)

    Halpaap, Felix; Rondenay, Stéphane; Ottemöller, Lars

    2016-04-01

    The Western Hellenic subduction zone is characterized by a transition from oceanic to continental subduction. In the southern, oceanic portion of the system, abundant seismicity reaches intermediate depths of 100-120 km, while the northern, continental portion rarely exhibits deep earthquakes. Our study aims to investigate how this oceanic-continental transition affects fluid release and related seismicity along strike, by focusing on the distribution of intermediate-depth earthquakes. To obtain a detailed image of the seismicity, we carry out a tomographic inversion for P- and S-velocities and double-difference earthquake relocation using a dataset of unprecedented spatial coverage in this area. Here we present the results of these analyses in conjunction with high-resolution profiles from migrated receiver function images obtained from the MEDUSA experiment. We generate tomographic models by inverting data from 237 manually picked, well-locatable events recorded at up to 130 stations. Stations from the permanent Greek network and the EGELADOS experiment supplement the 3-D coverage of the modeled domain, which covers a large part of mainland Greece and the surrounding offshore areas. Corrections for the sphericity of the Earth and our update to the SIMULR16 package, which now allows S-inversion, help improve our previous models. Flexible gridding focuses the inversion on the domains of highest gradient around the slab, and we evaluate the resolution with checkerboard tests. We use the resulting velocity model to relocate earthquakes via the double-difference method, using a large dataset of differential travel times obtained by cross-correlation of seismograms. Tens of earthquakes align along two planes forming a double seismic zone in the southern, oceanic portion of the subduction zone. With increasing subduction depth, the earthquakes appear closer to the center of the slab, outlining probable deserpentinization of the slab and concomitant eclogitization of dry crustal rocks. Against expectations, we relocate one robust deep event at ≈70 km depth in the northern, continental part of the subduction zone.

  1. Probing numerical Laplace inversion methods for two and three-site molecular exchange between interconnected pore structures.

    PubMed

    Silletta, Emilia V; Franzoni, María B; Monti, Gustavo A; Acosta, Rodolfo H

    2018-01-01

    Two-dimensional (2D) nuclear magnetic resonance relaxometry experiments are a powerful tool extensively used to probe the interaction among different pore structures, mostly in inorganic systems. The analysis of the collected experimental data generally consists of a 2D numerical inversion of the time-domain data, from which T2-T2 maps are generated. Over the years, different algorithms for this numerical inversion have been proposed. In this paper, two algorithms for the numerical inversion are tested and compared under different conditions of exchange dynamics: the method based on the Butler-Reeds-Dawson (BRD) algorithm, and the fast iterative shrinkage-thresholding algorithm (FISTA). By constructing a theoretical model, the algorithms were tested for two- and three-site porous media, varying the exchange-rate parameters, the pore sizes and the signal-to-noise ratio (SNR). In order to test the methods under realistic experimental conditions, a challenging organic system was chosen: the molecular exchange rates of water confined in hierarchical porous polymeric networks were obtained for two- and three-site porous media. Data processed with the BRD method were found to be accurate only under certain conditions of the exchange parameters, while data processed with the FISTA method were precise for all the studied parameters, except under extreme SNR conditions. Copyright © 2017 Elsevier Inc. All rights reserved.
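
    For orientation, a minimal FISTA sketch for a nonnegative, l1-regularized 1D Laplace inversion is given below (the 2D T2-T2 case applies the same machinery with Kronecker-structured kernels). The toy kernel, regularization weight and noise level are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def fista_nonneg(K, b, lam=1e-3, n_iter=500):
        """Minimize 0.5*||K x - b||^2 + lam*||x||_1 subject to x >= 0."""
        L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of gradient
        x = z = np.zeros(K.shape[1]); t = 1.0
        for _ in range(n_iter):
            grad = K.T @ (K @ z - b)
            x_new = np.maximum(z - (grad + lam) / L, 0.0)  # prox step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + (t - 1.0) / t_new * (x_new - x)    # Nesterov momentum
            x, t = x_new, t_new
        return x

    tau = np.logspace(-3, 1, 100)              # candidate T2 values (s)
    times = np.linspace(0.0, 2.0, 200)         # acquisition times (s)
    K = np.exp(-times[:, None] / tau[None, :]) # Laplace kernel
    x_true = np.zeros(100); x_true[[30, 60]] = 1.0
    rng = np.random.default_rng(4)
    b = K @ x_true + 0.01 * rng.standard_normal(200)
    x_est = fista_nonneg(K, b)
    ```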

  2. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we properly account for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of such implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters; furthermore, one or two extra modelling runs are needed to calculate the step length. Our approach uses explicit time-stepping by finite differences, 4th order in space and 2nd order in time, a 3D version of the scheme developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields, respectively. A straightforward implementation would require storing the wavefield at all grid points at each time step. We tackled this problem with two different approaches. The first makes better use of resources for models of 300 × 300 × 300 nodes or less, under-sampling the wavefield to reduce the number of stored time steps by an order of magnitude. For bigger models, the wavefield is stored only at the boundaries of the model and then re-injected while the residuals are backpropagated, allowing the correlation to be computed on the fly. In terms of computational resources, the elastic code is an order of magnitude more demanding than the equivalent acoustic code. We have combined shared-memory and distributed-memory parallelisation using OpenMP and MPI, respectively, taking advantage of the increasingly common multi-core processors. We have successfully applied our inversion algorithm to several realistic, complex 3D models with non-linear relations between pressure and shear wave velocities. The shorter wavelengths of the shear waves improve the resolution of the images obtained with respect to a purely acoustic approach.

  3. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the more recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of the interpolation bias of the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of the interpolation bias on the speckle image, the target image interpolation, and the reference image gradient estimation. The interpolation biases of the FA-NR and IC-GN algorithms can differ significantly; their relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias, with relative differences reaching 178%. Since the mean bias errors are insensitive to image noise, the proposed theoretical model remains valid in the presence of noise. To provide more implementation details, the source code is provided as a supplement.

  4. Inferring global upper-mantle shear attenuation structure by waveform tomography using the spectral element method

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Haydar; Romanowicz, Barbara

    2018-06-01

    We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole mantle seismic velocity model, SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of attenuation crustal structure on the retrieved mantle attenuation structure. While a constant 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as starting model for attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. Below 250 km, we observe strong attenuation in the southwestern Pacific and eastern Africa, while low attenuation zones fade beneath most of the cratons. The strong negative correlation of Q^{-1}_μ and VS anomalies at shallow upper-mantle depths points to a common dominant origin for the two, likely due to variations in thermal structure. A comparison with two other global upper-mantle attenuation models shows promising consistency. As we updated the elastic 3-D model in alternate iterations, we found that the VS part of the model was stable, while the ξ structure evolution was more pronounced, indicating that it may be important to include 3-D attenuation effects when inverting for ξ, possibly due to the influence of dispersion corrections on this less well-constrained parameter.

  5. Wavefield complexity and stealth structures: Resolution constraints by wave physics

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Leng, K.

    2017-12-01

    Imaging the Earth's interior relies on understanding how waveforms encode information from heterogeneous multi-scale structure. This relation is given by elastodynamics, but forward modeling in the context of tomography primarily serves to deliver synthetic waveforms and gradients for the inversion procedure. While this is entirely appropriate, it leaves untapped a wealth of complementary inference that can be obtained from the complexity of the wavefield. Here, we are concerned with the imprint of realistic multi-scale Earth structure on the wavefield, and the question of the inherent physical resolution limit of structures encoded in seismograms. We identify parameter and scattering regimes where structures remain invisible as a function of seismic wavelength, structural multi-scale geometry, scattering strength, and propagation path. Ultimately, this will aid in interpreting tomographic images by acknowledging the scope of "forgotten" structures, and shall offer guidance for optimising the selection of seismic data for tomography. To do so, we use our novel 3D modeling method AxiSEM3D which tackles global wave propagation in visco-elastic, anisotropic 3D structures with undulating boundaries at unprecedented resolution and efficiency by exploiting the inherent azimuthal smoothness of wavefields via a coupled Fourier expansion-spectral-element approach. The method links computational cost to wavefield complexity and thereby lends itself well to exploring the relation between waveforms and structures. We will show various examples of multi-scale heterogeneities which appear or disappear in the waveform, and argue that the nature of the structural power spectrum plays a central role in this. We introduce the concept of wavefield learning to examine the true wavefield complexity for a complexity-dependent modeling framework and discriminate which scattering structures can be retrieved by surface measurements. This leads to the question of physical invisibility and the tomographic resolution limit, and offers insight as to why tomographic images still show stark differences for smaller-scale heterogeneities despite progress in modeling and data resolution. Finally, we give an outlook on how we expand this modeling framework towards an inversion procedure guided by wavefield complexity.

  6. Imaging of the Galapagos Plume Using a Network of Mermaids

    NASA Astrophysics Data System (ADS)

    Nolet, G.; Hello, Y.; Chen, J.; Pazmino, A.; Van der Lee, S.; Bonnieux, S.; Deschamps, A.; Regnier, M. M.; Font, Y.; Simons, F.

    2017-12-01

    A network of nine submarine seismographs (Mermaids) floated freely from 2014 to 2016 around the Galapagos islands, with the aim of enhancing the resolving power of deep tomographic images of the mantle plume in this region (see poster by Hello et al. in session S002 for technical details). Analysing a total of 1329 triggered signals transmitted by satellite, we were able to pick the onset times of 434 P waves, 95 PKP and 26 pP arrivals. For the events recorded by at least one Mermaid, these data were complemented with hand-picked onsets from stations on the islands, or on the continent nearby, for a total of 3892 onset times of rays crossing the mantle beneath the Galapagos, many of them with a small standard error estimated at 0.3 s. These data are used in a local inversion using ray theory, as is appropriate for onset times. To compensate for delays acquired in the rest of the Earth, the local model is embedded in a global inversion of P delays from the EHB data set most recently published by the ISC for 2000-2003. By selecting a strongly redundant subset of more than one million EHB P wave arrivals, we determined an objective standard error for these delays of 0.51 s using the method of Voronin et al. (GJI, 2014). Using a combination of (strong) smoothing and (weak) damping, we force the tomographic model to fit the data close to the level of the estimated standard errors. Preliminary images obtained at the time of writing of this abstract indicate a deep-reaching plume that is stronger in the lower mantle than near the surface. Most importantly, the experiment shows how even a limited number of Mermaids can contribute a significant gain in resolution. This is a direct consequence of the fact that they float with abyssal currents, thus avoiding redundancy in raypaths even for aftershocks. The final tomographic images and an analysis of their significance will be the subject of the presentation.

  7. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques must be employed to obtain a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to terminate the iterations, which allows the efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated in two different experiments: (1) velocity inversion from synthetic seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples verify the feasibility of the proposed method for high-resolution velocity inversion.
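
    As a rough illustration of the ingredients named above (not the authors' implementation), the sketch below combines an outer Bregman iteration with an inner proximal forward-backward loop for a generic linear operator G; the TV proximal step is approximated with Chambolle's denoiser from scikit-image, and a discrepancy level sigma stops the outer loop. All names and parameter choices are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def bos_tv_inversion(G, d, sigma, mu=1.0, n_outer=50, n_inner=10):
        """Sketch of Bregmanized operator splitting for
        min TV(m) subject to ||G m - d|| <= sigma.

        Only products with G and G.T are needed; no matrix inversion.
        """
        m = np.zeros(G.shape[1])
        b = np.zeros_like(d)
        L = np.linalg.norm(G, 2) ** 2      # Lipschitz constant of the quadratic
        t = 1.0 / (mu * L)                 # forward-backward step size
        for _ in range(n_outer):
            for _ in range(n_inner):       # proximal forward-backward inner loop
                grad = mu * G.T @ (G @ m - (d + b))
                m = denoise_tv_chambolle(m - t * grad, weight=t)  # TV prox
            b += d - G @ m                 # Bregman update
            if np.linalg.norm(G @ m - d) <= sigma:  # discrepancy stopping rule
                break
        return m
    ```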

  8. Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.

    PubMed

    Borbély, Bence J; Szolgay, Péter

    2017-01-17

    Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive (and therefore off-line) solutions to calculate the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases when a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with its validation on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.
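
    To make the core idea concrete, here is a minimal sketch (not the paper's algorithm or the OpenSim model's conventions) of how a joint angle can be reconstructed from two segment orientations, such as those streamed by inertial sensors; the Euler sequence and example quaternions are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation

    def anatomical_angles(R_parent, R_child, sequence="ZXY"):
        """Joint angles from two segment orientations: express the child
        segment in the parent frame and decompose into an Euler sequence."""
        R_joint = R_parent.inv() * R_child      # child relative to parent
        return R_joint.as_euler(sequence, degrees=True)

    # Example: two orientations measured as quaternions (x, y, z, w).
    R_upper = Rotation.from_quat([0.0, 0.0, 0.0, 1.0])      # upper arm
    R_fore = Rotation.from_quat([0.0, 0.259, 0.0, 0.966])   # forearm, ~30 deg
    print(anatomical_angles(R_upper, R_fore))
    ```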

  9. Calculating tissue shear modulus and pressure by 2D log-elastographic methods

    NASA Astrophysics Data System (ADS)

    McLaughlin, Joyce R.; Zhang, Ning; Manduca, Armando

    2010-08-01

    Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown, and whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇·u that appears in the plane strain model cannot be measured, is unreliably computed from measured data, and can be shown to be an order-one quantity in units of kPa. So here we present a 2D log-elastographic inverse algorithm that (1) simultaneously reconstructs the shear modulus, μ, and the pressure p, which together satisfy a first-order partial differential equation system, with the goal of imaging μ; (2) controls potential exponential growth in the numerical error; and (3) reliably reconstructs the quantity p in the inverse algorithm as compared to the same quantity computed with a forward algorithm. This work generalizes the log-elastographic algorithm in Lin et al (2009 Inverse Problems 25), which uses one displacement component, is derived assuming that the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D log-elastographic algorithm is tested on 2D synthetic data and 2D in vivo data from the Mayo Clinic. We also exhibit examples showing that the 2D log-elastographic algorithm improves the quality of the recovered images as compared to the log-elastographic and direct inversion algorithms.

  10. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intensive. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework for estimating signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
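
    For readers unfamiliar with the greedy-pursuit primitive that SPIGH builds on, the following is a minimal textbook subspace pursuit for recovering a K-sparse x from y ≈ Ax; the hierarchical, MEG-specific machinery of SPIGH is not reproduced here, and all names are illustrative.

    ```python
    import numpy as np

    def subspace_pursuit(A, y, K, n_iter=20):
        """Textbook subspace pursuit for a K-sparse solution of y ≈ A x."""
        support = np.argsort(np.abs(A.T @ y))[-K:]       # initial support
        for _ in range(n_iter):
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ x_s                  # current residual
            cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
            x_c, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
            new_support = cand[np.argsort(np.abs(x_c))[-K:]]  # prune back to K
            if np.array_equal(np.sort(new_support), np.sort(support)):
                break
            support = new_support
        x = np.zeros(A.shape[1])
        x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        return x
    ```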

  11. Digital Oblique Remote Ionospheric Sensing (DORIS) Program Development

    DTIC Science & Technology

    1992-04-01

    The development and performance of a complete oblique ionogram autoscaling and inversion algorithm is presented, including a new autoscaling technique for oblique ionograms based on the ARTIST software (Reinisch and Huang, 1983; Gamache et al., 1985). The inversion algorithm uses a three... OTH radar. Subject terms: Oblique Propagation; Oblique Ionogram Autoscaling; Electron Density Profile Inversion; Simulated...

  12. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.

  13. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the movement behavior of bird flocks and fish schools. In this paper we introduce and use this method for the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
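
    As a minimal illustration of the approach (not the author's exact scheme), a basic PSO loop is shown below; `misfit` would measure the difference between the observed gravity anomaly and the anomaly predicted for candidate fault parameters, and all parameter names and defaults are assumptions.

    ```python
    import numpy as np

    def pso_minimize(misfit, bounds, n_particles=30, n_iter=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer over box-constrained parameters."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        x = rng.uniform(lo, hi, (n_particles, lo.size))  # particle positions
        v = np.zeros_like(x)                             # particle velocities
        pbest = x.copy()
        pbest_f = np.array([misfit(p) for p in x])
        g = pbest[pbest_f.argmin()]                      # global best
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)                   # stay inside bounds
            f = np.array([misfit(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()]
        return g, pbest_f.min()
    ```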

  14. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  15. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can successfully recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
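
    A minimal ABC loop of the kind described is sketched below (an illustration under generic assumptions, not the authors' code); `misfit` would compare measured multiwavelength extinction with extinction modeled for candidate distribution parameters.

    ```python
    import numpy as np

    def abc_minimize(misfit, bounds, n_food=20, n_iter=200, limit=20, seed=0):
        """Minimal artificial bee colony optimizer: employed bees perturb food
        sources, onlookers favor good sources, scouts reset stagnant ones."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        dim = lo.size
        food = rng.uniform(lo, hi, (n_food, dim))   # candidate parameter sets
        f = np.array([misfit(x) for x in food])
        trials = np.zeros(n_food, dtype=int)

        def try_neighbor(i):
            k = rng.integers(n_food - 1)
            k += k >= i                             # random partner != i
            j = rng.integers(dim)
            x = food[i].copy()
            x[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
            x = np.clip(x, lo, hi)
            fx = misfit(x)
            if fx < f[i]:
                food[i], f[i], trials[i] = x, fx, 0
            else:
                trials[i] += 1

        for _ in range(n_iter):
            for i in range(n_food):                 # employed bee phase
                try_neighbor(i)
            weights = 1.0 / (1.0 + f - f.min())     # onlooker selection weights
            for i in rng.choice(n_food, n_food, p=weights / weights.sum()):
                try_neighbor(i)                     # onlooker bee phase
            for i in np.where(trials > limit)[0]:   # scout bee phase
                food[i] = rng.uniform(lo, hi)
                f[i], trials[i] = misfit(food[i]), 0
        return food[f.argmin()], f.min()
    ```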

  16. A Generic 1D Forward Modeling and Inversion Algorithm for TEM Sounding with an Arbitrary Horizontal Loop

    NASA Astrophysics Data System (ADS)

    Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao

    2016-08-01

    We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method that minimizes the initial-model dependence and enhances convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, one inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, increasing to only 0.53 s for data from four receivers at the same depth. For four receivers at different depths, the iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually as the TEM field diffuses. For a stratified earth, inverting data from more than one receiver is useful for noise reduction and yields a more credible layered earth. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve recovery of the resistive layer. Even with down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it. However, our modeling demonstrates remarkable improvement in detecting the resistive layer with receivers in or under this layer.
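
    A bare-bones version of the regularized Gauss-Newton update described above might look as follows, with `forward` and `jacobian` as hypothetical placeholders for the layered-earth TEM modeling and adjoint-based sensitivity routines:

    ```python
    import numpy as np

    def gauss_newton_tikhonov(forward, jacobian, d_obs, m0, lam=1e-2, n_iter=10):
        """Gauss-Newton minimization of ||F(m) - d||^2 + lam * ||m||^2."""
        m = np.asarray(m0, dtype=float)
        for _ in range(n_iter):
            r = d_obs - forward(m)                  # data residual
            J = jacobian(m)                         # sensitivity matrix
            lhs = J.T @ J + lam * np.eye(m.size)    # damped normal equations
            rhs = J.T @ r - lam * m
            m = m + np.linalg.solve(lhs, rhs)       # model update
        return m
    ```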

  17. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values, and extracting this distribution from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching the inversion results from a series of noise-free and noisy simulated decay data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.
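
    The simulation-based assessment can be reproduced in outline with synthetic CPMG data and a generic regularized inversion; the sketch below uses Tikhonov-regularized non-negative least squares as a simple stand-in for UPEN (which instead applies a uniform-penalty regularization), with all values illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Simulate a noisy CPMG decay from a known T2 distribution, then invert it.
    t = np.linspace(0.001, 1.0, 256)              # echo times (s)
    T2 = np.logspace(-3, 0.5, 100)                # candidate T2 values (s)
    K = np.exp(-t[:, None] / T2[None, :])         # multiexponential kernel

    true_f = np.exp(-0.5 * ((np.log(T2) - np.log(0.1)) / 0.3) ** 2)
    rng = np.random.default_rng(0)
    decay = K @ true_f + 0.01 * rng.normal(size=t.size)

    # Tikhonov-regularized NNLS: augment the kernel with a damping block.
    alpha = 0.1
    K_aug = np.vstack([K, alpha * np.eye(T2.size)])
    d_aug = np.concatenate([decay, np.zeros(T2.size)])
    f_est, _ = nnls(K_aug, d_aug)                 # non-negative T2 distribution
    ```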

  18. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model enhancement method. Then, a total-variation regularized blind deconvolution enhancement algorithm for geophysical inversion models is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1D linear and 2D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; the relative convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhanced result is closer to the actual model than the original inversion model according to a numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed, and the overall precision of the model increases by 75%. All of the experiments show that the structural details and numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more information and detailed structure of the actual model are recovered through the proposed enhancement algorithm. The proposed enhancement method can help us gain a clearer insight into inversion results and make better-informed decisions.
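
    The low-pass role of the PSF can be mimicked with a synthetic experiment: blur a blocky "true model" with a Gaussian kernel and deconvolve it. Richardson-Lucy deconvolution from recent scikit-image versions is used here purely as a simple stand-in for the paper's total-variation blind deconvolution; all values are illustrative.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.restoration import richardson_lucy

    x = np.zeros((64, 64)); x[20:40, 25:45] = 1.0       # blocky true model
    g = np.exp(-np.linspace(-2, 2, 15) ** 2)
    psf = np.outer(g, g); psf /= psf.sum()              # normalized Gaussian PSF
    blurred = fftconvolve(x, psf, mode="same")          # smoothed "inversion model"
    enhanced = richardson_lucy(blurred, psf, num_iter=30)  # sharpened model
    ```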

  19. Inverse consistent non-rigid image registration based on robust point set matching

    PubMed Central

    2014-01-01

    Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because it is a unidirectional image matching approach. Improving RPM-based image registration is therefore an important issue. Methods In our work, a consistent image registration approach based on point set matching is proposed that incorporates the property of inverse consistency and improves registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is preserved well by our algorithm even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, demonstrating the convergence of our algorithm. The registration errors for image registration are also evaluated; again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions The results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and reverse transformations between two images. PMID:25559889

  20. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data, because there often exists more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic and one field data examples and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
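
    For reference, a minimal standard fuzzy c-means iteration with Euclidean distances is sketched below; the generalized algorithms discussed above would replace the Euclidean distance with measures adapted to point, linear, quadratic, or exponential petrophysical relationships. Names and defaults are illustrative.

    ```python
    import numpy as np

    def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
        """Standard fuzzy c-means on the rows of X (e.g., crossplot samples)."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], n_clusters))
        U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted centers
            dist = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
            inv = dist ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)        # membership update
        return centers, U
    ```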

  1. Tomographic reconstruction of atmospheric turbulence with the use of time-dependent stochastic inversion.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Ziemann, A; Wilson, D Keith; Arnold, K; Barth, M

    2007-09-01

    Acoustic travel-time tomography allows one to reconstruct temperature and wind velocity fields in the atmosphere. In a recently published paper [S. Vecherin et al., J. Acoust. Soc. Am. 119, 2579 (2006)], a time-dependent stochastic inversion (TDSI) was developed for the reconstruction of these fields from travel times of sound propagation between sources and receivers in a tomography array. TDSI accounts for the correlation of temperature and wind velocity fluctuations both in space and time and therefore yields more accurate reconstruction of these fields in comparison with algebraic techniques and regular stochastic inversion. To use TDSI, one needs to estimate spatial-temporal covariance functions of temperature and wind velocity fluctuations. In this paper, these spatial-temporal covariance functions are derived for locally frozen turbulence, which is a more general concept than the widely used hypothesis of frozen turbulence. The developed theory is applied to the reconstruction of temperature and wind velocity fields in the acoustic tomography experiment carried out by the University of Leipzig, Germany. The reconstructed temperature and velocity fields are presented and errors in the reconstruction of these fields are studied.
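
    The estimator at the core of a (time-independent) stochastic inversion is the classic Gauss-Markov formula; TDSI extends the model covariance to space-time, which this minimal sketch does not attempt to show. All symbols are generic assumptions.

    ```python
    import numpy as np

    def stochastic_inversion(G, d, C_m, C_n):
        """Gauss-Markov estimate m = C_m G^T (G C_m G^T + C_n)^{-1} d, where
        C_m encodes the covariance of the temperature/wind fields and C_n the
        covariance of the travel-time noise."""
        S = G @ C_m @ G.T + C_n                   # data covariance
        return C_m @ G.T @ np.linalg.solve(S, d)  # no explicit matrix inverse
    ```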

  2. A comparative study of controlled random search algorithms with application to inverse aerofoil design

    NASA Astrophysics Data System (ADS)

    Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.

    2018-06-01

    This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
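
    A basic controlled random search step in the spirit of Price's original algorithm is easy to state (the improved variants compared in the article modify the trial-point generation); the sketch below is an illustration under generic assumptions, not the article's code.

    ```python
    import numpy as np

    def crs_minimize(f, bounds, n_pop=50, n_iter=5000, seed=0):
        """Basic controlled random search: reflect a random simplex point
        through the simplex centroid; replace the worst population member
        whenever the trial point improves on it."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        dim = lo.size
        pop = rng.uniform(lo, hi, (n_pop, dim))
        vals = np.array([f(p) for p in pop])
        for _ in range(n_iter):
            idx = rng.choice(n_pop, dim + 1, replace=False)
            centroid = pop[idx[:-1]].mean(axis=0)
            trial = 2.0 * centroid - pop[idx[-1]]      # reflection step
            if np.all((trial >= lo) & (trial <= hi)):  # keep inside bounds
                ft = f(trial)
                worst = vals.argmax()
                if ft < vals[worst]:
                    pop[worst], vals[worst] = trial, ft
        return pop[vals.argmin()], vals.min()
    ```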

  3. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and remove the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver with a suitable metamodel fitted to a database built offline. In a second step, we assess the inversion performance by adding uncertainties to a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact due to the lack of knowledge of some parameters employed to describe the inspection scenario, a situation commonly encountered in the industrial NDE context.

  4. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE PAGES

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-27

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  5. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  6. Crustal Structure of the Paraná Basin from Ambient Noise Tomography

    NASA Astrophysics Data System (ADS)

    Collaço, B.; Assumpcao, M.; Rosa, M. L.; Sanchez, G.

    2013-12-01

    Previous surface-wave tomography in South America (SA) (e.g., Feng et al., 2004; 2007) mapped the main large-scale features of the continent, such as the high lithospheric velocities in cratonic areas and low velocities in the Patagonian province. However, more detailed features, such as the Paraná Basin, have not been mapped with good resolution because of poor path coverage: classic surface-wave tomography has low resolution in low-seismicity areas like Brazil and eastern Argentina. Crustal structure in southern Brazil is poorly known, and most paths used by Feng et al. (2007) in this region are roughly parallel, which prevents good spatial resolution in tomographic inversions. This work is part of a major project to increase knowledge of crustal structure in southern Brazil and eastern Argentina, carried out by IAG-USP (Brazil) in collaboration with UNLP and INPRES (Argentina). To improve resolution for the Paraná Basin we used inter-station dispersion curves derived from correlation of ambient noise for new stations deployed with the implementation of the Brazilian Seismic Network (Pirchiner et al. 2011). This technique, known as ambient noise tomography (ANT), was first applied by Shapiro et al. (2005) and is now expanding rapidly, especially in areas with a high density of seismic stations (e.g. Bensen et al. 2007, Lin et al. 2008, Moschetti et al. 2010). ANT is a well-established method to estimate short-period (< 20 s) and intermediate-period (20-50 s) surface wave speeds at both regional and continental scales (Lin et al. 2008). ANT data processing in this work was similar to that described by Bensen et al. 2007, in four major steps with the addition of a data inversion step. Group velocities between pairs of stations were derived from correlation of two years of ambient noise in the period range 5 to 60 s (see the sketch below for the cross-correlation step). The dispersion curve measurements were made using a modified version of the PGSWMFA (PGplot Surface Wave Multiple Filter Analysis) code, designed by Chuck Ammon (St. Louis University) and successfully applied by Pasyanos et al. (2001); our modified version is no longer event-based and now works with station pairs. For the tomographic group velocity maps, we used the conjugate gradient method with 2nd-derivative smoothing applied by Pasyanos et al. 2001. The group velocity maps were generated on a one-degree grid. For the tomographic inversion, we also added data derived from traditional dispersion measurements for earthquakes in SA. The velocity maps obtained for periods of 10 to 100 s agree generally well with results from previous studies (Feng et al, 2007), validating the use of ANT and contributing to increased resolution of tomography data in SA. The inversion maps obtained with 2nd-derivative smoothing are more unstable at boundary zones for the inversion of sediments and crustal thickness. This can be explained by the smoothness factor, which is not reduced at expected discontinuities such as ocean/continent boundaries. Since the data processing steps are well defined and independent, new paths will be added to the initial database as new stations are deployed with the progress of the Brasis Project (Pirchiner et al. 2011), increasing the resolution and reliability of the results. This work is funded by Petrobras with additional support from CNPq and FAPESP.
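
    The cross-correlation step referenced above reduces, in its simplest form, to correlating two preprocessed noise records; the sketch below (with one-bit normalization, a common simple preprocessing choice, and illustrative names) approximates the inter-station Green's function for one time window.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def noise_cross_correlation(u1, u2, fs, max_lag_s):
        """Cross-correlate two equal-length ambient-noise records; the stack
        of many such windows approximates the inter-station Green's function."""
        a, b = np.sign(u1), np.sign(u2)             # one-bit normalization
        cc = fftconvolve(a, b[::-1], mode="full")   # full cross-correlation
        mid = len(cc) // 2                          # zero-lag index
        n = int(max_lag_s * fs)
        lags = np.arange(-n, n + 1) / fs
        return lags, cc[mid - n: mid + n + 1]
    ```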

  7. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

    The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem for the regularized misfit function. In order to avoid the huge memory requirement and very long runtime of computing the Jacobian sensitivity matrix for the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, each of which can be transformed into a pseudo-forward modeling. This avoids explicit calculation and storage of the full Jacobian matrix, leading to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated with several typical 3-D models with flat and topographic surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
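
    The matrix-free step at the heart of this scheme can be illustrated with a conjugate-gradient solve of the normal equations in which the products Jx and J^T y are supplied as callables (hypothetical placeholders for the two pseudo-forward modelings):

    ```python
    import numpy as np

    def cg_normal_equations(Jx, Jty, d, n_model, n_iter=50, tol=1e-8):
        """Solve (J^T J) m = J^T d by conjugate gradients using only the
        operator products Jx(model_vector) and Jty(data_vector)."""
        m = np.zeros(n_model)
        r = Jty(d)                # initial residual of the normal equations
        p = r.copy()
        rs = r @ r
        for _ in range(n_iter):
            Ap = Jty(Jx(p))       # J^T J p via two forward-like products
            alpha = rs / (p @ Ap)
            m += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return m
    ```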

  8. Plenoptic projection fluorescence tomography.

    PubMed

    Iglesias, Ignacio; Ripoll, Jorge

    2014-09-22

    A new method to obtain the three-dimensional localization of fluorochrome distributions in micrometric samples is presented. It uses a microlens array coupled to the image port of a standard microscope to obtain tomographic data by a filtered back-projection algorithm. Scanning of the microlens array is proposed to obtain a dense data set for reconstruction. Simulation and experimental results are shown and the implications of this approach in fast 3D imaging are discussed.
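
    The reconstruction principle is that of standard parallel-beam filtered back-projection; assuming scikit-image (0.19 or later) is available, a minimal round trip looks like this, with the phantom standing in for a fluorochrome distribution slice:

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    image = shepp_logan_phantom()                     # synthetic test slice
    angles = np.linspace(0.0, 180.0, 60, endpoint=False)
    sinogram = radon(image, theta=angles)             # forward projections
    reconstruction = iradon(sinogram, theta=angles,
                            filter_name="ramp")       # filtered back-projection
    ```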

  9. Chest CT window settings with multiscale adaptive histogram equalization: pilot study.

    PubMed

    Fayad, Laura M; Jin, Yinpeng; Laine, Andrew F; Berkmen, Yahya M; Pearson, Gregory D; Freedman, Benjamin; Van Heertum, Ronald

    2002-06-01

    Multiscale adaptive histogram equalization (MAHE), a wavelet-based algorithm, was investigated as a method of automatic simultaneous display of the full dynamic contrast range of a computed tomographic image. Interpretation times were significantly lower for MAHE-enhanced images compared with those for conventionally displayed images. Diagnostic accuracy, however, was insufficient in this pilot study to allow recommendation of MAHE as a replacement for conventional window display.

  10. Two methods of Haustral fold detection from computed tomographic virtual colonoscopy images

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ananda S.; Tan, Sovira; Yao, Jianhua; Linguraru, Marius G.; Summers, Ronald M.

    2009-02-01

    Virtual colonoscopy (VC) has gained popularity as a new colon diagnostic method over the last decade. VC is a new, less invasive alternative to conventional optical colonoscopy for screening for colorectal polyps and cancer, the second leading cause of cancer-related deaths in industrialized nations. Haustral (colonic) folds serve as important landmarks for virtual endoscopic navigation in the existing computer-aided-diagnosis (CAD) system. In this paper, we propose and compare two different methods of haustral fold detection from volumetric computed tomographic virtual colonoscopy images. The colon lumen is segmented from the input using modified region growing and fuzzy connectedness. The first method for fold detection uses a level set that evolves on a mesh representation of the colon surface, which is obtained from the segmented colon lumen using the Marching Cubes algorithm. The second method, based on a combination of heat diffusion and the fuzzy c-means algorithm, is employed on the segmented colon volume; folds obtained on the colon volume are then transferred to the corresponding colon surface. After experimentation with different datasets, results are found to be promising. The results also demonstrate that the first method has a tendency toward slight under-segmentation, while the second method tends to slightly over-segment the folds.

  11. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering site investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and computational performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.

  12. The preliminary results: Internal seismic velocity structure imaging beneath Mount Lokon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Firmansyah, Rizky, E-mail: rizkyfirmansyah@hotmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id; Kristianto, E-mail: kris@vsi.esdm.go.id

    2015-04-24

    Historical records show that before the 17th century, Mount Lokon had been dormant for approximately 400 years. Between 1350 and 1400, an eruption was recorded at Empung, coming from Mount Lokon's central crater. Subsequently, from 1750 to 1800, Mount Lokon erupted again, causing soil damage and casualties. After 1949, the frequency of Mount Lokon's activity increased dramatically: the eruption interval varied between 1 and 5 years, with an average interval of 3 years, and rest intervals ranged from 8 to 64 years. Then, on June 26th, 2011, a standby alert was set by the Center for Volcanology and Geological Hazard Mitigation. Peak activity occurred on July 4th, 2011, and Mount Lokon erupted continuously until August 28th, 2011. In this study, we carefully analyzed micro-earthquake waveforms and determined the hypocenter locations of those events. We then conducted travel-time seismic tomographic inversion using the SIMULPS12 method to determine the Vp, Vs, and Vp/Vs ratio structures beneath Lokon volcano in order to improve our picture of the subsurface geological structure. During the tomographic inversion, we started from a 1-D seismic velocity model obtained with the VELEST method. Our preliminary results show low Vp, low Vs, and high Vp/Vs beneath Mount Lokon-Empung, which may be associated with weak zones or hot material zones. However, in this study few stations were available for recording micro-earthquake events, so we suggest that future tomographic studies add seismometers in order to improve ray coverage in the region.

  13. P-wave and surface wave survey for permafrost analysis in alpine regions

    NASA Astrophysics Data System (ADS)

    Godio, A.; Socco, L. V.; Garofalo, F.; Arato, A.; Théodule, A.

    2012-04-01

    In various high-mountain environments, estimates of the mechanical properties of slopes and sediments are relevant for linking geo-mechanical properties with the effects of climate change. Two different locations were selected to perform seismic and georadar surveying: the Tsanteleina glacier (Gran Paradiso) and the Blue Lake in Val d'Ayas in the Monterosa massif. The analysis of the seismic and GPR lines allowed us to characterize the silty soil (top layer) and the underlying bedrock. We applied seismic surveying in time-lapse mode to check for the presence of an "active" layer and to estimate the mechanical properties of the moraine material and its sensitivity to permafrost changes. Mechanical properties of sediments and moraines in glacial areas are related to grain size, the compaction of material subjected to past glacial activity, the presence of frozen materials, and the reactivity of the permafrost to climate changes. The Tsanteleina test site has been equipped with sensors to monitor soil and air temperature and with time-domain reflectometry to estimate soil moisture and the freeze-thaw cycle of the uppermost material. Seismic reflections from the top of the permafrost layer are difficult to identify as they are embedded in the source-generated noise. Therefore we estimated seismic velocities from traveltime refraction tomography and surface wave analysis. This approach provides information on compressional and shear waves using a single acquisition layout with a hammer as the source, reducing acquisition time under complex logistical conditions, especially in winter. The seismic survey was performed using 48 vertical geophones with 2 m spacing, and was repeated in two different periods: summer 2011 and winter 2011. Common-offset reflection lines with a 200 MHz GPR system (in summer) permitted investigation of the sediments and provided information on subsoil layering. The processing of seismic data involved tomographic interpretation of traveltime P-wave first arrivals by considering the continuous refraction of the ray paths. Several surface-wave dispersion curves were extracted in the f-k domain along the seismic line and then inverted through a laterally constrained inversion algorithm to obtain a pseudo-2D section of S-wave velocity. Georadar investigation (about 2 km of georadar lines at the first site) confirmed the presence of both fine and coarse sediments in the uppermost layer; the seismic data allowed the moraines to be characterized down to 20-25 meters depth. At an elevation of 2700 m asl, we observed a general decrease of the P-wave traveltimes collected in November, when the near-surface layer was frozen, with respect to the data acquired in June. The frozen layer is responsible for an inversion of P-wave velocity with depth; the higher-velocity (frozen) layer cannot be detected in the tomographic interpretation of the refracted P-wave arrivals. Compressional wave velocity ranges from 700 m/s in the uppermost part to 2000-2500 m/s in the internal part of the sediments, reaching values higher than 5000 m/s at depths of about 20 m. The analysis of surface waves showed a slight increase in S-wave velocity from summer to winter in the depth range between 0 and 5 m.

  14. Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale

    2012-10-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  15. Tomographic reconstruction of tokamak plasma light emission from single image using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.

    2012-01-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  16. Airborne Tomographic Swath Ice Sounding Processing System

    NASA Technical Reports Server (NTRS)

    Wu, Xiaoqing; Rodriquez, Ernesto; Freeman, Anthony; Jezek, Ken

    2013-01-01

    Glaciers and ice sheets modulate global sea level by storing water deposited as snow on the surface, and discharging water back into the ocean through melting. Their physical state can be characterized in terms of their mass balance and dynamics. To estimate the current ice mass balance, and to predict future changes in the motion of the Greenland and Antarctic ice sheets, it is necessary to know the ice sheet thickness and the physical conditions of the ice sheet surface and bed. This information is required at fine resolution and over extensive portions of the ice sheets. A tomographic algorithm has been developed to take raw data collected by a multiple-channel synthetic aperture sounding radar system over a polar ice sheet and convert those data into two-dimensional (2D) ice thickness measurements. Prior to this work, conventional processing techniques only provided one-dimensional ice thickness measurements along profiles.

  17. A resolution-enhancing image reconstruction method for few-view differential phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Guan, Huifeng; Anastasio, Mark A.

    2017-03-01

    It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.

  18. Tomographic reconstruction of an aerosol plume using passive multiangle observations from the MISR satellite instrument

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Davis, Anthony B.; Diner, David J.

    2016-12-01

    We present initial results using computed tomography to reconstruct the three-dimensional structure of an aerosol plume from passive observations made by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. MISR views the Earth from nine different angles at four visible and near-infrared wavelengths. Adopting the 672 nm channel, we treat each view as an independent measure of aerosol optical thickness along the line of sight at 1.1 km resolution. A smoke plume over dark water is selected as it provides a more tractable lower boundary condition for the retrieval. A tomographic algorithm is used to reconstruct the horizontal and vertical aerosol extinction field for one along-track slice from the path of all camera rays passing through a regular grid. The results compare well with ground-based lidar observations from a nearby Micropulse Lidar Network site.

  19. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  20. Assessment of crustal velocity models using seismic refraction and reflection tomography

    NASA Astrophysics Data System (ADS)

    Zelt, Colin A.; Sain, Kalachand; Naumenko, Julia V.; Sawyer, Dale S.

    2003-06-01

    Two tomographic methods for assessing velocity models obtained from wide-angle seismic traveltime data are presented through four case studies. The modelling/inversion of wide-angle traveltimes usually involves some aspects that are quite subjective. For example: (1) identifying and including later phases that are often difficult to pick within the seismic coda, (2) assigning specific layers to arrivals, (3) incorporating pre-conceived structure not specifically required by the data and (4) selecting a model parametrization. These steps are applied to maximize model constraint and minimize model non-uniqueness. However, these steps may cause the overall approach to appear ad hoc, and thereby diminish the credibility of the final model. The effect of these subjective choices can largely be addressed by estimating the minimum model structure required by the least subjective portion of the wide-angle data set: the first-arrival times. For data sets with Moho reflections, the tomographic velocity model can be used to invert the PmP times for a minimum-structure Moho. In this way, crustal velocity and Moho models can be obtained that require the least amount of subjective input, and the model structure that is required by the wide-angle data with a high degree of certainty can be differentiated from structure that is merely consistent with the data. The tomographic models are not intended to supersede the preferred models, since the latter are typically better resolved and more interpretable. This form of tomographic assessment is intended to lend credibility to model features common to the tomographic and preferred models. Four case studies are presented in which a preferred model was derived using one or more of the subjective steps described above. This was followed by conventional first-arrival and reflection traveltime tomography using a finely gridded model parametrization to derive smooth, minimum-structure models. The case studies are from the SE Canadian Cordillera across the Rocky Mountain Trench, central India across the Narmada-Son lineament, the Iberia margin across the Galicia Bank, and the central Chilean margin across the Valparaiso Basin and a subducting seamount. These case studies span the range of modern wide-angle experiments and data sets in terms of shot-receiver spacing, marine and land acquisition, lateral heterogeneity of the study area, and availability of wide-angle reflections and coincident near-vertical reflection data. The results are surprising given the amount of structure in the smooth, tomographically derived models that is consistent with the more subjectively derived models. The results show that exploiting the complementary nature of the subjective and tomographic approaches is an effective strategy for the analysis of wide-angle traveltime data.

  1. Stochastic inversion of ocean color data using the cross-entropy method.

    PubMed

    Salama, Mhd Suhyb; Shen, Fang

    2010-01-18

    Improving the inversion of ocean color data is an ever-continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ship- and space-borne ocean color data. The inversion algorithm is based on the cross-entropy method, in which sets of inherent optical properties are generated and converged to the optimal set using an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in-situ measured, and satellite match-up data sets. Statistical analysis of the validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and the root mean square error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived with R2 > 0.91 and RMSE of log-transformed data less than 0.55. Reliable values of the total backscattering coefficient are also obtained with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data, with R2 above 0.96 for the total absorption and above 0.84 for the backscattering coefficients. The algorithm is self-contained and easy to implement and modify to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
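    A minimal sketch of the cross-entropy machinery described above, assuming a Gaussian sampling distribution and a toy one-band forward model; the elite fraction, smoothing factor, and the stand-in reflectance model are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    def cross_entropy_invert(misfit, mu0, sigma0, n_samples=200, n_elite=20,
                             n_iter=50, smooth=0.7):
        """Cross-entropy method: sample candidate parameter sets, keep the
        elite subset with the lowest misfit, and refit the sampling
        distribution until it concentrates on the optimum."""
        mu = np.asarray(mu0, dtype=float)
        sigma = np.asarray(sigma0, dtype=float)
        for _ in range(n_iter):
            samples = mu + sigma * np.random.standard_normal((n_samples, mu.size))
            scores = np.array([misfit(s) for s in samples])
            elite = samples[np.argsort(scores)[:n_elite]]
            # Smoothed update keeps the search from collapsing prematurely.
            mu = smooth * elite.mean(axis=0) + (1 - smooth) * mu
            sigma = smooth * elite.std(axis=0) + (1 - smooth) * sigma
        return mu

    # Hypothetical two-parameter retrieval (absorption a, backscattering bb)
    # against a toy reflectance proportionality; not the paper's forward model.
    forward = lambda p: p[1] / (p[0] + p[1])
    observed = forward(np.array([0.30, 0.05]))
    misfit = lambda p: float((forward(np.abs(p)) - observed) ** 2)
    print(cross_entropy_invert(misfit, mu0=[0.5, 0.5], sigma0=[0.3, 0.3]))
    ```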

  2. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    DTIC Science & Technology

    2005-04-01

    Genetic algorithms evolved coefficient sets describing inverse transforms and matched forward/inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.

  3. Visual computed tomographic scoring of emphysema and its correlation with its diagnostic electrocardiographic sign: the frontal P vector.

    PubMed

    Chhabra, Lovely; Sareen, Pooja; Gandagule, Amit; Spodick, David H

    2012-03-01

    Verticalization of the frontal P vector in patients older than 45 years is virtually diagnostic of pulmonary emphysema (sensitivity, 96%; specificity, 87%). We investigated the correlation of the P vector and the computed tomographic visual score of emphysema (VSE) in patients with an established diagnosis of chronic obstructive pulmonary disease/emphysema. High-resolution computed tomographic scans of 26 patients with emphysema (age, >45 years) were reviewed to assess the type and extent of emphysema using subjective visual scoring. Electrocardiograms were independently reviewed to determine the frontal P vector. The P vector and VSE were compared for statistical correlation. Both the P vector and VSE were also directly compared with the forced expiratory volume in 1 second (FEV1). The VSE and the orientation of the P vector (∠P) had an overall significant positive correlation (r = +0.68; P = .0001) in all patients, but the correlation was very strong in patients with predominant lower-lobe emphysema (r = +0.88; P = .0004). FEV1 and ∠P had an almost linear inverse correlation in predominant lower-lobe emphysema (r = -0.92; P < .0001). Orientation of the P vector positively correlates with visually scored emphysema. Both ∠P and VSE strongly reflect qualitative lung function in patients with predominant lower-lobe emphysema. A combination of a more vertical ∠P and predominant lower-lobe emphysema reflects severe obstructive lung dysfunction.

  4. A New Comprehensive Model for Crustal and Upper Mantle Structure of the European Plate

    NASA Astrophysics Data System (ADS)

    Morelli, A.; Danecek, P.; Molinari, I.; Postpischl, L.; Schivardi, R.; Serretti, P.; Tondi, M. R.

    2009-12-01

    We present a new comprehensive model of crustal and upper mantle structure of the whole European Plate, from the North Atlantic ridge to the Urals and from North Africa to the North Pole, describing seismic speeds (P and S) and density. Our description of crustal structure merges information from previous studies: large-scale compilations, seismic prospecting, receiver functions, inversion of surface wave dispersion measurements and Green functions from noise correlation. We use a simple description of crustal structure, with laterally varying sediment and crystalline layer thicknesses and seismic parameters. Most original information refers to P-wave speed, from which we derive S-wave speed and density through scaling relations. This a priori crustal model by itself improves the overall fit to observed Bouguer anomaly maps, as derived from GRACE satellite data, over CRUST2.0. The new crustal model is then used as a constraint in the inversion for mantle shear wave speed, based on fitting Love and Rayleigh surface wave dispersion. In the inversion for transversely isotropic mantle structure, we use group speed measurements made on European event-to-station paths, and use a global a priori model (S20RTS) to ensure a fair rendition of earth structure at depth and in border areas with little coverage from our data. The new mantle model improves markedly over global S models in the imaging of shallow asthenospheric (slow) anomalies beneath the Alpine mobile belt, and of fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We map compressional wave speed by inverting ISC travel times (reprocessed by Engdahl et al.) with a nonlinear inversion scheme making use of finite-difference travel time calculation. The inversion is based on an a priori model obtained by scaling the 3D mantle S-wave speed to P. The new model substantially confirms images of descending lithospheric slabs and back-arc shallow asthenospheric regions, shown in other more local high-resolution tomographic studies, but covers the whole range of the European Plate. We also obtain three-dimensional mantle density structure by inversion of GRACE Bouguer anomalies, locally adjusting density and the scaling relation between seismic wave speeds and density. We validate the new comprehensive model through comparison of recorded seismograms with numerical simulations based on SPECFEM3D. This work is a contribution towards the definition of a reference earth model for Europe. To this end, in order to improve model dissemination and comparison, we propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages. We provide tools for manipulating and visualising models, described in this standard format, in Google Earth and GEON IDV.
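    The JSON exchange format for tomographic earth models is only named in this record, not specified. A hypothetical minimal layout, written from Python, might look like the following; every field name here is an assumption for illustration, not the authors' schema.

    ```python
    import json

    # Hypothetical minimal layout for a JSON model-exchange file; the schema
    # actually proposed by the authors is not specified in this record.
    model = {
        "name": "toy_european_model",
        "parameter": "vs",                   # shear wave speed
        "units": "km/s",
        "grid": {
            "lon_deg": [-10.0, 40.0, 1.0],   # start, stop, step
            "lat_deg": [30.0, 72.0, 1.0],
            "depth_km": [50, 100, 200],
        },
        # values indexed [depth][lat][lon]; a single placeholder cell here
        "values": [[[4.5]]],
    }

    with open("model.json", "w") as f:
        json.dump(model, f, indent=2)
    ```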

  5. Cyberinfrastructure for the Unified Study of Earth Structure and Earthquake Sources in Complex Geologic Environments

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Chen, P.; Jordan, T. H.; Olsen, K. B.; Maechling, P.; Faerman, M.

    2004-12-01

    The Southern California Earthquake Center (SCEC) is developing a Community Modeling Environment (CME) to facilitate the computational pathways of physics-based seismic hazard analysis (Maechling et al., this meeting). Major goals are to facilitate the forward modeling of seismic wavefields in complex geologic environments, including the strong ground motions that cause earthquake damage, and the inversion of observed waveform data for improved models of Earth structure and fault rupture. Here we report on a unified approach to these coupled inverse problems that is based on the ability to generate and manipulate wavefields in densely gridded 3D Earth models. A main element of this approach is a database of receiver Green tensors (RGT) for the seismic stations, which comprises all of the spatial-temporal displacement fields produced by the three orthogonal unit impulsive point forces acting at each of the station locations. Once the RGT database is established, synthetic seismograms for any earthquake can be simply calculated by extracting a small, source-centered volume of the RGT from the database and applying the reciprocity principle. The partial derivatives needed for point- and finite-source inversions can be generated in the same way. Moreover, the RGT database can be employed in full-wave tomographic inversions launched from a 3D starting model, because the sensitivity (Fréchet) kernels for travel-time and amplitude anomalies observed at seismic stations in the database can be computed by convolving the earthquake-induced displacement field with the station RGTs. We illustrate all elements of this unified analysis with an RGT database for 33 stations of the California Integrated Seismic Network in and around the Los Angeles Basin, which we computed for the 3D SCEC Community Velocity Model (SCEC CVM3.0) using a fourth-order staggered-grid finite-difference code. For a spatial grid spacing of 200 m and a time resolution of 10 ms, the calculations took ~19,000 node-hours on the Linux cluster at USC's High-Performance Computing Center. The 33-station database with a volume of ~23.5 TB was archived in the SCEC digital library at the San Diego Supercomputer Center using the Storage Resource Broker (SRB). From a laptop, anyone with access to this SRB collection can compute synthetic seismograms for an arbitrary source in the CVM in a matter of minutes. Efficient approaches have been implemented to use this RGT database in the inversions of waveforms for centroid and finite moment tensors and tomographic inversions to improve the CVM. Our experience with these large problems suggests areas where the cyberinfrastructure currently available for geoscience computation needs to be improved.

  6. Shear wave velocity structure in North America from large-scale waveform inversions of surface waves

    USGS Publications Warehouse

    Alsina, D.; Woodward, R.L.; Snieder, R.K.

    1996-01-01

    A two-step nonlinear and linear inversion is carried out to map the lateral heterogeneity beneath North America using surface wave data. The lateral resolution for most areas of the model is of the order of several hundred kilometers. The most obvious feature in the tomographic images is the rapid transition between low velocities in the tectonically active region west of the Rocky Mountains and high velocities in the stable central and eastern shield of North America. The model also reveals smaller-scale heterogeneous velocity structures. A high-velocity anomaly is imaged beneath the state of Washington that could be explained as the subducting Juan de Fuca plate beneath the Cascades. A large low-velocity structure extends along the coast from the Mendocino to the Rivera triple junction and into the continental interior across the southwestern United States and northwestern Mexico. Its shape changes notably with depth. This anomaly largely coincides with the part of the margin where lithosphere is no longer consumed, subduction having been replaced by a transform fault. Evidence for a discontinuous subduction of the Cocos plate along the Middle American Trench is found. In central Mexico a transition is visible from low velocities across the Trans-Mexican Volcanic Belt (TMVB) to high velocities beneath the Yucatan Peninsula. Two elongated low-velocity anomalies, beneath the Yellowstone Plateau and the eastern Snake River Plain volcanic system and beneath central Mexico and the TMVB, seem to be associated with magmatism and partial melting. Another low-velocity feature is seen at depths of approximately 200 km beneath Florida and the Atlantic Coastal Plain. The inversion technique used is based on a linear surface wave scattering theory, which gives tomographic images of the relative phase velocity perturbations in four period bands ranging from 40 to 150 s. In order to find a smooth reference model, a nonlinear inversion based on ray theory is first performed. After correcting for crustal thickness, the phase velocity perturbations obtained from the subsequent linear waveform inversion for the different period bands are converted to a three-layer model of S velocity perturbations (layer 1, 25-100 km; layer 2, 100-200 km; layer 3, 200-300 km). We have applied this method to 275 high-quality Rayleigh waves recorded by a variety of instruments in North America (IRIS/USGS, IRIS/IDA, TERRAscope, RSTN). Sensitivity tests indicate that the lateral resolution is especially good in the densely sampled western continental United States, Mexico, and the Gulf of Mexico.

  7. Tomographic PIV: particles versus blobs

    NASA Astrophysics Data System (ADS)

    Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien

    2014-08-01

    We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built with specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, therefore it favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
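    A minimal sketch of the classical MART update into which a particle-based weight matrix of PSF samples could be inserted. The dense weight matrix W and relaxation exponent mu are assumptions for the sketch; real tomo-PIV codes use sparse storage and camera-wise sweeps.

    ```python
    import numpy as np

    def mart(W, I, n_voxels, n_iter=5, mu=1.0, eps=1e-12):
        """Classical MART: each voxel seen by pixel i is scaled by the ratio
        of the measured intensity to the current projection, raised to a
        relaxed, weight-dependent exponent."""
        E = np.ones(n_voxels)                      # uniform non-zero start
        for _ in range(n_iter):
            for i in range(len(I)):
                proj = W[i] @ E + eps              # current projection of pixel i
                E *= (I[i] / proj) ** (mu * W[i])  # multiplicative update
        return E

    # Toy problem with hypothetical dense weights.
    rng = np.random.default_rng(0)
    W = rng.random((50, 200))
    E_true = np.zeros(200)
    E_true[[20, 60, 140]] = 1.0
    E_rec = mart(W, W @ E_true, n_voxels=200)
    ```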

  8. Steady shape analysis of tomographic pumping tests for characterization of aquifer heterogeneities

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Zhan, Xiaoyong; Butler, James J.; Zheng, Li

    2002-01-01

    Hydraulic tomography, a procedure involving the performance of a suite of pumping tests in a tomographic format, provides information about variations in hydraulic conductivity at a level of detail not obtainable with traditional well tests. However, analysis of transient data from such a suite of pumping tests represents a substantial computational burden. Although steady state responses can be analyzed to reduce this computational burden significantly, the time required to reach steady state will often be too long for practical applications of the tomography concept. In addition, uncertainty regarding the mechanisms driving the system to steady state can propagate to adversely impact the resulting hydraulic conductivity estimates. These disadvantages of a steady state analysis can be overcome by exploiting the simplifications possible under the steady shape flow regime. At steady shape conditions, drawdown varies with time but the hydraulic gradient does not. Thus transient data can be analyzed with the computational efficiency of a steady state model. In this study, we demonstrate the value of the steady shape concept for inversion of hydraulic tomography data and investigate its robustness with respect to improperly specified boundary conditions.

  9. An efficient and accurate approach to MTE-MART for time-resolved tomographic PIV

    NASA Astrophysics Data System (ADS)

    Lynch, K. P.; Scarano, F.

    2015-03-01

    The motion-tracking-enhanced MART (MTE-MART; Novara et al. in Meas Sci Technol 21:035401, 2010) has demonstrated the potential to increase the accuracy of tomographic PIV by the combined use of a short sequence of non-simultaneous recordings. A clear bottleneck of the MTE-MART technique has been its computational cost. For large datasets comprising time-resolved sequences, MTE-MART becomes unaffordable and has been barely applied even for the analysis of densely seeded tomographic PIV datasets. A novel implementation is proposed for tomographic PIV image sequences, which strongly reduces the computational burden of MTE-MART, possibly below that of regular MART. The method is a sequential algorithm that produces a time-marching estimation of the object intensity field based on an enhanced guess, which is built upon the object reconstructed at the previous time instant. As the method becomes effective after a number of snapshots (typically 5-10), the sequential MTE-MART (SMTE) is most suited for time-resolved sequences. The computational cost reduction due to SMTE simply stems from the fewer MART iterations required for each time instant. Moreover, the method yields superior reconstruction quality and higher velocity field measurement precision when compared with both MART and MTE-MART. The working principle is assessed in terms of computational effort, reconstruction quality and velocity field accuracy with both synthetic time-resolved tomographic images of a turbulent boundary layer and two experimental databases documented in the literature. The first is the time-resolved data of flow past an airfoil trailing edge used in the study of Novara and Scarano (Exp Fluids 52:1027-1041, 2012); the second is a swirling jet in a water flow. In both cases, the effective elimination of ghost particles is demonstrated in number and intensity within a short temporal transient of 5-10 frames, depending on the seeding density. The increased value of the velocity space-time correlation coefficient demonstrates the increased velocity field accuracy of SMTE compared with MART.

  10. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, the focus is on time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on the experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
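    The three-stage cost model can be sketched as a simple additive estimate used to rank candidate sites. The function below and all of its parameters (bandwidths, queue waits, ideal task scaling) are illustrative assumptions, not the paper's fitted models.

    ```python
    def workflow_time(data_gb, n_tasks, task_s, bandwidth_gbps, queue_s, n_nodes):
        """Toy three-stage estimate: transfer + queue wait + computation."""
        transfer = data_gb * 8.0 / bandwidth_gbps   # seconds
        compute = n_tasks * task_s / n_nodes        # assumes ideal scaling
        return transfer + queue_s + compute

    # Rank candidate (hypothetical) sites by estimated completion time.
    sites = {
        "cluster_a": dict(bandwidth_gbps=5.0, queue_s=600.0, n_nodes=32),
        "cluster_b": dict(bandwidth_gbps=1.0, queue_s=60.0, n_nodes=128),
    }
    best = min(sites, key=lambda s: workflow_time(500, 2000, 30.0, **sites[s]))
    ```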

  11. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, the focus is on time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on the experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  12. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, the focus is on time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on the experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  13. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    ERIC Educational Resources Information Center

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  14. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, which consists of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by a heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
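    The weighted bipartite matching step can be illustrated with SciPy's Hungarian-algorithm solver. The cost values below are hypothetical, and the priority-list construction from module levels and communication intensity described above is omitted.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical cost matrix: cost[i, j] = estimated finishing time if ready
    # module i is assigned to processor j (computation plus communication).
    cost = np.array([[4.0, 6.0, 5.0],
                     [7.0, 3.0, 8.0],
                     [5.0, 9.0, 2.0]])

    rows, cols = linear_sum_assignment(cost)      # minimum-cost matching
    assignment = dict(zip(rows.tolist(), cols.tolist()))
    total_cost = cost[rows, cols].sum()
    ```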

  15. Comparison among GPR measurements and ultrasonic tomographies with different inversion strategies applied to the basement of an ancient Egyptian sculpture.

    NASA Astrophysics Data System (ADS)

    Sambuelli, Luigi; Bohm, Gualtiero; Capizzi, Patrizia; Cardarelli, Ettore; Cosentino, Pietro; D'Onofrio, Laurent; Marchisio, Mario

    2010-05-01

    By late 2008, one of the most important pieces of the "Museo delle Antichità Egizie" in Turin, the sculpture of the Pharaoh with the god Amun, was planned to be one of the masterpieces of a travelling exhibition in Japan. The "Fondazione Museo delle Antichità Egizie di Torino", which manages the museum, was concerned with the integrity of the basement of the statue, which presents visible signs of restorations dating back to the early 19th century. The questions put by the museum managers were to estimate the internal extension of some visible fractures, to search for unknown internal ones, and to provide information about the overall mechanical strength of the basement. To address the first and second questions, a GPR reflection survey of the basement along three sides was performed and the results were assembled in a 3D rendering. As far as the third question is concerned, two parallel, horizontal 2D ultrasonic tomographies across the basement were made with a source-receiver layout able to acquire, for each section, 723 ultrasonic signals corresponding to different transmitter and receiver positions. The ultrasonic tomographic data were inverted using different software packages based upon different algorithms. The obtained velocity images were then compared with the GPR results and with the joints visible on the basement. A critical analysis of the comparisons is finally presented.

  16. Seismic tomography of the area of the 2010 Beni-Ilmane earthquake sequence, north-central Algeria.

    PubMed

    Abacha, Issam; Koulakov, Ivan; Semmane, Fethi; Yelles-Chaouche, Abd Karim

    2014-01-01

    The region of Beni-Ilmane (District of M'sila, north-central Algeria) was the site of an earthquake sequence that started on 14 May 2010. This sequence, which lasted several months, was triggered by conjugate E-W reverse and N-S dextral faulting. To image the crustal structure of these active faults, we used a set of 1406 well-located aftershock events and applied the local tomography software (LOTOS) algorithm, which includes absolute source location, optimization of the initial 1D velocity model, and iterative tomographic inversion for 3D seismic P- and S-wave velocities (and the Vp/Vs ratio) and source parameters. The patterns of P-wave low-velocity anomalies correspond to the alignments of faults determined from geological evidence, and the P-wave high-velocity anomalies may represent rigid blocks of the upper crust that are not deformed by regional stresses. The S-wave low-velocity anomalies coincide with the aftershock area, where relatively high values of the Vp/Vs ratio (1.78) are observed compared with values in the surrounding areas (1.62-1.66). These high values may indicate high fluid contents in the aftershock area. These fluids could have been released from deeper levels by fault movements during earthquakes and migrated rapidly upwards. This hypothesis is supported by vertical sections across the study area, which show that the major Vp/Vs anomalies are located above the seismicity clusters.

  17. Receiver Function Analyses of Uturuncu Volcano, Bolivia and Lastarria/Cordon Del Azufre Volcanoes, Chile

    NASA Astrophysics Data System (ADS)

    Mcfarlin, H. L.; Christensen, D. H.; Thompson, G.; McNutt, S. R.; Ryan, J. C.; Ward, K. M.; Zandt, G.; West, M. E.

    2014-12-01

    Uturuncu Volcano and a zone between Lastarria and Cordon del Azufre Volcanoes (also called Lazufre) have seen much attention lately because of significant and rapid inflation of one to two centimeters per year over large areas. Uturuncu is located near the Bolivian-Chilean border, and Lazufre is located near the Chilean-Argentine border. The PLUTONS Project deployed 28 broadband seismic stations around Uturuncu Volcano from April 2009 to October 2012, and also deployed 9 stations around Lastarria and Cordon del Azufre volcanoes from November 2011 to April 2013. Teleseismic receiver functions were generated using the time-domain iterative deconvolution algorithm of Ligorria and Ammon (1999) for each volcanic area. These receiver functions were used to better constrain the depths of magma bodies under Uturuncu and Lazufre, as well as the ultra-low-velocity layer within the Altiplano-Puna Magma Body (APMB). The low-velocity zone under Uturuncu is shown to have a top around 10 km depth b.s.l. and is generally around 20 km thick, with regional variations. Tomographic inversion shows a well-resolved, near-vertical, high Vp/Vs anomaly directly beneath Uturuncu that correlates well with a disruption in the receiver function results, which is inferred to be a magmatic intrusion causing a local thickening of the APMB. Preliminary results at Lazufre show the top of a low-velocity zone around 5-10 km b.s.l., with a thickness of 15-30 km.

  18. Dependence of image quality on image operator and noise for optical diffusion tomography

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Graber, Harry L.; Barbour, Randall L.

    1998-04-01

    By applying linear perturbation theory to the radiation transport equation, the inverse problem of optical diffusion tomography can be reduced to a set of linear equations, Wμ = R, where W is the weight matrix, μ is the vector of cross-section perturbations to be imaged, and R is the vector of perturbations in the detector readings. We have studied the dependence of image quality on added systematic error and/or random noise in W and R. Tomographic data were collected from cylindrical phantoms, with and without added inclusions, using Monte Carlo methods. Image reconstruction was accomplished using a constrained conjugate gradient descent method. Results show that accurate images containing few artifacts are obtained when W is derived from a reference state whose optical thickness matches that of the unknown test medium. Comparable image quality was also obtained for unmatched W, but the location of the target becomes more inaccurate as the mismatch increases. Results of the noise study show that image quality is much more sensitive to noise in W than in R, and that the impact of noise increases with the number of iterations. Images reconstructed after pure noise was substituted for R consistently contain large peaks clustered about the cylinder axis, an initially unexpected structure. In other words, random input produces a non-random output. This finding suggests that algorithms sensitive to the evolution of this feature could be developed to suppress noise effects.
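    A minimal stand-in for the constrained reconstruction step: solving Wμ = R by projected gradient descent with an illustrative nonnegativity constraint. The paper uses a constrained conjugate gradient scheme; this simpler sketch only shows the shape of the computation.

    ```python
    import numpy as np

    def projected_gradient(W, R, n_iter=200):
        """Solve W mu = R for the perturbation vector mu by gradient descent,
        clipping to an illustrative nonnegativity constraint each step."""
        step = 1.0 / (np.linalg.norm(W, 2) ** 2)   # safe step from spectral norm
        mu = np.zeros(W.shape[1])
        for _ in range(n_iter):
            mu -= step * (W.T @ (W @ mu - R))      # least-squares gradient step
            np.clip(mu, 0.0, None, out=mu)         # project onto mu >= 0
        return mu
    ```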

  19. Regional-Scale Differential Time Tomography Methods: Development and Application to the Sichuan, China, Dataset

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Thurber, C.; Wang, W.; Roecker, S. W.

    2008-12-01

    We extended our recent development of double-difference seismic tomography [Zhang and Thurber, BSSA, 2003] to the use of station-pair residual differences in addition to event-pair residual differences. Tomography using station-pair residual differences is somewhat akin to teleseismic tomography, but with the sources contained within the model region. Synthetic tests show that the inversion using both event- and station-pair residual differences has advantages in terms of more accurately recovering higher-resolution structure in both the source and receiver regions. We used the Spherical-Earth Finite-Difference (SEFD) travel time calculation method in the tomographic system. The basic concept is the extension of a standard Cartesian FD travel time algorithm [Vidale, 1990] to the spherical case by developing a mesh in radius, co-latitude, and longitude, expressing the FD derivatives in a form appropriate to the spherical mesh, and constructing a "stencil" to calculate extrapolated travel times. The SEFD travel time calculation method is more advantageous in dealing with the heterogeneity and sphericity of the Earth than the simple Earth flattening transformation and the "sphere-in-a-box" approach [Flanagan et al., 2007]. We applied this method to the Sichuan, China data set for the period of 2001 to 2004. The Vp, Vs and Vp/Vs models show that there is a clear contrast across the Longmenshan Fault, where the 2008 M8 Wenchuan earthquake initiated.

  20. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
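    A generic sketch of the class of algorithm discussed, assuming a simulated-annealing-style loop whose move size and acceptance temperature shrink over time; the cooling rate and the toy objective are illustrative, not a clinical cost function.

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step0=1.0, t0=1.0, alpha=0.995, n_iter=5000):
        """Generic annealing loop: random moves whose size and acceptance
        temperature shrink as the search converges, i.e. progressively
        constricted random variates."""
        x, fx, t, step = x0, cost(x0), t0, step0
        for _ in range(n_iter):
            y = [xi + step * random.uniform(-1, 1) for xi in x]
            fy = cost(y)
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy                      # accept (always if better)
            t *= alpha                             # cool the temperature
            step *= alpha                          # constrict the move size
        return x, fx

    # Toy quadratic standing in for a treatment-plan cost function.
    best, val = simulated_annealing(lambda v: sum((vi - 1.0) ** 2 for vi in v),
                                    x0=[0.0, 0.0, 0.0])
    ```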

  1. 1D-VAR Retrieval Using Superchannels

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen

    2008-01-01

    Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-Var retrieval system. We describe a physical inversion algorithm which includes all available channels for the atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into super channels. These super channels are obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both super channel properties and Jacobians in EOF space directly. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computations involved in the inversion process. Results of applying the algorithm to real IASI and NAST data will be shown.
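    A minimal sketch of the superchannel idea: project spectra onto leading eigenvectors (EOFs) of a training set and work in the reduced space. The training data below are random stand-ins, and the channel and superchannel counts are assumptions chosen to mimic an 80:1 compression.

    ```python
    import numpy as np

    # Hypothetical training set of radiance spectra: 500 samples x 8461
    # channels (an IASI-like channel count).
    rng = np.random.default_rng(1)
    spectra = rng.standard_normal((500, 8461))

    mean = spectra.mean(axis=0)
    # Leading eigenvectors (EOFs) of the channel covariance, via SVD.
    _, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    n_super = 100                      # 8461 channels -> ~85:1 compression
    eofs = Vt[:n_super]

    def to_super(radiance):
        """Project a radiance spectrum onto the EOFs, giving super channels."""
        return eofs @ (radiance - mean)

    def from_super(coeffs):
        """Approximate reconstruction of the full spectrum."""
        return mean + eofs.T @ coeffs

    y = to_super(spectra[0])           # 8461 values compressed to 100
    ```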

  2. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.

  3. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.

  4. Transitionless driving on adiabatic search algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay at short running times to inverse-square decay at asymptotic running times. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximating the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from inverse-square decay to inverse-fourth-power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.

  5. An improved grey wolf optimizer algorithm for the inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui

    2018-05-01

    The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulas, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on both forward-modeled data and observed data show that the IGWO algorithm can find the global minimum and rarely becomes trapped in local minima. For further study, inversion results using the IGWO are contrasted with those of particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that, for a given number of iterations, the IGWO and PSO both balance exploration and exploitation better than the SA.
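    A sketch of one GWO iteration with a nonlinear convergence factor. The leader-averaging update follows the standard GWO formulation; the quadratic decay is an assumption, since the IGWO's exact convergence factor and adaptive location-updating strategy are not given in this record.

    ```python
    import numpy as np

    def gwo_step(wolves, fitness, t, t_max):
        """One grey wolf optimizer iteration: each wolf moves toward the
        three current leaders (alpha, beta, delta)."""
        a = 2.0 * (1.0 - (t / t_max) ** 2)         # nonlinear decay from 2 to 0
        order = np.argsort([fitness(w) for w in wolves])
        leaders = wolves[order[:3]]                # alpha, beta, delta
        new = np.empty_like(wolves)
        for i, w in enumerate(wolves):
            pulls = []
            for lead in leaders:
                r1, r2 = np.random.rand(w.size), np.random.rand(w.size)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                pulls.append(lead - A * np.abs(C * lead - w))
            new[i] = np.mean(pulls, axis=0)        # average of the three pulls
        return new

    # Toy misfit (sphere function) standing in for a geoelectrical objective.
    rng = np.random.default_rng(0)
    wolves = rng.uniform(-5.0, 5.0, size=(20, 2))
    for t in range(100):
        wolves = gwo_step(wolves, lambda w: float(np.sum(w**2)), t, 100)
    ```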

  6. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have made to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  7. Colorectal cancer screening with virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Ge, Yaorong; Vining, David J.; Ahn, David K.; Stelts, David R.

    1999-05-01

    Early detection and removal of colorectal polyps have been proven to reduce mortality from colorectal carcinoma (CRC), the second leading cause of cancer deaths in the United States. Unfortunately, traditional techniques for CRC examination (i.e., barium enema, sigmoidoscopy, and colonoscopy) are unsuitable for mass screening because of either low accuracy or poor public acceptance, costs, and risks. Virtual colonoscopy (VC) is a minimally invasive alternative that is based on tomographic scanning of the colon. After a patient's bowel is optimally cleansed and distended with gas, a fast tomographic scan, typically helical computed tomography (CT), of the abdomen is performed during a single breath-hold acquisition. Two-dimensional (2D) slices and three-dimensional (3D) rendered views of the colon lumen generated from the tomographic data are then examined for colorectal polyps. Recent clinical studies conducted at several institutions including ours have shown great potential for this technology to be an effective CRC screening tool. In this paper, we describe new methods to improve bowel preparation, colon lumen visualization, colon segmentation, and polyp detection. Our initial results show that VC with the new bowel preparation and imaging protocol is capable of achieving accuracy comparable to conventional colonoscopy and our new algorithms for image analysis contribute to increased accuracy and efficiency in VC examinations.

  8. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have made to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  9. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.

    2010-11-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks for determining the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities, when compared to single-parameter classifications employed in previous studies. Maximum performance is reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than those obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.

  10. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    NASA Astrophysics Data System (ADS)

    Chand, Shyam; Minshull, Tim A.; Priest, Jeff A.; Best, Angus I.; Clayton, Christopher R. I.; Waite, William F.

    2006-08-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L-38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  11. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    USGS Publications Warehouse

    Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.

    2006-01-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  12. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the companion paper, where the parallel efficiency and the scalability of the code will also be quantified.
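
    A minimal sketch of the factor-once, solve-many pattern described above, assuming a toy 1-D Helmholtz-like operator: SciPy's SuperLU stands in for the massively parallel MUMPS solver, and the grid size, wavenumber and shot positions are illustrative only.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 200                                    # toy 1-D grid
    k = 0.5                                    # toy wavenumber
    # Discretized Helmholtz-like operator, -d2/dx2 - k^2, on a unit-spaced grid
    A = sp.diags([-1.0, 2.0 - k**2, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    lu = spla.splu(A)                          # one expensive LU factorization

    wavefields = []
    for s in (20, 100, 180):                   # three toy shot positions
        b = np.zeros(n)
        b[s] = 1.0                             # point source for this shot
        wavefields.append(lu.solve(b))         # cheap forward/backward substitution
    print(len(wavefields), wavefields[0].shape)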

  13. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used to study the molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve high-dimensional measurement datasets with complicated correlation structure and require rapid, stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. Interpreting this variability among multiple solutions and selecting the most appropriate one can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed as a trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using a forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method, and its comparison with conventional methods, is illustrated using real data for samples with bitumen, water and clay.
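
    As a worked illustration of the parsimony/goodness-of-fit trade-off, the sketch below performs AIC-scored forward stepwise selection of relaxation components on a toy 1-D T2 problem; the time grid, T2 dictionary and noise level are assumptions, not the authors' settings.

    import numpy as np
    from scipy.optimize import nnls

    t = np.linspace(0.001, 1.0, 200)                    # acquisition times (s)
    T2_grid = np.logspace(-3, 0, 50)                    # candidate T2 values (s)
    D = np.exp(-t[:, None] / T2_grid[None, :])          # dictionary of decays

    rng = np.random.default_rng(0)
    d = (0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.4)
         + 0.005 * rng.standard_normal(t.size))         # two-component toy data

    def aic(rss, n, k):
        return n * np.log(rss / n) + 2 * k              # Akaike Information Criterion

    selected, best_aic = [], np.inf
    while True:
        best = None
        for j in sorted(set(range(T2_grid.size)) - set(selected)):
            cols = selected + [j]
            _, rnorm = nnls(D[:, cols], d)              # nonnegative amplitude fit
            score = aic(rnorm**2, t.size, len(cols))
            if best is None or score < best[0]:
                best = (score, j)
        if best[0] >= best_aic:                         # AIC no longer improves: stop
            break
        best_aic, selected = best[0], selected + [best[1]]

    print("selected T2 (s):", np.round(T2_grid[selected], 4))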

  14. Full Seismic Waveform Tomography of the Japan region using Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Steptoe, Hamish; Fichtner, Andreas; Rickers, Florian; Trampert, Jeannot

    2013-04-01

    We present a full-waveform tomographic model of the Japan region based on spectral-element wave propagation, adjoint techniques and seismic data from dense station networks. This model is intended to further our understanding of both the complex regional tectonics and the finite rupture processes of large earthquakes. The shallow Earth structure of the Japan region has been the subject of considerable tomographic investigation. The islands of Japan exist in an area of significant plate complexity: subduction related to the Pacific and Philippine Sea plates is responsible for the majority of the seismicity and volcanism of Japan, whilst smaller micro-plates in the region, including the Okhotsk, Okinawa and Amur, parts of the larger North American and Eurasian plates respectively, contribute significant local complexity. In response to the need to monitor and understand the motion of these plates and their associated faults, numerous seismograph networks have been established, including the 768 station high-sensitivity Hi-net network, the 84 station broadband F-net, and the strong-motion seismograph networks K-net and KiK-net in Japan. We also include the 55 station BATS network of Taiwan. We use this exceptional coverage to construct a high-resolution model of the Japan region from the full-waveform inversion of over 15,000 individual component seismograms from 53 events that occurred between 1997 and 2012. We model these data using spectral-element simulations of seismic wave propagation at a regional scale over an area from 120°-150°E and 20°-50°N to a depth of around 500 km. We quantify differences between observed and synthetic waveforms using time-frequency misfits, allowing us to separate phase and amplitude measurements whilst exploiting the complete waveform at periods of 15-60 seconds. Fréchet kernels for these misfits are calculated via the adjoint method and subsequently used in an iterative non-linear conjugate-gradient optimization. Finally, we employ custom smoothing algorithms to remove the singularities of the Fréchet kernels and artifacts introduced by the heterogeneous coverage in oceanic regions of the model.

  15. Constrained inversion as a hypothesis testing tool, what can we learn about the lithosphere?

    NASA Astrophysics Data System (ADS)

    Moorkamp, Max; Stewart, Fishwick; Jones, Alan G.

    2017-04-01

    Inversion of geophysical data constrained by a reference model is typically used to guide the inversion of low-resolution data towards a geologically plausible solution. For example, a migrated seismic section can provide the location of lithological boundaries for potential field inversions. Here we consider the inversion of long-period magnetotelluric data constrained by models generated through surface wave inversion. In this case, we do not consider the surface wave model inherently better in any sense, and we do not simply want to guide the magnetotelluric inversion towards this model; rather, we want to test the hypothesis that both datasets can be explained by models with similar structure. If the hypothesis test is successful, i.e. we can fit the observations with a conductivity model that is structurally similar to the seismic model, we have found an alternative explanation to the individual inversion, and we can use the differences to learn about the resolution of the magnetotelluric data and improve our interpretation. Conversely, if the test refutes our hypothesis of coincident structure, we have found features in the models that are sensed in fundamentally different ways by the two methods, which is potentially instructive on the nature of the anomalies. We use an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with a tomographic model for the region, to illustrate and test this approach. Here, various conductive structures have been identified that bridge the Moho. Furthermore, the thickness of the lithosphere inferred from the different methods differs. In both cases the question is to what extent this is a result of the ill-posed nature of inversion and to what extent these differences can be reconciled. Thus this dataset is an ideal test case for our hypothesis testing approach. Finally, we will demonstrate how we can use the results of the constrained inversion to extract conductivity-velocity relationships in the region and gain further insight into the composition and thermal structure of the lithosphere.
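
    The abstract does not state how structural similarity is imposed; one common choice in joint inversion is the cross-gradient measure of Gallardo and Meju, sketched here on a toy 2-D model pair purely for illustration.

    import numpy as np

    def cross_gradient(m1, m2):
        """Pointwise cross product of the two models' 2-D gradients; it is zero
        wherever the models' boundaries are parallel (structurally similar)."""
        g1y, g1x = np.gradient(m1)
        g2y, g2x = np.gradient(m2)
        return g1x * g2y - g1y * g2x

    conductivity = np.random.default_rng(1).random((50, 50))   # toy model 1
    velocity = 2.0 * conductivity + 1.0                        # co-structured toy model 2
    print(np.abs(cross_gradient(conductivity, velocity)).max())  # ~0 by design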

  16. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    NASA Astrophysics Data System (ADS)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered, based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems, though, the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS), a quasi-Newton descent method, is usually considered the best choice for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will be presented first. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large-ensemble Monte-Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
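
    A minimal sketch of where a diagonal preconditioner enters L-BFGS: the standard two-loop recursion applies the approximate inverse Hessian to a gradient, with the initial guess H0 supplied as a diagonal. Variable names and the toy inputs are illustrative, not the authors' implementation.

    import numpy as np

    def lbfgs_direction(grad, s_list, y_list, h0_diag):
        """Apply the L-BFGS inverse-Hessian approximation to grad.
        s_list, y_list: the latest m step / gradient-difference pairs.
        h0_diag: the diagonal preconditioner discussed above."""
        q = grad.copy()
        alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):
            rho = 1.0 / np.dot(y, s)
            a = rho * np.dot(s, q)
            alphas.append((a, rho, s, y))
            q -= a * y
        r = h0_diag * q                        # the preconditioning step
        for a, rho, s, y in reversed(alphas):
            b = rho * np.dot(y, r)
            r += (a - b) * s
        return -r                              # quasi-Newton descent direction

    g = np.array([1.0, -2.0])                  # toy gradient
    s_list = [np.array([0.1, 0.0])]            # one stored step
    y_list = [np.array([0.2, 0.0])]            # one stored gradient difference
    print(lbfgs_direction(g, s_list, y_list, h0_diag=np.ones(2)))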

  17. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

    Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but their computation times have typically been prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. In the past, reducing computation time typically required reducing the sample size or scan resolution. Alternatively, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to compute only interactions above a threshold, which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved, as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples, we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best-fit unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
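
    The TNT code itself is not shown in the record, so the sketch below sets up the same kind of non-negative spatial-domain inversion using SciPy's classic active-set NNLS solver as a stand-in; the field kernel G is a toy stand-in for an actual dipolar forward matrix.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)
    n_obs, n_dipoles = 120, 60
    G = np.abs(rng.standard_normal((n_obs, n_dipoles)))   # toy field kernel
    m_true = np.zeros(n_dipoles)
    m_true[[5, 30, 47]] = [1.0, 2.0, 0.5]                 # sparse moment magnitudes
    b = G @ m_true + 0.01 * rng.standard_normal(n_obs)    # measured field map

    m_est, resid = nnls(G, b)                             # non-negativity enforced
    print("residual:", round(resid, 4),
          "nonzeros:", np.count_nonzero(m_est > 1e-6))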

  18. Regional P-wave Tomography in the Caribbean Region for Plate Reconstruction

    NASA Astrophysics Data System (ADS)

    Li, X.; Bedle, H.; Suppe, J.

    2017-12-01

    The complex plate-tectonic interactions around the Caribbean Sea have been studied and interpreted by many researchers, but questions still remain regarding the formation and subduction history of the region. Here we report current progress towards creating a new regional tomographic model, with better lateral and spatial coverage and higher resolution than has been presented previously. This new model will provide improved constraints on the plate-tectonic evolution around the Caribbean Plate. Our three-dimensional velocity model is created using a taut-spline parameterization. The inversion is computed with the code of VanDecar (1991), which is based on ray theory. The seismic data used in this inversion are absolute P-wave arrival times from over 700 global earthquakes recorded by over 400 stations near the Caribbean. Over 25,000 arrival times were picked and quality-checked within the 0.01-0.6 Hz frequency band using Crazyseismic, a MATLAB GUI-based software package. The picked delay-time data are analyzed and compared with other studies before running the inversion, in order to examine the quality of our dataset. From our initial observations of the delay-time data, the more equalized the ray azimuth coverage, the smaller the deviation of the observed travel times from the theoretical travel times. Networks around the NE and SE sides of the Caribbean Sea generally have better ray coverage and smaller delay times. Specifically, seismic rays reaching SE Caribbean networks, such as the XT network, generally pass through slabs under South America, Central America, the Lesser Antilles, the southwest Caribbean, and the North Caribbean transform boundary, which leads to slightly positive average delay times. In contrast, the Puerto Rico network records seismic rays passing through regions that may lack slabs in the upper mantle and shows slightly negative or near-zero average delay times. These results agree with previous tomographic models. Based on our delay-time observations, slabs and velocity structures near the east side of the Caribbean plate might be better imaged due to the denser ray coverage. More caution in selecting the seismic data for inversion on the western margin of the Caribbean will be required to avoid possible smearing effects and artifacts from unequal ray path distributions.

  19. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency, an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
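
    A minimal sketch of the iteratively reweighted least-squares outer loop for a TV-like penalty, on a toy 1-D problem solved with dense linear algebra; the paper's randomized GSVD, alternating-direction weighting and 3-D gravity kernel are beyond this illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 80
    L = np.diff(np.eye(n), axis=0)                  # first-difference operator
    G = rng.standard_normal((60, n))                # toy forward operator
    m_true = np.zeros(n)
    m_true[30:55] = 1.0                             # blocky model with sharp edges
    d = G @ m_true + 0.01 * rng.standard_normal(60)

    alpha, eps, m = 1.0, 1e-4, np.zeros(n)
    for _ in range(15):                             # IRLS outer iterations
        w = 1.0 / np.sqrt((L @ m) ** 2 + eps**2)    # reweighting approximates |Lm|
        A = G.T @ G + alpha * L.T @ (w[:, None] * L)
        m = np.linalg.solve(A, G.T @ d)             # regularized LS inner solve
    print("model samples:", np.round(m[[20, 40, 70]], 2))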

  20. Objectives and Layout of a High-Resolution X-ray Imaging Crystal Spectrometer for the Large Helical Device (LHD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitter, M; Gates, D; Monticello, D

    A high-resolution X-ray imaging crystal spectrometer, whose concept was tested on NSTX and Alcator C-Mod, is being designed for LHD. This instrument will record spatially resolved spectra of helium-like Ar16+ and provide ion temperature profiles with spatial and temporal resolutions of < 2 cm and ≥ 10 ms. The stellarator equilibrium reconstruction codes, STELLOPT and PIES, will be used for the tomographic inversion of the spectral data. The spectrometer layout and instrumental features are largely determined by the magnetic field structure of LHD.

  1. Portable Ultrasound Imaging of the Brain for Use in Forward Battlefield Areas

    DTIC Science & Technology

    2011-03-01

    ultrasound measurement of skull thickness and sound speed, phase correction of beam distortion, the tomographic reconstruction algorithm, and the final...produce a coherent imaging source. We propose a corrective technique that will use ultrasound-based phased -array beam correction [3], optimized...not expected to be a significant factor in the ability to phase -correct the imaging beam . In addition to planning (2.2.1), the data is also be used

  2. Three-dimensional propagation in near-field tomographic X-ray phase retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruhlandt, Aike, E-mail: aruhlan@gwdg.de; Salditt, Tim

    An extension of phase retrieval algorithms for near-field X-ray (propagation) imaging to three dimensions is presented, enhancing the quality of the reconstruction by exploiting previously unused three-dimensional consistency constraints. The approach is based on a novel three-dimensional propagator and is derived for the case of optically weak objects. It can be easily implemented in current phase retrieval architectures, is computationally efficient and reduces the need for restrictive prior assumptions, resulting in superior reconstruction quality.

  3. Quantitative analysis of SMEX'02 AIRSAR data for soil moisture inversion

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Njoku, E.; Jackson, T.

    2003-01-01

    This paper discusses in detail the characteristics of the AIRSAR data acquired, and provides an initial quantitative assessment of the accuracy of the radar inversion algorithms under these vegetated conditions.

  4. Concurrency control for transactions with priorities

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    Priority inversion occurs when a process is delayed by the actions of another process with less priority. With atomic transactions, the concurrency control mechanism can cause delays, and without taking priorities into account can be a source of priority inversion. Three traditional concurrency control algorithms are extended so that they are free from unbounded priority inversion.

  5. Inverse scattering approach to improving pattern recognition

    NASA Astrophysics Data System (ADS)

    Chapline, George; Fu, Chi-Yung

    2005-05-01

    The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.

  6. A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing

    Large-scale matrix inversion plays an important role in many applications. However, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a factor of 2.6 speedup and a maximum power-performance ratio of 41 can be achieved compared to a Pentium dual-core CPU with two SSE threads.

  7. Inverse Scattering Approach to Improving Pattern Recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapline, G; Fu, C

    2005-02-15

    The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.

  8. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun

    2014-04-01

    We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods (e.g. Occam's inversion) parameterize the medium into a large number of fixed-thickness layers and reconstruct only the conductivities, which prevents recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive the analytic expression of the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces among the inverted parameters significantly improves the results: the algorithm can not only reconstruct the sharp interfaces between layers, but also obtain conductivities close to their true values.
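
    A generic sketch of one regularized Gauss-Newton update of the kind the abstract describes, with the Fréchet derivative matrix J supplied by the caller; the tiny linear demo problem and all names are illustrative, not the authors' code.

    import numpy as np

    def regularized_gn_step(m, d_obs, forward, jacobian, lam, W):
        """One iteration: m + (J^T J + lam W^T W)^{-1} J^T (d_obs - F(m))."""
        r = d_obs - forward(m)                   # data residual
        J = jacobian(m)                          # sensitivities w.r.t. all parameters
        H = J.T @ J + lam * W.T @ W              # regularized normal equations
        return m + np.linalg.solve(H, J.T @ r)

    A = np.array([[2.0, 0.3], [0.1, 1.5], [0.7, 0.9]])   # toy linear forward operator
    m_true = np.array([1.0, 2.0])
    step = regularized_gn_step(np.zeros(2), A @ m_true,
                               lambda m: A @ m, lambda m: A,
                               lam=1e-6, W=np.eye(2))
    print(step)   # close to m_true after one step on this linear problem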

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  10. Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data

    NASA Astrophysics Data System (ADS)

    Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.

    2011-12-01

    We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP cases, discretization is based on rectangular cells, where each cell has an unknown resistivity in DC modelling, a resistivity and a chargeability in time-domain IP modelling, and a complex resistivity in spectral IP modelling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables being solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions, and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface. Although the code is parallelized for multi-core CPUs, it is not as fast as machine code; for large datasets, one should consider transferring parts of the code to C or Fortran through MEX files. This code is available through EPA's website at the following link: http://www.epa.gov/esd/cmb/GeophysicsWebsite/index.html Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
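
    For readers working in Python rather than Matlab, the eikonal forward step can be reproduced with the third-party scikit-fmm package (assuming it is installed); the grid, source position and velocity values below are illustrative only.

    import numpy as np
    import skfmm   # scikit-fmm: fast marching method, a stand-in for the Matlab code

    n = 101
    phi = np.ones((n, n))
    phi[50, 10] = -1.0                           # zero contour marks the source
    speed = np.full((n, n), 1500.0)              # background velocity (m/s)
    speed[:, 60:] = 3000.0                       # fast block the waves refract around
    t = skfmm.travel_time(phi, speed, dx=10.0)   # first-arrival travel times (s)
    print(float(t[50, 90]))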

  11. Airway and tissue loading in postinterrupter response of the respiratory system - an identification algorithm construction.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper offers an enhancement of the classical interrupter technique algorithm dedicated to respiratory mechanics measurements. The idea consists in exploiting the information contained in post-occlusional transient states during indirect measurement of parameter characteristics by model identification. This requires an inverse analogue adequate to the general behavior of the real system and a reliable parameter estimation algorithm. The latter was the subject of the work reported here, which showed the potential of the approach for separating airway and tissue responses in the case of short-term excitation by interrupter valve operation. Investigations were conducted in a regime of forward-inverse computer experiments.

  12. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    Attitude information for geostationary satellites is difficult to obtain, since in space object surveillance they appear as non-resolved images on ground observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to address the strongly non-linear character of the photometric inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the limited applicability of UKF-based attitude inversion and on the particle degradation and dilution of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the particle degradation and depletion problems of PF-based attitude inversion and the unsuitability of the UKF for strongly non-linear attitude inversion. In addition, the inversion accuracy is clearly superior to that of the UKF and PF, and even with large initial attitude errors the method can invert the attitude with few particles and high precision.
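
    For orientation, the sketch below implements the plain bootstrap particle filter that the UPF improves upon by redesigning the importance density; the 1-D linear-Gaussian toy system is an assumption chosen only to show the propagate-weight-resample cycle.

    import numpy as np

    rng = np.random.default_rng(4)
    n_p = 500                                      # particle count
    x_true = 0.0
    particles = rng.standard_normal(n_p)
    for _ in range(30):
        x_true = 0.9 * x_true + 0.1 * rng.standard_normal()            # true dynamics
        z = x_true + 0.05 * rng.standard_normal()                      # noisy measurement
        particles = 0.9 * particles + 0.1 * rng.standard_normal(n_p)   # propagate
        w = np.exp(-0.5 * ((z - particles) / 0.05) ** 2) + 1e-300      # likelihood weights
        w /= w.sum()
        particles = particles[rng.choice(n_p, n_p, p=w)]               # resample
    print("estimate:", round(particles.mean(), 3), "truth:", round(x_true, 3))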

  13. Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah

    2017-04-01

    Inversion of surface wave dispersion curves, which is highly nonlinear, presents difficulties for traditional linearized inverse methods due to strong dependence on the initial model, the possibility of trapping in local minima, and the need to evaluate partial derivatives. Modern global optimization methods such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) can overcome these difficulties in surface wave analysis. GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks and fish swarms. The utility of these methods requires a plausible convergence rate, acceptable relative error and optimum computation cost, all of which are important for modelling studies. Even though the PSO and GA processes appear similar, the crossover operation of GA is not used in PSO, and GA's mutation is a stochastic process changing the genes within chromosomes. Unlike in GA, the particles in the PSO algorithm change their positions with velocities set according to each particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate the S-wave velocities and thicknesses of a layered earth model from a Rayleigh wave dispersion curve, compared these results with GA, and emphasize the advantages of PSO for geophysical modelling studies, considering its rapid convergence, low misfit error and computation cost.
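
    A bare-bones sketch of the PSO loop described above; the misfit is a stand-in quadratic rather than a real dispersion-curve forward model, and the bounds, inertia and acceleration constants are typical textbook values, not the authors'.

    import numpy as np

    rng = np.random.default_rng(5)
    target = np.array([300.0, 500.0, 10.0])          # "true" model (vs1, vs2, h1)
    misfit = lambda m: np.sum((m - target) ** 2)     # placeholder objective

    n_part, dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5
    x = rng.uniform([100, 100, 1], [1000, 1000, 50], (n_part, dim))
    v = np.zeros((n_part, dim))
    p_best = x.copy()                                # per-particle best positions
    p_val = np.array([misfit(m) for m in x])
    g_best = p_best[p_val.argmin()].copy()           # swarm best position

    for _ in range(200):
        r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # velocity update
        x = x + v
        vals = np.array([misfit(m) for m in x])
        better = vals < p_val
        p_best[better], p_val[better] = x[better], vals[better]
        g_best = p_best[p_val.argmin()].copy()
    print(np.round(g_best, 1))                       # converges towards target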

  14. Performance comparisons on spatial lattice algorithm and direct matrix inverse method with application to adaptive arrays processing

    NASA Technical Reports Server (NTRS)

    An, S. H.; Yao, K.

    1986-01-01

    The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. The advantages are a fast convergence rate and computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.

  15. MUSIC algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to discover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.

  16. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or matrix determinants. These are difficult to program and especially hard to realize in hardware, and the computation cost of such algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware; it completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is also compared with theirs. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
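
    A small sketch of the orthogonal-vector-projection idea: abundances from dot products only, with no matrix inversion. The two three-band endmembers are toy values chosen for illustration.

    import numpy as np

    def orthogonal_vector(e_i, others):
        """Gram-Schmidt: remove from e_i its components along the other endmembers."""
        basis = []
        for e in others:
            q = e.astype(float).copy()
            for b in basis:
                q -= (q @ b) * b
            basis.append(q / np.linalg.norm(q))
        u = e_i.astype(float).copy()
        for b in basis:
            u -= (u @ b) * b
        return u

    E = [np.array([1.0, 0.2, 0.1]), np.array([0.1, 1.0, 0.3])]   # toy endmembers
    x = 0.6 * E[0] + 0.4 * E[1]                                  # mixed pixel
    for i, e in enumerate(E):
        u = orthogonal_vector(e, E[:i] + E[i + 1:])
        print(f"abundance {i}: {x @ u / (e @ u):.3f}")           # 0.600 then 0.400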

  17. A Semianalytical Ocean Color Inversion Algorithm with Explicit Water Column Depth and Substrate Reflectance Parameterization

    NASA Technical Reports Server (NTRS)

    Mckinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2015-01-01

    A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(488) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.

  18. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    inverse transform process. 2. BACKGROUND The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared

  19. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods used in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods, as well as the conventional least squares inversion.

  20. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    Satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming, as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase, in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and a smart sampling technique, and an operational phase, in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors, with their unprecedented spectral and spatial resolution and the associated large increases in the amount of data.
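
    A toy sketch of the FP-ILM two-phase structure, with a stand-in exponential "forward model" and a small scikit-learn regressor in place of the operational radiative transfer code and machine learning setup; all names and values here are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)

    def forward(state):                               # toy "full-physics" model
        h = state[:, [0]]                             # e.g. a plume height
        wl = np.linspace(0.0, 1.0, 16)                # toy spectral grid
        return np.exp(-wl[None, :] * h) + 0.01 * rng.standard_normal((len(h), 16))

    # Training phase: sample states, simulate radiances, learn the inverse map
    states = rng.uniform(0.5, 5.0, (2000, 1))
    radiances = forward(states)
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(radiances, states.ravel())

    # Operational phase: the learned operator inverts new "measurements" quickly
    truth = np.array([[1.7], [3.9]])
    print(np.round(model.predict(forward(truth)), 2))  # roughly recovers the truth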
