Sample records for waveform inversion algorithm

  1. Feasibility of waveform inversion of Rayleigh waves for shallow shear-wave velocity using a genetic algorithm

    USGS Publications Warehouse

    Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.

    2011-01-01

    Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to laterally homogeneous (or very smoothly laterally heterogeneous) earth models. Waveform inversion directly fits the waveforms on seismograms and hence does not have such a limitation. The waveforms of Rayleigh waves are strongly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models with the GA. Final solutions are found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully with errors of no more than 10% for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, where conventional dispersion-curve-based inversion methods are challenged, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
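
    A minimal sketch of the GA-driven model update described above, with a toy forward operator standing in for the paper's time-domain finite-difference Rayleigh-wave solver (the layer count, bounds, mutation scale, and misfit are illustrative assumptions, not values from the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_layers, pop_size, n_gen = 4, 40, 80
    vs_true = np.array([200.0, 350.0, 500.0, 700.0])     # "true" S-wave velocities (m/s)
    bounds = (100.0, 1000.0)

    def forward(vs):
        """Stand-in forward model: one wave packet per layer, scaled by its velocity."""
        t = np.linspace(0.0, 1.0, 50)
        packet = np.sin(2 * np.pi * 5.0 * t) * np.exp(-((t - 0.5) / 0.2) ** 2)
        return np.concatenate([(v / 1000.0) * packet for v in vs])

    d_obs = forward(vs_true)

    def misfit(vs):
        return np.sum((forward(vs) - d_obs) ** 2)

    pop = rng.uniform(*bounds, size=(pop_size, n_layers))     # initial population
    for gen in range(n_gen):
        fitness = np.array([misfit(m) for m in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # selection (elitist: best half kept)
        sigma = 20.0 * 0.95 ** gen                            # shrinking mutation scale
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_layers) < 0.5, a, b)   # uniform crossover
            child = child + rng.normal(0.0, sigma, n_layers)     # mutation
            children.append(np.clip(child, *bounds))
        pop = np.vstack([parents, children])

    best = pop[np.argmin([misfit(m) for m in pop])]
    print("recovered Vs (m/s):", np.round(best, 1))
    ```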

  2. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy for reducing the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot, significantly reducing the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it generates a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
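
    A minimal sketch of the restart idea, assuming a toy linear forward operator per shot and, as a simplification, one random ±1 encoding per restarted segment (the paper keeps the encoding fixed for the first few iterations of each segment before re-coding):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n_shots, n_params = 32, 50
    A = [rng.standard_normal((80, n_params)) for _ in range(n_shots)]   # one toy operator per shot
    m_true = rng.standard_normal(n_params)
    d = [Ai @ m_true for Ai in A]                                       # per-shot "observed" data

    def encoded_misfit(m, w):
        """Misfit and gradient of one encoded super shot: sum_i w_i * shot_i."""
        A_enc = sum(wi * Ai for wi, Ai in zip(w, A))
        d_enc = sum(wi * di for wi, di in zip(w, d))
        r = A_enc @ m - d_enc
        return 0.5 * r @ r, A_enc.T @ r

    m = np.zeros(n_params)
    n_segments, iters_per_segment = 10, 5
    for seg in range(n_segments):
        w = rng.choice([-1.0, 1.0], size=n_shots)    # re-randomize the encoding for this segment
        # restart L-BFGS: a fresh run (empty curvature memory) for the new encoding
        res = minimize(encoded_misfit, m, args=(w,), jac=True,
                       method="L-BFGS-B", options={"maxiter": iters_per_segment})
        m = res.x

    print("model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
    ```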

  3. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes incur enormous computational cost because of the number of sources in a survey. To avoid this problem, the phase encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembly. Although several studies on simultaneous-source inversion have tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, to estimate the source signature, and to compute the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed over iterations, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is implemented using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated by the simultaneous-source technique. Comparing the inverted results using the pseudo-Hessian matrix with previous inversion results provided by the approximate Hessian matrix, the latter are better than the former for deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
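
    A small sketch of the random phase-encoding step itself, using toy complex frequency-domain data (in the real algorithm the same encoding is applied to the sources, so that one simulation of the super shot replaces one simulation per shot):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_shots, n_rec = 16, 64

    # toy frequency-domain data: one complex receiver vector per individual shot
    d_shot = rng.standard_normal((n_shots, n_rec)) + 1j * rng.standard_normal((n_shots, n_rec))

    # random phase encoding: each shot is multiplied by a unit-modulus factor exp(i*phi)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_shots)
    enc = np.exp(1j * phi)

    # encoded super-shot data; the sources would be encoded the same way
    d_super = enc @ d_shot

    # crosstalk terms involve exp(i*(phi_j - phi_k)) with j != k, whose expectation is
    # zero, so they are suppressed as the encoding is redrawn over the iterations
    print(d_super.shape)
    ```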

  4. Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2017-07-01

    Guided wave tomography is a promising tool for accurately determining the remaining wall thickness of corrosion damage, which is among the major concerns for many industries. The Full Waveform Inversion (FWI) algorithm is an attractive guided wave tomography method, which uses a numerical forward model to predict the waveforms of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures by using simulations as well as experiments. It was shown that this algorithm can obtain a resolution of around 0.7 wavelengths for defects with smooth depth variations from the acoustic modeling data, and about 1.5-2 wavelengths from the elastic modeling data. Further analysis showed that the reconstruction accuracy also depends on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, in contrast to conventional algorithms based on the Born approximation.

  5. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess a very high convergence speed and a good capacity to overcome local minima, and have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very little literature exists addressing this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require an approximation of the Green's function. The method directly interacts with a CPU-parallelized finite difference forward modelling engine and updates the model parameters according to GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has its unique advantages. Keywords: Micro-seismicity, Waveform matching inversion, gravitational search algorithm, parallel computation
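
    A compact sketch of a standard gravitational search algorithm (fitness-derived masses, attraction toward the fittest agents, velocity/position updates) applied to a toy six-parameter source vector (x, y, z, strike, dip, rake); the misfit is a placeholder for the waveform-matching objective, and the GSA constants and bounds are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # toy objective: distance to a hidden (x, y, z, strike, dip, rake) combination,
    # standing in for the waveform-matching misfit computed by a forward solver
    true_params = np.array([500.0, 300.0, 1200.0, 40.0, 60.0, -90.0])
    lo = np.array([0.0, 0.0, 0.0, 0.0, 0.0, -180.0])
    hi = np.array([1000.0, 1000.0, 2000.0, 360.0, 90.0, 180.0])

    def misfit(p):
        return np.linalg.norm((p - true_params) / (hi - lo))

    n_agents, n_iter, G0, alpha, eps = 30, 200, 100.0, 20.0, 1e-12
    X = rng.uniform(lo, hi, size=(n_agents, 6))
    V = np.zeros_like(X)

    for it in range(n_iter):
        fit = np.array([misfit(x) for x in X])
        best, worst = fit.min(), fit.max()
        m = (worst - fit) / (worst - best + eps)          # best agent gets mass ~1, worst ~0
        M = m / (m.sum() + eps)
        G = G0 * np.exp(-alpha * it / n_iter)             # gravitational "constant" decays
        k = max(1, int(n_agents * (1 - it / n_iter)))     # Kbest set shrinks over iterations
        kbest = np.argsort(fit)[:k]
        F = np.zeros_like(X)
        for i in range(n_agents):
            for j in kbest:
                if j == i:
                    continue
                R = np.linalg.norm(X[i] - X[j])
                F[i] += rng.random() * G * M[i] * M[j] * (X[j] - X[i]) / (R + eps)
        a = F / (M[:, None] + eps)                        # acceleration
        V = rng.random((n_agents, 1)) * V + a             # velocity update
        X = np.clip(X + V, lo, hi)                        # position update within bounds

    print("best solution:", np.round(X[np.argmin([misfit(x) for x in X])], 1))
    ```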

  6. Three-dimensional full waveform inversion of short-period teleseismic wavefields based upon the SEM-DSM hybrid method

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi

    2015-08-01

    We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.
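
    A minimal sketch of the multiscale continuation loop described above: the data are low-pass filtered with a decreasing cut-off period and each band is inverted starting from the previous band's result. The forward solver and the scalar "model" below are placeholders for the SEM-DSM hybrid simulation and the L-BFGS/adjoint machinery:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 20.0                                   # sampling rate (Hz)
    t = np.arange(0, 120, 1 / fs)
    rng = np.random.default_rng(4)
    d_obs = rng.standard_normal(t.size)         # placeholder teleseismic trace

    def lowpass(x, cutoff_period_s):
        b, a = butter(4, (1.0 / cutoff_period_s) / (fs / 2), btype="low")
        return filtfilt(b, a, x)

    def forward(model):
        """Placeholder forward solver (would be the SEM-DSM hybrid simulation)."""
        return model * np.ones_like(t)

    model = 0.0                                 # placeholder starting model
    for period in [10.0, 5.0, 2.5, 1.25]:       # decreasing cut-off period
        d_band = lowpass(d_obs, period)
        for it in range(20):                    # a few descent iterations per band
            r = forward(model) - d_band
            grad = r.mean()                     # toy gradient of 0.5*||r||^2 for the scalar model
            model -= 0.5 * grad
        print(f"cut-off {period:5.2f} s -> model {model:+.3f}")
    ```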

  7. Crustal velocity structure of central Gansu Province from regional seismic waveform inversion using firework algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yanyang; Wang, Yanbin; Zhang, Yuansheng

    2017-04-01

    The firework algorithm (FWA) is a novel swarm intelligence-based method recently proposed for the optimization of multi-parameter, nonlinear functions. Numerical waveform inversion experiments using a synthetic model show that the FWA performs well in both solution quality and efficiency. We apply the FWA in this study to crustal velocity structure inversion using regional seismic waveform data of central Gansu on the northeastern margin of the Qinghai-Tibet plateau. Seismograms recorded from the moment magnitude (MW) 5.4 Minxian earthquake enable us to obtain an average crustal velocity model for this region. We initially carried out a series of FWA robustness tests in regional waveform inversion at the same earthquake and station positions across the study region, inverting two velocity structure models, with and without a low-velocity crustal layer; the accuracy of our average inversion results and their standard deviations reveal the advantages of the FWA for the inversion of regional seismic waveforms. We applied the FWA across our study area using three-component waveform data recorded by nine broadband permanent seismic stations with epicentral distances ranging between 146 and 437 km. These inversion results show that the average thickness of the crust in this region is 46.75 km, while the thicknesses of the sedimentary layer, and the upper, middle, and lower crust are 3.15, 15.69, 13.08, and 14.83 km, respectively. Results also show that the P-wave velocities of these layers and the upper mantle are 4.47, 6.07, 6.12, 6.87, and 8.18 km/s, respectively.

  8. Laplace-domain waveform modeling and inversion for the 3D acoustic-elastic coupled media

    NASA Astrophysics Data System (ADS)

    Shin, Jungkyun; Shin, Changsoo; Calandra, Henri

    2016-06-01

    Laplace-domain waveform inversion reconstructs long-wavelength subsurface models by using the zero-frequency component of damped seismic signals. Despite the computational advantages of Laplace-domain waveform inversion over conventional frequency-domain waveform inversion, an acoustic assumption and an iterative matrix solver have been used to invert 3D marine datasets to mitigate the intensive computing cost. In this study, we develop a Laplace-domain waveform modeling and inversion algorithm for 3D acoustic-elastic coupled media by using a parallel sparse direct solver library (MUltifrontal Massively Parallel Solver, MUMPS). We precisely simulate a real marine environment by coupling the 3D acoustic and elastic wave equations with the proper boundary condition at the fluid-solid interface. In addition, we can extract the elastic properties of the Earth below the sea bottom from the recorded acoustic pressure datasets. As a matrix solver, the parallel sparse direct solver is used to factorize the non-symmetric impedance matrix in a distributed memory architecture and rapidly solve the wave field for a number of shots by using the lower and upper matrix factors. Using both synthetic datasets and real datasets obtained by a 3D wide azimuth survey, the long-wavelength component of the P-wave and S-wave velocity models is reconstructed and the proposed modeling and inversion algorithm are verified. A cluster of 80 CPU cores is used for this study.
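
    The core of the Laplace-domain approach is that the "data" are zero-frequency components of damped seismograms, U(s) = ∫ u(t) e^(-st) dt for a few damping constants s; a small numerical sketch (the trace and damping values are illustrative):

    ```python
    import numpy as np

    dt = 0.004
    t = np.arange(0, 8.0, dt)
    # toy seismic trace with two arrivals
    u = np.exp(-((t - 2.0) / 0.1) ** 2) - 0.5 * np.exp(-((t - 3.5) / 0.2) ** 2)

    damping_constants = [2.0, 4.0, 8.0]          # illustrative Laplace damping values (1/s)
    U = {s: np.trapz(u * np.exp(-s * t), dx=dt) for s in damping_constants}
    for s, val in U.items():
        print(f"s = {s:4.1f}  ->  U(s) = {val:+.5f}")
    ```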

  9. Waveform inversion with source encoding for breast sound speed reconstruction in ultrasound computed tomography.

    PubMed

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
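
    A toy sketch of the WISE idea: a fresh random encoding vector is drawn at each iteration, the encoded residual defines a stochastic gradient, and the sound-speed estimate is updated by stochastic gradient descent. The linear operators below stand in for the wave-equation forward model, and the sizes and step schedule are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_src, n_rec, n_pix = 64, 128, 400

    # toy linear measurement model per insonification: d_i = A_i @ c (c = sound-speed image)
    A = rng.standard_normal((n_src, n_rec, n_pix)) / np.sqrt(n_pix)
    c_true = rng.standard_normal(n_pix)
    d = A @ c_true                                           # shape (n_src, n_rec)

    c = np.zeros(n_pix)
    step = 0.5
    for it in range(300):
        w = rng.choice([-1.0, 1.0], size=n_src)              # fresh random encoding vector
        A_enc = np.tensordot(w, A, axes=1)                   # encoded operator, (n_rec, n_pix)
        d_enc = w @ d                                        # encoded measurement
        r = A_enc @ c - d_enc
        c -= step / (it + 1) ** 0.5 * (A_enc.T @ r) / n_src  # stochastic gradient step

    print("relative error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
    ```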

  10. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.

  11. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.

  12. Workflows for Full Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  13. Breast ultrasound computed tomography using waveform inversion with source encoding

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.

  14. Centroid-moment tensor inversions using high-rate GPS waveforms

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2012-10-01

    Displacement time-series recorded by Global Positioning System (GPS) receivers are a new type of near-field waveform observation of the seismic source. We have developed an inversion method which enables the recovery of an earthquake's mechanism and centroid coordinates from such data. Our approach is identical to that of the 'classical' Centroid-Moment Tensor (CMT) algorithm, except that we forward model the seismic wavefield using a method that is amenable to the efficient computation of synthetic GPS seismograms and their partial derivatives. We demonstrate the validity of our approach by calculating CMT solutions using 1 Hz GPS data for two recent earthquakes in Japan. These results are in good agreement with independently determined source models of these events. With wider availability of data, we envisage the CMT algorithm providing a tool for the systematic inversion of GPS waveforms, as is already the case for teleseismic data. Furthermore, this general inversion method could equally be applied to other near-field earthquake observations such as those made using accelerometers.
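
    For a fixed centroid, the moment-tensor part of a CMT-style inversion is linear in the six independent tensor elements, so a least-squares solve against the excitation kernels sketches the core step. The kernels below are random placeholders for the partial derivatives of synthetic GPS seismograms, and the nonlinear centroid search of the full algorithm is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # each column of G is the waveform produced by one moment-tensor element
    n_samples = 600
    G = rng.standard_normal((n_samples, 6))                    # placeholder excitation kernels
    m_true = np.array([1.0, -0.3, -0.7, 0.2, 0.5, -0.1])       # scaled MT elements (illustrative)
    d = G @ m_true + 0.05 * rng.standard_normal(n_samples)     # noisy "observed" waveforms

    m_est, *_ = np.linalg.lstsq(G, d, rcond=None)              # least-squares moment tensor
    print("estimated MT elements:", np.round(m_est, 3))
    ```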

  15. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method for reconstructing physical parameters of the Earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D P-SV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography, for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed with METIS, allows most of the inversion to be carried out in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.

  16. The Effect of Flow Velocity on Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Lee, D.; Shin, S.; Chung, W.; Ha, J.; Lim, Y.; Kim, S.

    2017-12-01

    Waveform inversion is a velocity modeling technique that reconstructs accurate subsurface physical properties, such that data generated from the final updated model reproduce the recorded data. Flow velocity, like several other factors, affects observed data in seismic exploration. Despite this, there is insufficient research on its relationship with waveform inversion. In this study, synthetic data generated with flow velocity taken into account were used in waveform inversion, and the influence of flow velocity on the inversion was analyzed. Measuring the flow velocity generally requires additional equipment. However, for situations where only seismic data are available, the flow velocity was calculated by a fixed-point iteration method using the direct wave in the observed data. Further, a new waveform inversion was proposed that can incorporate the calculated flow velocity. We used a wave equation that accounts for flow velocity, as in the study by Käser and Dumbser. Further, we enhanced the efficiency of the computation by applying the back-propagation method. To verify the proposed algorithm, six different data sets were generated using the Marmousi2 model; each of these data sets used a different flow velocity in the range 0-50, i.e., 0, 2, 5, 10, 25, and 50. Thereafter, the inversion results from these data sets, along with the results obtained without the use of flow velocity, were compared and analyzed. In this study, we analyzed the results of waveform inversion after flow velocity had been factored in. It was demonstrated that the waveform inversion is not affected significantly when the flow velocity is small. However, when the flow velocity is large, factoring it into the waveform inversion produces superior results. This research was supported by the Basic Research Project (17-3312, 17-3313) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.

  17. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  18. Joint Inversion of Source Location and Source Mechanism of Induced Microseismics

    NASA Astrophysics Data System (ADS)

    Liang, C.

    2014-12-01

    The seismic source mechanism is a useful property for indicating source physics and the stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid-fracturing treatments in the oil and gas industry. For events that are large enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-polarity inversion and many other methods and variants based on them. However, for events that are too small to identify in seismic traces, such as the microseismic events induced by fluid fracturing in the oil and gas industry, a source scanning algorithm (SSA) with waveform stacking is usually applied. At the same time, a joint inversion of location and source mechanism is possible, but at the cost of a high computational budget. The algorithm is thereby called the Source Location and Mechanism Scanning Algorithm (SLMSA). In this case, for a given velocity structure, all possible combinations of source locations (X, Y and Z) and source mechanisms (strike, dip and rake) are used to compute travel times and waveform polarities. Correcting normal-moveout times and polarities, and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical data sets are used to test the algorithm. The SLMSA has also been applied to a fluid-fracturing data set and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly locate them because positive and negative polarized traces cancel out, but the SLMSA method can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures, and the statistics of source mechanisms can certainly provide more knowledge of the orientation of fractures; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for those events that are also picked by the SSA method, the stacking power of the SLMSA is always higher than that obtained with the SSA.
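
    A toy brute-force scan in the spirit of the SLMSA: for every candidate location and mechanism angle, the traces are polarity-corrected, sampled at the predicted arrival times and stacked, and the combination with the largest stack is kept. A single angle with a four-lobed toy radiation pattern stands in for strike/dip/rake, and the grid, medium and wavelet are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    v, dt, nt = 3000.0, 0.001, 600                 # velocity (m/s), sample interval (s), samples
    t = np.arange(nt) * dt
    xr = np.linspace(0.0, 2000.0, 24)              # receiver positions at the surface (m)

    def radiation(theta, x, z):
        """Toy polarity pattern standing in for the DC radiation pattern."""
        az = np.arctan2(xr - x, z)
        return np.sign(np.cos(2 * (az - theta)) + 1e-9)

    def wavelet(t0):
        return np.exp(-((t - t0) / 0.01) ** 2)     # narrow Gaussian pulse

    # synthesize "observed" traces for a hidden source (location + mechanism angle)
    x0, z0, th0 = 1250.0, 600.0, np.deg2rad(35.0)
    tt0 = np.hypot(xr - x0, z0) / v
    data = np.array([p * wavelet(ti) for p, ti in zip(radiation(th0, x0, z0), tt0)])
    data += 0.2 * rng.standard_normal(data.shape)

    # scan all (x, z, theta) combinations; keep the largest polarity-corrected stack
    best = (-np.inf, None)
    for x in np.linspace(0, 2000, 41):
        for z in np.linspace(100, 1000, 19):
            tt = np.hypot(xr - x, z) / v
            idx = np.clip(np.round(tt / dt).astype(int), 0, nt - 1)
            samples = data[np.arange(len(xr)), idx]
            for theta in np.deg2rad(np.arange(0, 180, 5)):
                stack = np.sum(radiation(theta, x, z) * samples)   # polarity correction then stack
                if stack > best[0]:
                    best = (stack, (x, z, np.rad2deg(theta)))

    print("best (x, z, theta):", best[1])
    ```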

  19. A multi-frequency receiver function inversion approach for crustal velocity structure

    NASA Astrophysics Data System (ADS)

    Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian

    2017-05-01

    In order to better constrain crustal velocity structures, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimization algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms show advantages in recovering small-scale velocity structures. Based on synthetic tests with multi-frequency receiver function waveforms, the proposed approach can constrain both the long- and short-wavelength characteristics of the crustal velocity structures simultaneously. Inversions with real data are also conducted for the seismic stations KMNB in southeast China and HYB in the Indian continent, where crustal structures have been well studied by previous researchers. Comparisons of the velocity models inverted in previous studies and in ours suggest good consistency, but a better waveform fit with fewer model parameters is achieved by our proposed approach. Comprehensive tests with synthetic and real data suggest that the proposed inversion approach with multi-frequency receiver functions is effective and robust in inverting crustal velocity structures.
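
    A small sketch of the multi-frequency idea using SciPy's Differential Evolution: the objective jointly fits low- and higher-frequency filtered versions of a toy two-arrival "receiver function" whose delay and amplitude depend on crustal thickness and Vs. The forward model, frequency bands and bounds are illustrative assumptions, not the paper's:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from scipy.signal import butter, filtfilt

    fs = 20.0
    t = np.arange(0, 30, 1 / fs)

    def synth(params):
        """Toy 'receiver function': a direct pulse and a converted arrival."""
        thickness, vs = params                    # crustal thickness (km), Vs (km/s)
        delay = 2.0 * thickness / vs              # toy delay controlled by thickness/Vs
        amp = vs / 9.0                            # toy amplitude controlled by Vs
        return (np.exp(-((t - 1.0) ** 2) / 0.02)
                + amp * np.exp(-((t - 1.0 - delay) ** 2) / 0.02))

    def lowpass(x, f_hi):
        b, a = butter(4, f_hi / (fs / 2), btype="low")
        return filtfilt(b, a, x)

    obs = synth((32.0, 3.6))                      # hidden "true" model

    def misfit(params):
        # joint misfit over three frequency bands: low bands constrain the long
        # wavelengths, higher bands the finer structure
        return sum(np.sum((lowpass(synth(params), f) - lowpass(obs, f)) ** 2)
                   for f in (0.5, 1.0, 2.0))

    result = differential_evolution(misfit, bounds=[(20.0, 60.0), (3.0, 4.5)], seed=0)
    print("recovered (thickness, Vs):", np.round(result.x, 2))
    ```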

  20. Lane marking detection based on waveform analysis and CNN

    NASA Astrophysics Data System (ADS)

    Ye, Yang Yang; Chen, Hou Jin; Hao, Xiao Li

    2017-06-01

    Lane marking detection is a very important part of advanced driver assistance systems (ADAS) for avoiding traffic accidents. In order to obtain accurate lane markings, in this work, a novel and efficient algorithm is proposed, which analyses the waveform generated from the road image after inverse perspective mapping (IPM). The algorithm includes two main stages: the first stage uses image preprocessing, including a CNN, to reduce the background and enhance the lane markings. The second stage obtains the waveform of the road image and analyzes the waveform to extract the lanes. The contribution of this work is that we introduce local and global features of the waveform to detect the lane markings. The results indicate that the proposed method is robust in detecting and fitting the lane markings.

  1. Anisotropic microseismic focal mechanism inversion by waveform imaging matching

    NASA Astrophysics Data System (ADS)

    Wang, L.; Chang, X.; Wang, Y.; Xue, Z.

    2016-12-01

    The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes that the source is of Double Couple (DC) type and that the medium is isotropic, which is usually not the case for induced-seismicity focal mechanism inversion. For induced seismic events, an inappropriate source or medium model in the inversion processing introduces ambiguity or strong simulation errors and seriously reduces the effectiveness of the inversion. First, the focal mechanism contains a significant non-DC source component. In general, the source contains three components: DC, isotropic (ISO) and the compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the medium affects travel times and waveforms and generates inversion bias. The common way to describe focal mechanism inversion is based on moment tensor (MT) inversion, in which the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wave-field migration method is applied to achieve moment tensor imaging. This method can construct imaging of the MT elements in 3D space without picking first arrivals, but the retrieved MT values are influenced by the imaging resolution. Alternatively, full waveform inversion is employed to retrieve the MT. In this method, the source position and MT can be reconstructed simultaneously. However, this method requires extensive numerical computation. Moreover, the source position and MT also influence each other in the inversion process. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transversely isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also use a microseismic data set recorded in a surface acquisition to test our method.

  2. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lamé parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing the experimental results are given.

  3. Full waveform inversion using a decomposed single frequency component from a spectrogram

    NASA Astrophysics Data System (ADS)

    Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon

    2018-06-01

    Many full waveform inversion methods have been developed to construct velocity models of the subsurface, and various approaches have been presented for obtaining inversion results with long-wavelength features even when the seismic data lack low-frequency components. In this study, a new full waveform inversion algorithm is proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of the seismic data, using a single frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the signal decomposed from the spectrogram was used as the transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis of the spectrogram, numerical tests with several decomposed frequency components were performed using a modified SEG/EAGE salt dome (A-A′) line to demonstrate the feasibility of the proposed inversion algorithm. This demonstrated that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that, when strong noise occurs in part of the frequency band, it is feasible to obtain a long-wavelength velocity model from the noisy data using a frequency component that is less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.
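
    A small sketch of the decomposition step: build a spectrogram of a trace and extract a single frequency component as the transformed data. The paper uses a wavelet-transform spectrogram; a short-time Fourier transform is used here purely as a stand-in, and the trace and target frequency are illustrative:

    ```python
    import numpy as np
    from scipy.signal import stft, chirp

    fs = 500.0
    t = np.arange(0, 4.0, 1 / fs)
    trace = chirp(t, f0=5.0, f1=60.0, t1=4.0) * np.exp(-0.5 * t)   # toy seismic trace

    # spectrogram (STFT magnitude over time-frequency bins)
    f, tau, Z = stft(trace, fs=fs, nperseg=256)

    target = 10.0                                  # decompose the ~10 Hz component
    k = np.argmin(np.abs(f - target))
    component = np.abs(Z[k, :])                    # time history of that single frequency

    print(f"selected bin: {f[k]:.2f} Hz, {component.size} time frames")
    ```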

  4. Digital Oblique Remote Ionospheric Sensing (DORIS) Program Development

    DTIC Science & Technology

    1992-04-01

    A new autoscaling technique for oblique ionograms ... with the ARTIST software (Reinisch and Huang, 1983; Gamache et al., 1985), which is ... The development and performance of a complete oblique ionogram autoscaling and inversion algorithm is presented. The inversion algorithm uses a three ... OTH radar. Subject terms: Oblique Propagation; Oblique Ionogram Autoscaling; Electron Density Profile Inversion; Simulated ...

  5. Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; Huang, Lianjie

    2015-01-28

    Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.

  6. Teleseismic tomography for imaging Earth's upper mantle

    NASA Astrophysics Data System (ADS)

    Aktas, Kadircan

    Teleseismic tomography is an important imaging tool in earthquake seismology, used to characterize lithospheric structure beneath a region of interest. In this study I investigate three different tomographic techniques applied to real and synthetic teleseismic data, with the aim of imaging the velocity structure of the upper mantle. First, by applying well established traveltime tomographic techniques to teleseismic data from southern Ontario, I obtained high-resolution images of the upper mantle beneath the lower Great Lakes. Two salient features of the 3D models are: (1) a patchy, NNW-trending low-velocity region, and (2) a linear, NE-striking high-velocity anomaly. I interpret the high-velocity anomaly as a possible relict slab associated with ca. 1.25 Ga subduction, whereas the low-velocity anomaly is interpreted as a zone of alteration and metasomatism associated with the ascent of magmas that produced the Late Cretaceous Monteregian plutons. The next part of the thesis is concerned with adaptation of existing full-waveform tomographic techniques for application to teleseismic body-wave observations. The method used here is intended to be complementary to traveltime tomography, and to take advantage of efficient frequency-domain methodologies that have been developed for inverting large controlled-source datasets. Existing full-waveform acoustic modelling and inversion codes have been modified to handle plane waves impinging from the base of the lithospheric model at a known incidence angle. A processing protocol has been developed to prepare teleseismic observations for the inversion algorithm. To assess the validity of the acoustic approximation, the processing procedure and modelling-inversion algorithm were tested using synthetic seismograms computed using an elastic Kirchhoff integral method. These tests were performed to evaluate the ability of the frequency-domain full-waveform inversion algorithm to recover topographic variations of the Moho under a variety of realistic scenarios. Results show that frequency-domain full-waveform tomography is generally successful in recovering both sharp and discontinuous features. Thirdly, I developed a new method for creating an initial background velocity model for the inversion algorithm, which is sufficiently close to the true model so that convergence is likely to be achieved. I adapted a method named Deformable Layer Tomography (DLT), which adjusts interfaces between layers rather than velocities within cells. I applied this method to a simple model comprising a single uniform crustal layer and a constant-velocity mantle, separated by an irregular Moho interface. A series of tests was performed to evaluate the sensitivity of the DLT algorithm; the results show that my algorithm produces useful results within a realistic range of incident-wave obliquity, incidence angle and signal-to-noise level. Keywords. Teleseismic tomography, full waveform tomography, deformable layer tomography, lower Great Lakes, crust and upper mantle.

  7. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from the computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms that aim to obtain long-wavelength velocity models can avoid both the local minima problem and the effect of missing low-frequency components in seismic data. In this study, we proposed spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from the spectrograms of the observed and calculated traces are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features of the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A′ line. Numerical results demonstrate that spectrogram inversion can also recover the long-wavelength velocity features. However, the inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.

  8. The shift-invariant discrete wavelet transform and application to speech waveform analysis.

    PubMed

    Enders, Jörg; Geng, Weihua; Li, Peijun; Frazier, Michael W; Scholl, David J

    2005-04-01

    The discrete wavelet transform may be used as a signal-processing tool for visualization and analysis of nonstationary, time-sampled waveforms. The highly desirable property of shift invariance can be obtained at the cost of a moderate increase in computational complexity, and accepting a least-squares inverse (pseudoinverse) in place of a true inverse. A new algorithm for the pseudoinverse of the shift-invariant transform that is easier to implement in array-oriented scripting languages than existing algorithms is presented together with self-contained proofs. Representing only one of the many and varied potential applications, a recorded speech waveform illustrates the benefits of shift invariance with pseudoinvertibility. Visualization shows the glottal modulation of vowel formants and frication noise, revealing secondary glottal pulses and other waveform irregularities. Additionally, performing sound waveform editing operations (i.e., cutting and pasting sections) on the shift-invariant wavelet representation automatically produces quiet, click-free section boundaries in the resulting sound. The capabilities of this wavelet-domain editing technique are demonstrated by changing the rate of a recorded spoken word. Individual pitch periods are repeated to obtain a half-speed result, and alternate individual pitch periods are removed to obtain a double-speed result. The original pitch and formant frequencies are preserved. In informal listening tests, the results are clear and understandable.
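
    A numpy-only sketch of the underlying idea at a single level: an undecimated (shift-invariant) Haar transform is redundant, and its least-squares inverse averages the two consistent reconstructions of each sample. The published algorithm is multi-level and wavelet-general; this is only the minimal illustration:

    ```python
    import numpy as np

    def si_haar(x):
        """Level-1 undecimated (shift-invariant) Haar transform, circular boundaries."""
        xs = np.roll(x, -1)                  # circular neighbor
        a = (x + xs) / 2.0                   # approximation coefficients
        d = (x - xs) / 2.0                   # detail coefficients
        return a, d

    def si_haar_pinv(a, d):
        """Least-squares (pseudo)inverse of the redundant transform."""
        # each sample is reconstructed twice (from its own pair and the previous one);
        # the pseudoinverse averages the two estimates
        x_from_here = a + d
        x_from_prev = np.roll(a, 1) - np.roll(d, 1)
        return (x_from_here + x_from_prev) / 2.0

    x = np.sin(np.linspace(0, 6 * np.pi, 64)) + 0.1 * np.random.default_rng(8).standard_normal(64)
    a, d = si_haar(x)
    x_rec = si_haar_pinv(a, d)
    print("max reconstruction error:", np.abs(x_rec - x).max())
    ```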

  9. The shift-invariant discrete wavelet transform and application to speech waveform analysis

    NASA Astrophysics Data System (ADS)

    Enders, Jörg; Geng, Weihua; Li, Peijun; Frazier, Michael W.; Scholl, David J.

    2005-04-01

    The discrete wavelet transform may be used as a signal-processing tool for visualization and analysis of nonstationary, time-sampled waveforms. The highly desirable property of shift invariance can be obtained at the cost of a moderate increase in computational complexity, and accepting a least-squares inverse (pseudoinverse) in place of a true inverse. A new algorithm for the pseudoinverse of the shift-invariant transform that is easier to implement in array-oriented scripting languages than existing algorithms is presented together with self-contained proofs. Representing only one of the many and varied potential applications, a recorded speech waveform illustrates the benefits of shift invariance with pseudoinvertibility. Visualization shows the glottal modulation of vowel formants and frication noise, revealing secondary glottal pulses and other waveform irregularities. Additionally, performing sound waveform editing operations (i.e., cutting and pasting sections) on the shift-invariant wavelet representation automatically produces quiet, click-free section boundaries in the resulting sound. The capabilities of this wavelet-domain editing technique are demonstrated by changing the rate of a recorded spoken word. Individual pitch periods are repeated to obtain a half-speed result, and alternate individual pitch periods are removed to obtain a double-speed result. The original pitch and formant frequencies are preserved. In informal listening tests, the results are clear and understandable. .

  10. Inversion of ocean-bottom seismometer (OBS) waveforms for oceanic crust structure: a synthetic study

    NASA Astrophysics Data System (ADS)

    Li, Xueyan; Wang, Yanbin; Chen, Yongshun John

    2016-08-01

    The waveform inversion method is applied—using synthetic ocean-bottom seismometer (OBS) data—to study oceanic crust structure. A niching genetic algorithm (NGA) is used to implement the inversion for the thickness and P-wave velocity of each layer, and to update the model by minimizing the objective function, which consists of the misfit and cross-correlation of observed and synthetic waveforms. The influence of specific NGA method parameters is discussed, and suitable values are presented. The NGA method works well for various observation systems, such as those with irregular and sparse distribution of receivers as well as single receiver systems. A strategy is proposed to accelerate the convergence rate by a factor of five with no increase in computational complexity; this is achieved using a first inversion with several generations to impose a restriction on the preset range of each parameter and then conducting a second inversion with the new range. Despite the successes of this method, its usage is limited. A shallow water layer is not favored because the direct wave in water will suppress the useful reflection signals from the crust. A more precise calculation of the air-gun source signal should be considered in order to better simulate waveforms generated in realistic situations; further studies are required to investigate this issue.

  11. Visco-elastic controlled-source full waveform inversion without surface waves

    NASA Astrophysics Data System (ADS)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
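
    The on-the-fly Fourier transform can be sketched as a running sum accumulated during time stepping, so only the chosen frequencies are stored rather than the full time history. The wavefield snapshot below is a synthetic placeholder for the FDTD solver output:

    ```python
    import numpy as np

    dt, nt = 0.001, 4000
    freqs = np.array([5.0, 10.0, 20.0])                 # frequencies kept for the inversion
    omega = 2.0 * np.pi * freqs

    n_points = 500                                      # toy set of model-space sample points
    U = np.zeros((len(freqs), n_points), dtype=complex)

    rng = np.random.default_rng(9)
    phase = rng.uniform(0, 2 * np.pi, n_points)

    for it in range(nt):
        t = it * dt
        # placeholder time-domain wavefield snapshot (would come from the FDTD solver)
        u_t = np.cos(2 * np.pi * 10.0 * t + phase) * np.exp(-0.5 * t)
        U += np.exp(-1j * omega[:, None] * t) * u_t[None, :] * dt   # running Fourier sums

    print("spectral amplitude at 10 Hz vs 5 Hz:",
          np.abs(U[1]).mean(), "vs", np.abs(U[0]).mean())
    ```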

  12. Full waveform inversion in the frequency domain using classified time-domain residual wavefields

    NASA Astrophysics Data System (ADS)

    Son, Woohyun; Koo, Nam-Hyung; Kim, Byoung-Yeop; Lee, Ho-Young; Joo, Yonghwan

    2017-04-01

    We perform acoustic full waveform inversion in the frequency domain using residual wavefields that have been separated in the time domain. We sort the residual wavefields in the time domain according to the order of their absolute amplitudes. The residual wavefields are then separated into several groups in the time domain. To analyze the characteristics of the residual wavefields, we compare the residual wavefields of the conventional method with those of our residual separation method. From the residual analysis, the amplitude spectrum obtained from the trace before separation appears to have little energy in the lower frequency bands. However, the amplitude spectrum obtained with our strategy is regularized by the separation process, which means that the low-frequency components are emphasized. Therefore, our method helps to emphasize the low-frequency components of the residual wavefields. We then generate the frequency-domain residual wavefields by taking the Fourier transform of the separated time-domain residual wavefields. With these wavefields, we perform gradient-based full waveform inversion in the frequency domain using the back-propagation technique. Through a comparison of gradient directions, we confirm that our separation method can better describe the sub-salt image than the conventional approach. The proposed method is tested on the SEG/EAGE salt-dome model. The inversion results show that our algorithm performs better than conventional gradient-based waveform inversion in the frequency domain, especially for the deeper parts of the velocity model.
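
    A minimal sketch of the classification step: rank the samples of a time-domain residual trace by absolute amplitude, split them into groups, and Fourier-transform each group separately. The trace and the number of groups are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    dt, nt = 0.004, 1000
    t = np.arange(nt) * dt
    # toy time-domain residual trace
    residual = np.exp(-((t - 1.5) / 0.05) ** 2) - 0.6 * np.exp(-((t - 2.4) / 0.1) ** 2)
    residual += 0.05 * rng.standard_normal(nt)

    n_groups = 3
    order = np.argsort(np.abs(residual))[::-1]          # indices from largest to smallest |amplitude|
    groups = np.array_split(order, n_groups)

    for g, idx in enumerate(groups):
        part = np.zeros(nt)
        part[idx] = residual[idx]                       # keep only this group's samples
        spectrum = np.fft.rfft(part)                    # frequency-domain residual of the group
        print(f"group {g}: {idx.size} samples, "
              f"low-frequency energy {np.abs(spectrum[:10]).sum():.2f}")
    ```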

  13. A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves

    NASA Astrophysics Data System (ADS)

    Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.

    2017-12-01

    This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework that uses Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by the PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples show that the developed Gauss-Newton inversion method is sufficiently accurate and more efficient than an alternative inversion method. The algorithm's performance is demonstrated by numerical examples, including cases with noisy measured responses and with a reduced number of sources and receivers.

  14. Medium change based image estimation from application of inverse algorithms to coda wave measurements

    NASA Astrophysics Data System (ADS)

    Zhan, Hanyu; Jiang, Hanwan; Jiang, Ruinian

    2018-03-01

    Perturbations that act as extra scatterers cause coda waveform distortions; because coda waves have long propagation times and travel paths, they are sensitive to micro-defects in strongly heterogeneous media such as concrete. In this paper, we apply varied external loads to a life-size concrete slab containing multiple pre-existing micro-cracks, with several sources and receivers installed to collect coda wave signals. The waveform decorrelation coefficients (DC) at different loads are calculated for all available source-receiver pairs. Inversion of the DC results then estimates the associated distribution density values in a three-dimensional region using a kernel sensitivity model and least-squares algorithms, yielding images that indicate the positions of the micro-cracks. This work provides an efficient, non-destructive approach to detecting internal defects and damage in large concrete structures.
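
    One common definition of the waveform decorrelation coefficient over a coda window is DC = 1 - CC, with CC the normalized zero-lag cross-correlation; the sketch below uses that definition as an assumption, since the abstract does not give the exact formula.

    ```python
    import numpy as np

    def decorrelation(u_ref, u_per, i0, i1):
        """Decorrelation coefficient DC = 1 - CC over a coda window [i0, i1)."""
        a, b = u_ref[i0:i1], u_per[i0:i1]
        cc = np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
        return 1.0 - cc

    # Example with a synthetic coda pair (the perturbed trace is slightly distorted).
    rng = np.random.default_rng(1)
    coda = rng.standard_normal(4000) * np.exp(-np.arange(4000) / 1500.0)
    perturbed = coda + 0.05 * rng.standard_normal(4000)
    dc = decorrelation(coda, perturbed, 1000, 3000)
    ```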

  15. Full-waveform inversion of GPR data for civil engineering applications

    NASA Astrophysics Data System (ADS)

    van der Kruk, Jan; Kalogeropoulos, Alexis; Hugenschmidt, Johannes; Klotzsche, Anja; Busch, Sebastian; Vereecken, Harry

    2014-05-01

    Conventional GPR ray-based techniques are often limited in their capability to image complex structures due to the underlying approximations. With increased computational power, it is becoming easier to use modeling and inversion tools that explicitly take into account the detailed electromagnetic wave propagation characteristics. In this way, new civil engineering application avenues are opening up that enable improved high-resolution imaging of quantitative medium properties. In this contribution, we show recent developments that enable the full-waveform inversion of off-ground, on-ground and crosshole GPR data. For a successful inversion, a proper starting model must be used that generates synthetic data matching the measured data to within half a wavelength. In addition, the GPR system must be calibrated such that an effective wavelet is obtained that encompasses the complexity of the GPR source and receiver antennas. Simple geometries such as horizontal layers can be described with a limited number of model parameters, which enables a combined global and local search using the Simplex search algorithm. This approach has been implemented for the full-waveform inversion of off-ground and on-ground GPR data measured over horizontally layered media. In this way, an accurate 3D frequency-domain forward model of Maxwell's equations can be used in which the integral representation of the electric field is evaluated numerically. The full-waveform inversion (FWI) for a large number of unknowns uses gradient-based optimization methods, where a 3D-to-2D conversion is used to apply the method to experimental data. Off-ground GPR data measured over homogeneous concrete specimens were inverted using full-waveform inversion. In contrast to traditional ray-based techniques, we were able to obtain quantitative values for the permittivity and conductivity, and in this way distinguish between moisture and chloride effects. For increasing chloride content, increasing frequency-dependent conductivity values were obtained. The off-ground full-waveform inversion was extended to invert for positive and negative gradients in conductivity, and the conductivity gradient direction could be correctly identified. Experimental specimens containing gradients were generated by exposing a concrete slab to controlled wetting-drying cycles using a saline solution. Full-waveform inversion of the measured data correctly identified the conductivity gradient direction, which was confirmed by destructive analysis. On-ground CMP GPR data measured over a concrete layer overlying a metal plate show interfering multiple reflections, indicating that the structure acts as a waveguide. Calculation of the phase-velocity spectrum shows the presence of several higher-order modes. Whereas the dispersion inversion returns the thickness and layer height, the full-waveform inversion was also able to estimate quantitative conductivity values. This abstract is a contribution to COST Action TU1208.
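
    For the layered case with few parameters, the combined global-plus-Simplex search can be sketched as below; the one-layer Ricker-reflection forward model, parameter ranges and cost function are toy stand-ins for the calibrated GPR forward model described above.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    c0 = 0.3          # speed of light in vacuum, m/ns

    def ricker(t, fc=0.9):                        # fc in GHz, t in ns
        a = (np.pi * fc * (t - 1.5 / fc)) ** 2
        return (1 - 2 * a) * np.exp(-a)

    def forward(params, t):
        """Toy trace: reflection from the bottom of one layer (eps_r, thickness m)."""
        eps_r, h = params
        t0 = 2 * h * np.sqrt(eps_r) / c0          # two-way travel time in ns
        return ricker(t - t0)

    t = np.linspace(0, 40, 800)
    observed = forward([6.0, 0.4], t)             # pretend field data

    def cost(p):
        return np.sum((forward(p, t) - observed) ** 2)

    # Global stage: coarse grid over permittivity and thickness.
    grid = [(e, h) for e in np.linspace(3, 12, 10) for h in np.linspace(0.1, 1.0, 10)]
    start = min(grid, key=cost)

    # Local stage: Simplex (Nelder-Mead) refinement from the best grid node.
    result = minimize(cost, start, method="Nelder-Mead")
    print(result.x)
    ```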

  16. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanisms, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In such cases, a moderate earthquake of ~M6 is usually recorded by only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be substantially improved.

  17. Full seismic waveform tomography for upper-mantle structure in the Australasian region using adjoint methods

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; Kennett, Brian L. N.; Igel, Heiner; Bunge, Hans-Peter

    2009-12-01

    We present a full seismic waveform tomography for upper-mantle structure in the Australasian region. Our method is based on spectral-element simulations of seismic wave propagation in 3-D heterogeneous earth models. The accurate solution of the forward problem ensures that waveform misfits are solely due to as yet undiscovered Earth structure and imprecise source descriptions, thus leading to more realistic tomographic images and source parameter estimates. To reduce the computational costs, we implement a long-wavelength equivalent crustal model. We quantify differences between the observed and the synthetic waveforms using time-frequency (TF) misfits. Their principal advantages are the separation of phase and amplitude misfits, the exploitation of complete waveform information and a quasi-linear relation to 3-D Earth structure. Fréchet kernels for the TF misfits are computed via the adjoint method. We propose a simple data compression scheme and an accuracy-adaptive time integration of the wavefields that allows us to reduce the storage requirements of the adjoint method by almost two orders of magnitude. To minimize the waveform phase misfit, we implement a pre-conditioned conjugate gradient algorithm. Amplitude information is incorporated indirectly by a restricted line search. This ensures that the cumulative envelope misfit does not increase during the inversion. An efficient pre-conditioner is found empirically through numerical experiments. It prevents the concentration of structural heterogeneity near the sources and receivers. We apply our waveform tomographic method to ~1000 high-quality vertical-component seismograms, recorded in the Australasian region between 1993 and 2008. The waveforms comprise fundamental- and higher-mode surface and long-period S body waves in the period range from 50 to 200 s. To improve the convergence of the algorithm, we implement a 3-D initial model that contains the long-wavelength features of the Australasian region. Resolution tests indicate that our algorithm converges after around 10 iterations and that both long- and short-wavelength features in the uppermost mantle are well resolved. There is evidence for effects related to the non-linearity in the inversion procedure. After 11 iterations we fit the data waveforms acceptably well, with no significant further improvements to be expected. During the inversion the total fitted seismogram length increases by 46 per cent, providing a clear indication of the efficiency and consistency of the iterative optimization algorithm. The resulting SV-wave velocity model reveals structural features of the Australasian upper mantle with great detail. We confirm the existence of a pronounced low-velocity band along the eastern margin of the continent that can be clearly distinguished against Precambrian Australia and the microcontinental Lord Howe Rise. The transition from Precambrian to Phanerozoic Australia (the Tasman Line) appears to be sharp down to at least 200 km depth. It mostly occurs further east of where it is inferred from gravity and magnetic anomalies. Also clearly visible are the Archean and Proterozoic cratons, the northward continuation of the continent and anomalously low S-wave velocities in the upper mantle in central Australia. This is, to the best of our knowledge, the first application of non-linear full seismic waveform tomography to a continental-scale problem.
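
    A rough sketch of a time-frequency misfit split into envelope (amplitude) and phase parts is given below, using an STFT as a stand-in for the Gabor transform; the weighting and norms in the actual study may differ.

    ```python
    import numpy as np
    from scipy.signal import stft

    def tf_misfits(obs, syn, fs):
        """Envelope and phase misfits on a time-frequency grid (STFT-based sketch)."""
        _, _, O = stft(obs, fs=fs, nperseg=256)
        _, _, S = stft(syn, fs=fs, nperseg=256)
        w = np.abs(O) / (np.abs(O).max() + 1e-12)          # weight by observed energy
        env_misfit = np.sqrt(np.sum((np.abs(S) - np.abs(O)) ** 2))
        dphi = np.angle(S * np.conj(O))                    # phase difference in (-pi, pi]
        phase_misfit = np.sqrt(np.sum((w * dphi) ** 2))
        return env_misfit, phase_misfit

    fs = 1.0          # 1 Hz sampling, i.e. long-period seismograms
    t = np.arange(0, 3600, 1.0 / fs)
    obs = np.sin(2 * np.pi * 0.01 * t)
    syn = np.sin(2 * np.pi * 0.01 * (t - 5.0))             # 5 s phase-shifted synthetic
    print(tf_misfits(obs, syn, fs))
    ```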

  18. Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.

    PubMed

    Ménigot, Sébastien; Girault, Jean-Marc

    2016-09-01

    The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to find automatically the optimal "imaging" wave that maximizes the contrast resolution without a priori information. To avoid assumptions regarding the waveform, a genetic algorithm explored the medium through the transmission of stochastic "explorer" waves. Moreover, these stochastic signals could be constrained by the type of generator available (bipolar or arbitrary). To implement the method, we modified the current pulse inversion imaging system by including feedback, so that the contrast resolution is optimized by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best transmitted stochastic commands found against the usual fixed-frequency command. The optimization method converged quickly, after around 300 iterations, to the same optimal area. These results were confirmed experimentally. In the experimental case, the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator, and it could increase by a further 15% with an arbitrary waveform generator.

  19. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  20. On decomposing stimulus and response waveforms in event-related potentials recordings.

    PubMed

    Yin, Gang; Zhang, Jun

    2011-06-01

    Event-related potentials (ERPs) reflect the brain activities related to specific behavioral events, and are obtained by averaging across many trial repetitions with individual trials aligned to the onset of a specific event, e.g., the onset of the stimulus (s-aligned) or the onset of the behavioral response (r-aligned). However, the s-aligned and r-aligned ERP waveforms do not purely reflect, respectively, the underlying stimulus (S-) or response (R-) component waveforms, due to their cross-contamination in the recorded ERP waveforms. Zhang [J. Neurosci. Methods, 80, pp. 49-63, 1998] proposed an algorithm to recover the pure S-component and R-component waveforms from the s-aligned and r-aligned ERP average waveforms. However, due to the nature of this inverse problem, a direct solution is sensitive to noise that disproportionately affects low-frequency components, hindering the practical implementation of this algorithm. Here, we apply the Wiener deconvolution technique to deal with noise in the input data, and investigate a Tikhonov regularization approach to obtain a stable solution that is robust against variance in the sampling of the reaction-time distribution (when the number of trials is low). Our method is demonstrated using data from a Go/NoGo experiment on image classification and recognition.
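
    A generic frequency-domain Wiener deconvolution with a Tikhonov-style stabilizer, of the kind referred to above, might look like the following sketch; the smearing kernel and regularization constant are illustrative, not the paper's exact S/R decomposition scheme.

    ```python
    import numpy as np

    def wiener_deconvolve(y, h, lam=1e-2):
        """Recover x from y = h * x + noise via frequency-domain Wiener deconvolution.

        lam plays the role of a noise-to-signal (Tikhonov-like) stabilizer that
        keeps low-amplitude spectral components from blowing up.
        """
        n = len(y)
        Y = np.fft.rfft(y, n)
        H = np.fft.rfft(h, n)
        X = Y * np.conj(H) / (np.abs(H) ** 2 + lam)
        return np.fft.irfft(X, n)

    # Example: a component waveform blurred by a reaction-time-like distribution.
    rng = np.random.default_rng(2)
    n = 1024
    x = np.exp(-((np.arange(n) - 300) / 40.0) ** 2)        # "pure" component waveform
    h = np.zeros(n); h[:120] = np.hanning(240)[120:]       # smearing kernel (RT-like)
    y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n) + 0.01 * rng.standard_normal(n)
    x_rec = wiener_deconvolve(y, h)
    ```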

  1. Retrieving rupture history using waveform inversions in time sequence

    NASA Astrophysics Data System (ADS)

    Yi, L.; Xu, C.; Zhang, X.

    2017-12-01

    The rupture history of large earthquakes is generally retrieved by waveform inversion of seismological records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized with the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green's function. According to the superposition principle, the forward waveforms generated over the fault plane sum, after alignment of arrival times, to the recorded waveforms, and the slip history is retrieved by inverting the superposition of all forward waveforms against each corresponding seismological record. Besides the isolation of the forward waveforms generated by each sub-fault, we also note that these waveforms are superimposed on the recorded waveforms gradually and sequentially. We therefore propose the idea that the rupture model may be separable into sequential rupture times. Following the constrained waveform length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time, and one essential prior condition is a predetermined fault plane that limits the rupture duration, meaning that the waveform inversion is restricted to a preset rupture duration. We therefore propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We have designed a synthetic inversion test to assess the feasibility of the method; the results show the promise of this idea, which requires further investigation.

  2. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1 regularization and of prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong robustness to noise.
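
    OWL-QN itself involves orthant projections inside an L-BFGS update; as a simpler stand-in for the same objective, the sketch below minimizes the ℓ1-regularized misfit with a prior model by proximal gradient (ISTA) on a toy linear problem. The operator, data and parameter values are illustrative.

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def ista_l1_prior(G, d, m_prior, lam=0.1, n_iter=200):
        """Minimize 0.5*||G m - d||^2 + lam*||m - m_prior||_1 by proximal gradient."""
        step = 1.0 / np.linalg.norm(G, 2) ** 2              # 1 / Lipschitz constant
        m = m_prior.copy()
        for _ in range(n_iter):
            grad = G.T @ (G @ m - d)                        # gradient of the data misfit
            z = m - step * grad
            m = m_prior + soft_threshold(z - m_prior, lam * step)
        return m

    # Tiny illustrative problem standing in for linearized FWI.
    rng = np.random.default_rng(3)
    G = rng.standard_normal((80, 50))
    m_true = np.zeros(50); m_true[[5, 20, 35]] = [1.0, -0.5, 0.8]
    d = G @ m_true + 0.01 * rng.standard_normal(80)
    m_prior = np.zeros(50)
    m_est = ista_l1_prior(G, d, m_prior)
    ```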

  3. Some practical aspects of prestack waveform inversion using a genetic algorithm: An example from the east Texas Woodbine gas sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallick, S.

    1999-03-01

    In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information of the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.

  4. A parallel algorithm for 2D visco-acoustic frequency-domain full-waveform inversion: application to a dense OBS data set

    NASA Astrophysics Data System (ADS)

    Sourbier, F.; Operto, S.; Virieux, J.

    2006-12-01

    We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from low to high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into three main steps: first, a symbolic analysis step that re-orders the matrix coefficients to minimize fill-in during the subsequent factorization and estimates the assembly tree of the matrix; second, the factorization, performed with dynamic scheduling to accommodate numerical pivoting, which provides the LU factors distributed over all the processors; third, the resolution, performed for multiple sources. To compute the gradient of the cost function, two simulations per shot are required (one to compute the forward wavefield and one to back-propagate the residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel: since the gradient is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient, and the gradient is then centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency, before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires one simulation per non-redundant shot and receiver position; the same strategy as the one used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a 4201 x 1001 finite-difference grid with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on six 32-bit dual-processor nodes with 4 GB of RAM per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.

  5. Magnetic Resonance Elastography: Measurement of Hepatic Stiffness Using Different Direct Inverse Problem Reconstruction Methods in Healthy Volunteers and Patients with Liver Disease.

    PubMed

    Saito, Shigeyoshi; Tanaka, Keiko; Hashido, Takashi

    2016-02-01

    The purpose of this study was to compare the mean hepatic stiffness values obtained by applying two different direct inverse problem reconstruction methods to magnetic resonance elastography (MRE). Thirteen healthy men (23.2±2.1 years) and 16 patients with liver diseases (78.9±4.3 years; 12 men and 4 women) were examined for this study using a 3.0-T MRI system. The healthy volunteers underwent three consecutive scans: two with a 70-Hz waveform and one with a 50-Hz waveform. The patients with liver disease, on the other hand, were scanned using the 70-Hz waveform only. The MRE data for each subject were processed twice for calculation of the mean hepatic stiffness (Pa), once using the multiscale direct inversion (MSDI) and once using the multimodel direct inversion (MMDI). There were no significant differences in the mean stiffness values among the repeated 70-Hz scans or between the different waveforms. However, the mean stiffness values obtained with the MSDI technique (with mask: 2895.3±255.8 Pa, without mask: 2940.6±265.4 Pa) were larger than those obtained with the MMDI technique (with mask: 2614.0±242.1 Pa, without mask: 2699.2±273.5 Pa). The reproducibility of measurements obtained using the two techniques was high for both the healthy volunteers [intraclass correlation coefficients (ICCs): 0.840-0.953] and the patients (ICC: 0.830-0.995). These results suggest that knowledge of the characteristics of different direct inversion algorithms is important for longitudinal liver stiffness assessments, such as comparisons between different scanners and evaluation of the response to fibrosis therapy.

  6. Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yunsong; Schuster, Gerard T.

    The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.

  7. Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data

    DOE PAGES

    Huang, Yunsong; Schuster, Gerard T.

    2017-10-26

    The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.
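
    The frequency-selection idea can be sketched as assigning each shot of the supergather its own disjoint set of frequency bins, so per-shot contributions never overlap and crosstalk is avoided by construction; the interleaved bin assignment below is one simple choice, not necessarily the authors'.

    ```python
    import numpy as np

    def frequency_select(shot_gathers):
        """Assign each shot a unique, disjoint set of frequency bins and build a
        frequency-encoded supergather (n_receivers x n_freq_bins, complex).

        shot_gathers: array of shape (n_shots, n_receivers, n_time)
        """
        n_shots, n_rec, n_t = shot_gathers.shape
        spectra = np.fft.rfft(shot_gathers, axis=-1)         # (n_shots, n_rec, n_freq)
        n_freq = spectra.shape[-1]
        supergather = np.zeros((n_rec, n_freq), dtype=complex)
        for s in range(n_shots):
            bins = np.arange(s, n_freq, n_shots)             # interleaved bin assignment
            supergather[:, bins] = spectra[s][:, bins]       # keep only this shot's bins
        return supergather

    # Example with random data standing in for marine streamer shot gathers.
    rng = np.random.default_rng(4)
    gathers = rng.standard_normal((8, 120, 1024))
    encoded = frequency_select(gathers)
    ```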

  8. Including Short Period Constraints In the Construction of Full Waveform Tomographic Models

    NASA Astrophysics Data System (ADS)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2015-12-01

    Thanks to the introduction of the Spectral Element Method (SEM) in seismology, which allows accurate computation of the seismic wavefield in complex media, the resolution of regional and global tomographic models has improved in recent years. However, due to computational costs, only long-period waveforms are considered, and only long-wavelength structure can be constrained. Thus, the resulting 3D models are smooth and only represent a small volumetric perturbation around a smooth reference model that does not include upper-mantle discontinuities (e.g. MLD, LAB). Extending the computations to shorter periods, necessary for the resolution of smaller-scale features, is computationally challenging. In order to overcome these limitations and to account for layered structure in the upper mantle in our full waveform tomography, we include information provided by short-period seismic observables (receiver functions and surface wave dispersion), sensitive to sharp boundaries and anisotropic structure, respectively. In a first step, receiver functions and dispersion curves are used to generate a number of 1D radially anisotropic shear velocity profiles using a trans-dimensional Markov-chain Monte Carlo (MCMC) algorithm. These 1D profiles include both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth) beneath selected stations and are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built by 1) interpolation between the available 1D profiles, and 2) homogenization of the layered 1D models to obtain an equivalent smooth 3D starting model in the period range of interest for waveform inversion. The waveforms used in the inversion are collected for paths contained in the region of study and filtered at periods longer than 40 s. We use the spectral element code "RegSEM" (Cupillard et al., 2012) for forward computations and a quasi-Newton inversion approach in which kernels are computed using normal mode perturbation theory. We present here the first results of such an approach after successive iterations of a full waveform tomography of the North American continent.

  9. Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Askan, A.; /Carnegie Mellon U.; Akcelik, V.

    2009-04-30

    We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill-posedness and multiple minima. To overcome ill-posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher-frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.
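
    The total variation regularization term mentioned above can be sketched with a smoothed TV functional and its gradient for a 2D material model; the smoothing parameter eps and the finite-difference treatment are illustrative.

    ```python
    import numpy as np

    def tv_value_and_grad(m, eps=1e-3):
        """Smoothed total-variation functional and its gradient for a 2D model m."""
        dz, dx = np.gradient(m)                         # finite-difference gradients
        mag = np.sqrt(dx ** 2 + dz ** 2 + eps ** 2)
        value = mag.sum()
        # The gradient of TV is minus the divergence of the normalized gradient field.
        pz, px = dz / mag, dx / mag
        div = np.gradient(pz, axis=0) + np.gradient(px, axis=1)
        return value, -div

    # Added to the data-misfit gradient with a weight alpha at each iteration:
    m = np.random.default_rng(5).standard_normal((100, 150))
    tv, tv_grad = tv_value_and_grad(m)
    alpha = 0.5
    # total_grad = data_misfit_grad + alpha * tv_grad   (data term not shown here)
    ```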

  10. Simultaneous inversion of seismic velocity and moment tensor using elastic-waveform inversion of microseismic data: Application to the Aneth CO2-EOR field

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Huang, L.

    2017-12-01

    Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate estimates of moment tensors. Microseismic waveforms contain information about the source moment tensors and about the wave propagation velocity along the wavepaths. We develop an elastic-waveform inversion method to jointly invert for the seismic velocity model and the moment tensor. We first use our adaptive moment-tensor joint inversion method to estimate moment tensors of microseismic events. Our adaptive moment-tensor inversion method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded using a single borehole geophone array. We use this inversion result as the initial model for our elastic-waveform inversion to minimize the cross-correlation-based data misfit between observed and synthetic data. We verify our method using synthetic microseismic data and obtain improved results for both moment tensors and the seismic velocity model. We apply our new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that our new inversion method significantly reduces the data misfit compared to the conventional ray-theory-based moment-tensor inversion.

  11. 3-D characterization of high-permeability zones in a gravel aquifer using 2-D crosshole GPR full-waveform inversion and waveguide detection

    NASA Astrophysics Data System (ADS)

    Klotzsche, Anja; van der Kruk, Jan; Linde, Niklas; Doetsch, Joseph; Vereecken, Harry

    2013-11-01

    Reliable high-resolution 3-D characterization of aquifers helps to improve our understanding of flow and transport processes when small-scale structures have a strong influence. Crosshole ground penetrating radar (GPR) is a powerful tool for characterizing aquifers due to the method's high resolution and sensitivity to porosity and soil water content. Recently, a novel GPR full-waveform inversion algorithm was introduced, which is here applied and used for 3-D characterization by inverting six crosshole GPR cross-sections collected between four wells arranged in a square configuration close to the Thur River in Switzerland. The inversion results in the saturated part of this gravel aquifer reveal a significant improvement in resolution for the dielectric permittivity and electrical conductivity images compared to ray-based methods. Consistent structures where acquisition planes intersect indicate the robustness of the inversion process. A decimetre-scale layer with high dielectric permittivity was revealed at a depth of 5-6 m in all six cross-sections analysed here, and a less prominent zone with high dielectric permittivity was found at a depth of 7.5-9 m. These high-permittivity layers act as low-velocity waveguides and are interpreted as high-porosity layers and possible zones of preferential flow. Porosity estimates from the permittivity models agree well with estimates from Neutron-Neutron logging data at the intersecting diagonal planes. Moreover, estimates of hydraulic permeability based on flowmeter logs confirm the presence of zones of preferential flow in these depth intervals. A detailed analysis of the measured data for transmitters located within the waveguides revealed increased trace energy due to late-arriving elongated wave trains, which were observed for receiver positions straddling this zone. For the same receiver positions within the waveguide, a distinct minimum in the trace energy was visible when the transmitter was located outside the waveguide. A novel amplitude analysis was proposed to explore these maxima and minima of the trace energy. Laterally continuous low-velocity waveguides and their boundaries were identified from the measured data alone. In contrast to the full-waveform inversion, this method follows a simple workflow and needs no detailed and time-consuming processing or inversion of the data. Comparison with the full-waveform inversion results confirmed the presence of the waveguides, illustrating that full-waveform inversion returns reliable results at the highest resolution currently possible at these scales. We envision that full-waveform inversion of GPR data will play an important role in a wide range of geological, hydrological, glacial and periglacial studies in the critical zone.

  12. The laboratory demonstration and signal processing of the inverse synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Gao, Si; Zhang, ZengHui; Xu, XianWen; Yu, WenXian

    2017-10-01

    This paper presents a coherent inverse synthetic-aperture imaging ladar (ISAL) system to obtain high-resolution images. A balanced coherent optical system was built in the laboratory with a binary phase-coded modulation transmit waveform, which differs from the conventional chirp. A complete digital signal processing solution is proposed, including both a quality phase gradient autofocus (QPGA) algorithm and a cubic phase function (CPF) algorithm. Some high-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high-resolution images can be achieved and that the influence of vibrations of the platform, targets and radar can be automatically compensated by the distinctive laboratory system and digital signal processing.

  13. Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kim, H.; Min, D.; Keehm, Y.

    2011-12-01

    Recent exploration targets for oil and gas resources are deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, it is necessary to delineate detailed subsurface structures, and the migration method is therefore an increasingly important factor in seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images for complicated models, even in the presence of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity models, which can be extracted in several ways. These days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of wave equations can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been used in waveform inversion, which accelerated the development of waveform inversion. In this study, we applied acoustic waveform inversion and reverse-time migration methods to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating carbonate reservoirs. We first extracted subsurface material properties from acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, with a conjugate gradient method for optimization, and was performed using the frequency-selection strategy. The results showed that carbonate reservoir models are clearly inverted by waveform inversion and that migration images based on the inversion results are quite reliable. Reservoir models of different thicknesses were also considered, and the results revealed that the lower boundary of the reservoir was not delineated because of energy loss. From these results, it was noted that carbonate reservoirs can be properly imaged and interpreted by waveform inversion and reverse-time migration methods. This work was supported by the Energy Resources R&D program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2009201030001A, No. 2010T100200133) and the Brain Korea 21 project of Energy System Engineering.

  14. Long-period GPS waveforms. What can GPS bring to Earth seismic velocity models?

    NASA Astrophysics Data System (ADS)

    Kelevitz, Krisztina; Houlié, Nicolas; Boschi, Lapo; Nissen-Meyer, Tarje; Giardini, Domenico

    2014-05-01

    It is now commonly accepted that high-rate GPS observations can provide reliable surface displacement waveforms (Cervelli et al., 2001; Langbein et al., 2006; Houlié et al., 2006; Houlié et al., 2011). For long-period (T>5 s) transients, it was shown that GPS and seismometer (STS-1) displacements are in agreement, at least for the vertical component (Houlié et al., Sci. Rep. 2011). We propose here to supplement existing long-period seismic networks with high-rate (>= 1 Hz) GPS data in order to improve the resolution of global seismic velocity models. GPS measurements provide a wide range of frequencies, going beyond the range of the STS-1 at the low-frequency end. Nowadays, almost 10,000 GPS receivers would be able to record data at 1 Hz, with 3000+ stations already streaming data in real time (RT). The reasons for this quick expansion are the price of receivers, their low maintenance, and the wide range of activities they can be used for (transport, science, public apps, navigation, etc.). We present work completed on the 1-Hz GPS records of the Hokkaido earthquake (25 September 2003, Mw=8.3). 3D waveforms have been computed with an improved, stabilised inversion algorithm in order to constrain the ground motion history. Through the better resolution of the inversion of the GPS phase observations, we determine displacement waveforms at frequencies ranging from 0.77 mHz to 330 mHz for a selection of sites. We compare inverted GPS waveforms with STS-1 waveforms and synthetic waveforms computed using 3D global wave propagation with SPECFEM. At co-located sites (STS-1 and GPS located within 10 km) the agreement is good for the vertical component between seismic (both real and synthetic) and GPS waveforms.

  15. Moment Inversion of the DPRK Nuclear Tests Using Finite-Difference Three-dimensional Strain Green's Tensors

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.; Wang, N.

    2017-12-01

    Accurate estimation of the source moment is important for discriminating underground explosions from earthquakes and other seismic sources. In this study, we invert for the full moment tensors of the recent seismic events (since 2016) at the Democratic People's Republic of Korea (DPRK) Punggye-ri test site. We use waveform data from broadband seismic stations located in China, Korea, and Japan in the inversion. Using a non-staggered-grid finite-difference algorithm, we calculate the strain Green's tensors (SGT) based on one-dimensional (1D) and three-dimensional (3D) Earth models. Taking advantage of source-receiver reciprocity, a SGT database pre-calculated and stored for the Punggye-ri test site is used in the inversion for the source mechanism of each event. With the source locations estimated from cross-correlation using regional Pn and Pn-coda waveforms, we obtain the optimal source mechanism that best fits the synthetics to the observed waveforms of both body and surface waves. The moment solutions of the first three events (2016-01-06, 2016-09-09, and 2017-09-03) show dominant isotropic components, as expected for explosions, though there are also notable non-isotropic components. The last event (8 minutes after the mb 6.3 explosion in 2017) contained a mainly implosive component, suggesting a collapse following the explosion. The solutions from the 3D model fit the observed waveforms better than the corresponding solutions from the 1D model. The uncertainty in the resulting moment solutions is influenced by heterogeneities not resolved by the Earth model, according to the waveform misfit. Using the moment solutions, we predict the peak ground acceleration at the Punggye-ri test site and compare the prediction with corresponding InSAR and other satellite images.

  16. Real-time digital signal recovery for a multi-pole low-pass transfer function system.

    PubMed

    Lee, Jhinhwan

    2017-08-01

    In order to solve the problems of waveform distortion and signal delay by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method of real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis on the convolution kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied on the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. This method is generalized for multi-pole low-pass systems and has noise characteristics of the inverse of the low-pass filter characteristics. This method can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
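
    For the special case of a discretized single-pole RC low-pass, the exact sample-by-sample inverse is simple, which conveys the idea behind the real-time recovery; the multi-pole moving-average formulation of the paper generalizes this, and the parameter values below are only illustrative.

    ```python
    import numpy as np

    def lowpass_1pole(x, alpha):
        """First-order IIR low-pass: y[n] = y[n-1] + alpha*(x[n] - y[n-1])."""
        y = np.zeros_like(x)
        for n in range(1, len(x)):
            y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
        return y

    def recover_1pole(y, alpha):
        """Exact real-time inverse of the single-pole low-pass above:
        x[n] = (y[n] - (1 - alpha)*y[n-1]) / alpha, computable sample by sample."""
        x = np.zeros_like(y)
        x[0] = y[0]
        x[1:] = (y[1:] - (1 - alpha) * y[:-1]) / alpha
        return x

    # Demonstration: a step input is distorted by the filter and then recovered.
    x = np.zeros(500); x[100:] = 1.0
    alpha = 0.05                                   # strong low-pass (large time constant)
    y = lowpass_1pole(x, alpha)
    x_rec = recover_1pole(y, alpha)                # matches x (amplifies noise in practice)
    ```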

  17. Crustal Stress and Strain Distribution in Sicily (Southern Italy) from Joint Analysis of Seismicity and Geodetic Data

    NASA Astrophysics Data System (ADS)

    Presti, D.; Neri, G.; Aloisi, M.; Cannavo, F.; Orecchio, B.; Palano, M.; Siligato, G.; Totaro, C.

    2014-12-01

    An updated database of earthquake focal mechanisms is compiled for the Sicilian region (southern Italy) and surrounding offshore areas, where the Nubia-Eurasia convergence coexists with the very slow residual rollback of the Ionian subducting slab. High-quality solutions selected from the literature and catalogs have been integrated with new solutions estimated in the present work using the Cut And Paste (CAP) waveform inversion method. In the CAP algorithm (Zhao and Helmberger, 1994; Zhu and Helmberger, 1996), each waveform is broken up into Pnl and surface wave segments, which are weighted differently during the inversion procedure. Integration of the new solutions with the ones selected from the literature and official catalogs led us to collect a database consisting exclusively of waveform-inversion data for earthquakes with minimum magnitude 2.6. The seismicity and focal mechanism distributions have been compared with crustal motion and strain data coming from GNSS analyses. For this purpose, GNSS-based observations collected over the investigated area by episodic measurements (1994-2013) as well as continuous monitoring (since 2006) were processed by the GAMIT/GLOBK software packages (Herring et al., 2010) following the approach described in Palano et al. (2011). To adequately investigate the crustal deformation pattern, the estimated GNSS velocities were aligned to a fixed Eurasian reference frame. The good agreement found between seismic and geodetic information helps to better define seismotectonic domains characterized by different kinematics. Building on the available geophysical information and on an early application of FEM algorithms, we have also started to investigate stress/strain fields in the crust of the study area, including depth dependence and relationships with rupture of the main seismogenic structures.

  18. Sensitivity analyses of acoustic impedance inversion with full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Yao, Gang; da Silva, Nuno V.; Wu, Di

    2018-04-01

    Acoustic impedance estimation is of significant importance to seismic exploration. In this paper, we use full-waveform inversion to recover the impedance from seismic data, and analyze the sensitivity of the acoustic impedance with respect to the source-receiver offset of the seismic data and to the initial velocity model. We parameterize the acoustic wave equation with velocity and impedance, and demonstrate three key aspects of acoustic impedance inversion. First, short-offset data are most suitable for acoustic impedance inversion. Second, acoustic impedance inversion is more compatible with data generated by density contrasts than by velocity contrasts. Finally, acoustic impedance inversion requires the starting velocity model to be very accurate in order to achieve a high-quality inversion. Based upon these observations, we propose a workflow for acoustic impedance inversion: (1) building a background velocity model with travel-time tomography or reflection waveform inversion; (2) recovering the intermediate-wavelength components of the velocity model with full-waveform inversion constrained by Gardner's relation; (3) inverting for the high-resolution acoustic impedance model with short-offset data through full-waveform inversion. We verify this workflow with synthetic tests based on the Marmousi model.
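
    Step (2) of the workflow ties density to velocity through Gardner's relation. Below is a minimal sketch using the commonly quoted constants (rho = 0.31 Vp^0.25 with Vp in m/s and rho in g/cc), which may differ from the values used in the paper.

    ```python
    import numpy as np

    def gardner_density(vp, a=0.31, b=0.25):
        """Gardner's relation rho = a * Vp**b (Vp in m/s, rho in g/cc for a=0.31)."""
        return a * vp ** b

    def acoustic_impedance(vp, rho):
        """Acoustic impedance Ip = rho * Vp."""
        return rho * vp

    # While updating velocity in step (2), keep density tied to it through
    # Gardner's relation so the inversion stays well constrained.
    vp = np.linspace(1500.0, 4500.0, 200)            # a 1D velocity profile, m/s
    rho = gardner_density(vp)                        # g/cc
    ip = acoustic_impedance(vp, rho * 1000.0)        # convert rho to kg/m^3 for SI impedance
    ```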

  19. Anisotropy effects on 3D waveform inversion

    NASA Astrophysics Data System (ADS)

    Stekl, I.; Warner, M.; Umpleby, A.

    2010-12-01

    In recent years 3D waveform inversion has become an achievable procedure for seismic data processing. A number of datasets have been inverted and presented (Warner et al. 2008; Ben Hadj Ali et al. 2008; Sirgue et al. 2010) using isotropic 3D waveform inversion. However, the question arises whether the results are affected by the isotropic assumption. Full-wavefield inversion techniques seek to match field data, wiggle-for-wiggle, to synthetic data generated by a high-resolution model of the sub-surface. In this endeavour, correctly matching the travel times of the principal arrivals is a necessary minimal requirement. In many, perhaps most, long-offset and wide-azimuth datasets, it is necessary to introduce some form of P-wave velocity anisotropy to match the travel times successfully. If this anisotropy is not also incorporated into the wavefield inversion, then results from the inversion will necessarily be compromised. We have incorporated anisotropy into our 3D wavefield tomography codes, characterised as spatially varying transverse isotropy with a tilted axis of symmetry (TTI anisotropy). This enhancement approximately doubles both the run time and the memory requirements of the code. We show that neglect of anisotropy can lead to significant artefacts in the recovered velocity models. We present results of inverting an anisotropic 3D dataset under the assumption of an isotropic earth and compare them with the anisotropic inversion result. As a test case, the Marmousi model, extended to 3D with no velocity variation in the third direction and with added spatially varying anisotropy, is used. The acquisition geometry is assumed to be OBC, with sources and receivers everywhere at the surface. We attempted inversion using both 2D and full 3D acquisition for this dataset. The results show that if anisotropy is not taken into account, the image looks plausible but most features are mispositioned in depth and space, even for relatively low anisotropy, which leads to incorrect results and may cause misinterpretation. However, if the correct physics is used, the results agree with the correct model. Our algorithm is relatively affordable and runs on standard PC clusters in acceptable time. References: Ben Hadj Ali, H., Operto, S. and Virieux, J., 2008. Velocity model building by 3D frequency-domain full-waveform inversion of wide-aperture seismic data. Geophysics, 73(6), VE101-VE117. Sirgue, L., Barkved, O.I., Dellinger, J., Etgen, J., Albertin, U. and Kommedal, J.H., 2010. Full waveform inversion: the next leap forward in imaging at Valhall. First Break, 28(4). Warner, M., Stekl, I. and Umpleby, A., 2008. Efficient and effective 3D wavefield tomography. 70th EAGE Conference & Exhibition.

  20. Probabilistic source mechanism estimation based on body-wave waveforms through shift and stack algorithm

    NASA Astrophysics Data System (ADS)

    Massin, F.; Malcolm, A. E.

    2017-12-01

    Knowing earthquake source mechanisms gives valuable information for earthquake response planning and hazard mitigation. Earthquake source mechanisms can be analyzed using long period waveform inversion (for moderate size sources with sufficient signal to noise ratio) and body-wave first motion polarity or amplitude ratio inversion (for micro-earthquakes with sufficient data coverage). A robust approach that gives both source mechanisms and their associated probabilities across all source scales would greatly simplify the determination of source mechanisms and allow for more consistent interpretations of the results. Following previous work on shift and stack approaches, we develop such a probabilistic source mechanism analysis, using waveforms, which does not require polarity picking. For a given source mechanism, the first period of the observed body-waves is selected for all stations, multiplied by their corresponding theoretical polarity and stacked together. (The first period is found from a manually picked travel time by measuring the central period where the signal power is concentrated, using the second moment of the power spectral density function.) As in other shift and stack approaches, our method is not based on the optimization of an objective function through an inversion. Instead, the power of the polarity-corrected stack is a proxy for the likelihood of the trial source mechanism, with the most powerful stack corresponding to the most likely source mechanism. Using synthetic data, we test our method for robustness to the data coverage, coverage gap, signal to noise ratio, travel-time picking errors and non-double couple component. We then present results for field data in a volcano-tectonic context. Our results are reliable when constrained by 15 body-wavelets, with gap below 150 degrees, signal to noise ratio over 1 and arrival time error below a fifth of the period (0.2T) of the body-wave. We demonstrate that the source scanning approach for source mechanism analysis has similar advantages to waveform inversion (full waveform data, no manual intervention, probabilistic approach) and similar applicability to polarity inversion (any source size, any instrument type).
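
    A minimal sketch of the polarity-corrected stack-power proxy is given below, using the standard double-couple P-wave radiation pattern (Aki and Richards convention) for the theoretical polarity; the wavelets, takeoff angles, azimuths and trial-mechanism grid are random placeholders rather than picked data.

    ```python
    import numpy as np

    def p_radiation(strike, dip, rake, takeoff, azimuth):
        """Double-couple P-wave radiation pattern (Aki & Richards convention), radians."""
        dphi = azimuth - strike
        return (np.cos(rake) * np.sin(dip) * np.sin(takeoff) ** 2 * np.sin(2 * dphi)
                - np.cos(rake) * np.cos(dip) * np.sin(2 * takeoff) * np.cos(dphi)
                + np.sin(rake) * np.sin(2 * dip)
                  * (np.cos(takeoff) ** 2 - np.sin(takeoff) ** 2 * np.sin(dphi) ** 2)
                + np.sin(rake) * np.cos(2 * dip) * np.sin(2 * takeoff) * np.sin(dphi))

    def stack_power(wavelets, takeoffs, azimuths, strike, dip, rake):
        """Power of the polarity-corrected stack of first-period body wavelets:
        the proxy for the likelihood of the trial mechanism."""
        pol = np.sign(p_radiation(strike, dip, rake, takeoffs, azimuths))
        stack = np.sum(pol[:, None] * wavelets, axis=0)
        return np.sum(stack ** 2)

    # Grid search over trial mechanisms; wavelets (n_stations x n_samples), takeoff
    # angles and azimuths would come from picked first periods and a 1D model.
    rng = np.random.default_rng(6)
    wavelets = rng.standard_normal((15, 40))
    takeoffs = np.deg2rad(rng.uniform(20, 90, 15))
    azimuths = np.deg2rad(rng.uniform(0, 360, 15))
    trials = [(s, d, r) for s in np.deg2rad(range(0, 360, 20))
                        for d in np.deg2rad(range(10, 90, 20))
                        for r in np.deg2rad(range(-180, 180, 30))]
    best = max(trials, key=lambda m: stack_power(wavelets, takeoffs, azimuths, *m))
    ```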

  1. Bessel smoothing filter for spectral-element mesh

    NASA Astrophysics Data System (ADS)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the efficiency and flexibility of the approach proposed.
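
    The inverse-filter idea, i.e. smoothing by solving the elliptic equation associated with the filter's inverse, can be sketched in 1D with finite differences and a conjugate-gradient solve; on the actual spectral-element mesh this is done through the weak form, and the true Bessel filter may involve a different operator power, so the sketch below is only a schematic analogue.

    ```python
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import cg

    def smooth_inverse_filter(m, L, dx):
        """Smooth m by solving (I - L^2 d^2/dx^2) m_s = m with conjugate gradients."""
        n = len(m)
        lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx ** 2
        A = identity(n) - L ** 2 * lap           # SPD operator of the inverse filter
        m_s, info = cg(A, m)
        assert info == 0                         # CG converged
        return m_s

    # Example: smooth a noisy 1D gradient with a 200 m coherent length.
    rng = np.random.default_rng(7)
    x = np.arange(0, 5000.0, 25.0)               # 25 m grid spacing
    rough = np.tanh((x - 2500.0) / 300.0) + 0.3 * rng.standard_normal(len(x))
    smooth = smooth_inverse_filter(rough, L=200.0, dx=25.0)
    ```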

  2. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single-component (P-wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, which is then followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values for the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure for extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not limited to seismic inversion: it could be used to invert different data types requiring not only multiple objectives but also multiple physics to describe them.
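
    The core ingredient of such a method is the non-dominated sorting of candidate earth models with respect to the per-component misfits. The sketch below implements the standard fast non-dominated sorting step used in NSGA-II-type algorithms; it is a generic illustration under our own assumptions, not the authors' implementation.

        import numpy as np

        def nondominated_sort(objectives):
            """Rank a population by Pareto dominance (minimization).

            objectives : (n_models, n_objectives) array, e.g. one waveform misfit per
                         seismic data component for each trial earth model.
            Returns a list of fronts; fronts[0] holds the indices of the current
            Pareto-optimal (non-dominated) models.
            """
            n = objectives.shape[0]
            dominated_by = [set() for _ in range(n)]
            domination_count = np.zeros(n, dtype=int)
            for i in range(n):
                for j in range(n):
                    if np.all(objectives[i] <= objectives[j]) and np.any(objectives[i] < objectives[j]):
                        dominated_by[i].add(j)       # i dominates j
                    elif np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                        domination_count[i] += 1     # i is dominated by j
            fronts, current = [], [i for i in range(n) if domination_count[i] == 0]
            while current:
                fronts.append(current)
                nxt = []
                for i in current:
                    for j in dominated_by[i]:
                        domination_count[j] -= 1
                        if domination_count[j] == 0:
                            nxt.append(j)
                current = nxt
            return fronts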

  3. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling]

    DOE PAGES

    Fee, David; Izbekov, Pavel; Kim, Keehoon; ...

    2017-10-09

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here, we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  4. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fee, David; Izbekov, Pavel; Kim, Keehoon

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here, we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  5. Frequency Domain Full-Waveform Inversion in Imaging Thrust Related Features

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; Zelt, C. A.

    2010-12-01

    Seismic acquisition in rough terrain such as mountain belts suffers from problems related to near-surface conditions, such as statics, inconsistent energy penetration, rapid decay of signal, and imperfect receiver coupling. Moreover, in the presence of weakly compacted soil, strong ground roll may obscure the reflection arrivals at near offsets, further diminishing the prospect of estimating a reliable near-surface image through conventional processing. Traveltime and waveform inversion not only overcome the simplistic assumptions inherent in conventional processing, such as hyperbolic moveout and the convolutional model, but also use parts of the seismic coda, such as the direct arrival and refractions, that are discarded in conventional processing. Traveltime and waveform inversion are model-based methods that honour the physics of wave propagation. Given the right set of preconditioned data and starting model, waveform inversion in particular has emerged as a powerful tool for velocity model building. This paper examines two case studies on waveform inversion using real data from the Naga Thrust Belt in Northeast India. Waveform inversion in this paper is performed in the frequency domain and is multiscale in nature, i.e., the inversion progressively ascends from the lower to the higher end of the frequency spectrum, increasing the wavenumber content of the recovered model. Since the real data are band-limited, the success of waveform inversion depends on how well the starting model can account for the missing low wavenumbers. In this paper, it is observed that the required starting model can be prepared using the regularized inversion of direct and reflected arrival times.
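
    The multiscale strategy described above can be summarized by a simple frequency-continuation loop. The sketch below is a schematic illustration only: the forward and gradient operators are user-supplied callables (hypothetical placeholders for a real frequency-domain modelling engine), and a plain steepest-descent update stands in for the actual optimizer.

        import numpy as np

        def multiscale_fwi(model, observed, forward, gradient,
                           freqs=(11.0, 13.0, 15.0, 17.0, 19.0), n_iter=10, step=1e-3):
            """Frequency-continuation FWI sketch: invert low frequencies first so the
            long-wavelength model is recovered before short-wavelength detail.

            forward(model, f)            -> monochromatic synthetic data (user-supplied)
            gradient(model, f, residual) -> misfit gradient at frequency f (user-supplied)
            observed[f]                  -> observed monochromatic data at frequency f
            """
            for f in freqs:
                for _ in range(n_iter):
                    residual = forward(model, f) - observed[f]
                    g = gradient(model, f, residual)
                    model = model - step * g / (np.abs(g).max() + 1e-12)  # steepest descent
            return model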

  6. Estimation of the gravitational wave polarizations from a nontemplate search

    NASA Astrophysics Data System (ADS)

    Di Palma, Irene; Drago, Marco

    2018-01-01

    Gravitational wave astronomy is just beginning, after the recent success of four direct detections of binary black hole (BBH) mergers and the first observation of a binary neutron star inspiral, with many more events expected to come. Given the possibility of detecting waves from astrophysical processes that are not exactly modeled, it is essential to be able to calculate the polarization waveforms for searches that use nontemplate algorithms. In such a case, the waveform polarizations are the only quantities that contain direct information about the generating process. We present a new tool to estimate the inverse solution of gravitational wave transient signals, starting from the analysis of the signal properties of a nontemplate algorithm that is open to a wider class of gravitational signals than those covered by template algorithms. We highlight the contributions to the wave polarization associated with the detector response, the sky localization, and the polarization angle of the source. We assess the performance of the method and its implications using two main classes of transient signals, representing limiting cases of simple and complicated morphologies. The performance is encouraging for the tested waveforms: the correlation between the original and the reconstructed waveforms ranges from better than 80% for simple morphologies to better than 50% for complicated ones. For a nontemplate search, these results can be considered satisfactory for reconstructing the astrophysical progenitor.

  7. Joint design of large-tip-angle parallel RF pulses and blipped gradient trajectories.

    PubMed

    Cao, Zhipeng; Donahue, Manus J; Ma, Jun; Grissom, William A

    2016-03-01

    To design multichannel large-tip-angle kT-points and spokes radiofrequency (RF) pulses and gradient waveforms for transmit field inhomogeneity compensation in high field magnetic resonance imaging. An algorithm to design RF subpulse weights and gradient blip areas is proposed to minimize a magnitude least-squares cost function that measures the difference between realized and desired state parameters in the spin domain, and penalizes integrated RF power. The minimization problem is solved iteratively with interleaved target phase updates, RF subpulse weights updates using the conjugate gradient method with optimal control-based derivatives, and gradient blip area updates using the conjugate gradient method. Two-channel parallel transmit simulations and experiments were conducted in phantoms and human subjects at 7 T to demonstrate the method and compare it to small-tip-angle-designed pulses and circularly polarized excitations. The proposed algorithm designed more homogeneous and accurate 180° inversion and refocusing pulses than other methods. It also designed large-tip-angle pulses on multiple frequency bands with independent and joint phase relaxation. Pulses designed by the method improved specificity and contrast-to-noise ratio in a finger-tapping spin echo blood oxygen level dependent functional magnetic resonance imaging study, compared with circularly polarized mode refocusing. A joint RF and gradient waveform design algorithm was proposed and validated to improve large-tip-angle inversion and refocusing at ultrahigh field. © 2015 Wiley Periodicals, Inc.
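
    The interleaved target-phase and weight updates are essentially a variable-exchange approach to the magnitude least-squares problem. The sketch below shows that core idea for a generic complex system matrix A and desired magnitude profile d; it deliberately omits the spin-domain parametrization, the optimal-control derivatives and the gradient blip updates described in the abstract, so it is an illustrative simplification rather than the authors' algorithm.

        import numpy as np

        def magnitude_least_squares(A, d, lam=1e-2, n_iter=50):
            """Variable-exchange sketch for magnitude least-squares pulse design:
            alternate between assigning the target phase from the current excitation
            pattern and solving a regularized linear least-squares problem for the
            RF subpulse weights.

            A : (n_voxels, n_weights) complex system matrix
            d : (n_voxels,) desired magnitude profile
            """
            x = np.zeros(A.shape[1], dtype=complex)
            phase = np.zeros(A.shape[0])
            AtA = A.conj().T @ A + lam * np.eye(A.shape[1])
            for _ in range(n_iter):
                target = d * np.exp(1j * phase)                 # magnitude target, free phase
                x = np.linalg.solve(AtA, A.conj().T @ target)   # regularized LS weight update
                phase = np.angle(A @ x)                         # interleaved phase update
            return x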

  8. Seismotectonics of the Eastern Himalayan System and Indo-Burman Convergence Zone Using Seismic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Mitra, S.; Suresh, G.

    2014-12-01

    The Eastern Himalayan System (east of 88°E) is distinct from the rest of the India-Eurasia continental collision, due to a wider zone of distributed deformation, oblique convergence across two orthogonal plate boundaries and the near absence of foreland basin sedimentary strata. To understand the seismotectonics of this region we study the spatial distribution and source mechanisms of earthquakes originating within the Eastern Himalaya, northeast India and the Indo-Burman Convergence Zone (IBCZ). We compute focal mechanisms of 32 moderate-to-large earthquakes (mb ≥ 5.4) by modeling teleseismic P- and SH-waveforms from GDSN stations using a least-squares inversion algorithm, and of 7 small-to-moderate earthquakes (3.5 ≤ mb < 5.4) by modeling local P- and S-waveforms from the NorthEast India Telemetered Network using a non-linear grid search algorithm. We also include source mechanisms from previous studies, either computed by waveform inversion or by first-motion polarity from analog data. The depth distribution of modeled earthquakes reveals that the seismogenic layer beneath northeast India is ~45 km thick. From the source mechanisms we observe that moderate earthquakes in northeast India are spatially clustered in five zones with distinct mechanisms: (a) thrust earthquakes within the Eastern Himalayan wedge, on north-dipping low-angle faults; (b) thrust earthquakes along the northern edge of the Shillong Plateau, on a high-angle south-dipping fault; (c) dextral strike-slip earthquakes along the Kopili fault zone, between the Shillong Plateau and Mikir Hills, extending southeast beneath the Naga Fold belts; (d) dextral strike-slip earthquakes within the Bengal Basin, immediately south of the Shillong Plateau; and (e) deep-focus (>50 km) thrust earthquakes within the IBCZ. Combining these with GPS geodetic observations, it is evident that the N20°E convergence between India and Tibet is accommodated as elastic strain both within the eastern Himalaya and in the regions surrounding the Shillong Plateau. We hypothesize that the strike-slip earthquakes south of the Plateau occur on re-activated continental rifts paralleling the Eocene hinge zone. The distribution of earthquake hypocenters across the IBCZ reveals active subduction of the Indian plate beneath the Burma micro-plate.

  9. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. Initial solution and curvature information of the objective function that gradient methods need are not required in our approach. Moreover globally optimal solutions can be efficiently obtained. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculations required by the method designed in this study is much less than that of previous grid search methods.

  10. Time-domain full waveform inversion using instantaneous phase information with damping

    NASA Astrophysics Data System (ADS)

    Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun

    2018-06-01

    In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. The instantaneous phase information has great potential for overcoming the local minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which comes from the numerical calculation, prevents its direct application. In order to avoid the phase wrapping problem, we choose to use the exponential phase combined with a damping method, which gives an instantaneous-phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase, and also derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous-phase-based inversion are compared with numerical examples, which indicate that, when low-frequency information is absent from the seismic data, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
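
    The exponential of the instantaneous phase can be computed directly from the analytic signal, which sidesteps explicit phase unwrapping. The sketch below (a simplified illustration under our own assumptions, not the authors' code) builds such an attribute with a Hilbert transform and an optional damping factor, and evaluates a simple L2 misfit between observed and synthetic exponential phases.

        import numpy as np
        from scipy.signal import hilbert

        def exponential_instantaneous_phase(trace, damping=0.0, dt=1.0):
            """Compute exp(i*phi(t)) of an (optionally damped) seismic trace.

            The analytic signal is obtained with the Hilbert transform; using the
            unit-magnitude complex exponential of the instantaneous phase avoids
            explicit phase unwrapping. A damping factor exp(-damping*t) is applied
            before the transform, mimicking a damped multi-stage strategy.
            """
            t = np.arange(len(trace)) * dt
            analytic = hilbert(trace * np.exp(-damping * t))
            return analytic / (np.abs(analytic) + 1e-12)   # = exp(i * instantaneous phase)

        def phase_misfit(obs, syn, damping=0.0, dt=1.0):
            """Simple L2 misfit between exponential instantaneous phases (sketch)."""
            e_obs = exponential_instantaneous_phase(obs, damping, dt)
            e_syn = exponential_instantaneous_phase(syn, damping, dt)
            return 0.5 * np.sum(np.abs(e_syn - e_obs) ** 2)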

  11. Approximate solutions of acoustic 3D integral equation and their application to seismic modeling and full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2017-10-01

    Over the recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, which are more accurate than the classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. Those developments can be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of these methods and provide an estimate of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of these approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient compared with the Born approximation. The developed inversion algorithm is based on conjugate-gradient-type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.

  12. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb > 5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P- and SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, in many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that the source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.
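
    For orientation, moment tensor inversion of long-period waveforms reduces, once Green's-function excitation kernels are available, to a linear least-squares problem for the six independent tensor elements. The generic sketch below is not Sipkin's optimal-filter algorithm; it only illustrates that linear step and a common measure of the departure from a pure double couple.

        import numpy as np

        def invert_moment_tensor(G, d):
            """Least-squares moment tensor inversion (generic sketch).

            G : (n_samples_total, 6) matrix of Green's-function excitation kernels,
                one column per independent moment-tensor component, with all stations
                and components concatenated along the rows.
            d : (n_samples_total,) concatenated observed long-period waveforms.
            Returns the six moment-tensor elements and the waveform residual norm.
            """
            m, residuals, rank, _ = np.linalg.lstsq(G, d, rcond=None)
            return m, residuals

        def double_couple_deviation(m):
            """Epsilon measure of deviation from a pure double couple (0 for a pure
            DC, 0.5 for a pure CLVD), computed from the deviatoric eigenvalues."""
            M = np.array([[m[0], m[3], m[4]],
                          [m[3], m[1], m[5]],
                          [m[4], m[5], m[2]]])
            M = M - np.trace(M) / 3.0 * np.eye(3)      # remove isotropic part
            eig = np.sort(np.linalg.eigvalsh(M))
            return abs(eig[1]) / max(abs(eig[0]), abs(eig[2]))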

  13. Effect of surface-related Rayleigh and multiple waves on velocity reconstruction with time-domain elastic FWI

    NASA Astrophysics Data System (ADS)

    Fang, Jinwei; Zhou, Hui; Zhang, Qingchen; Chen, Hanming; Wang, Ning; Sun, Pengyuan; Wang, Shucheng

    2018-01-01

    It is critically important to assess the effectiveness of elastic full waveform inversion (FWI) algorithms when FWI is applied to real land seismic data that include strong surface and multiple waves related to the air-earth boundary. In this paper, we review the realization of the free-surface boundary condition in staggered-grid finite-difference (FD) discretization of the elastic wave equation, and analyze the impact of the free surface on FWI results. To reduce input/output (I/O) operations in the gradient calculation, we adopt the boundary value reconstruction method to rebuild the source wavefields during the backward propagation of the residual data. A time-domain multiscale inversion strategy is conducted by using a convolutional objective function, and a multi-GPU parallel programming technique is used to further accelerate our elastic FWI. Forward simulation and elastic FWI examples without and with the free surface are shown and analyzed, respectively. Numerical results indicate that elastic FWI that does not incorporate the free surface fails to recover a good inversion result from Rayleigh-wave-contaminated observed data. By contrast, when the free surface is incorporated into FWI, the inversion results become better. We also discuss the dependence of the Rayleigh-waveform-incorporated FWI on the accuracy of the initial models, especially the accuracy of the shallow part of the initial models.

  14. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2017-12-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than in other areas, due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition, a weighted average of extended imaging gathers, can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  15. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2018-02-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than in other areas, due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition, a weighted average of extended imaging gathers, can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.
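
    The wavefield-reconstruction idea mentioned in both versions of this abstract rests on the time-reversibility of the lossless wave-equation stencil: keeping only the final two time slices (together with the boundary treatment) is enough to regenerate the source wavefield while the residuals are back-propagated. The 1-D sketch below illustrates the reversed leapfrog recursion under these simplifying assumptions; it ignores the source term and the specific random-boundary bookkeeping.

        import numpy as np

        def reconstruct_backward(u_last, u_prev, c, dx, dt, nt):
            """Rebuild earlier wavefields from the final two time slices.

            u_last, u_prev : wavefield at the final step and one step earlier.
            For the lossless acoustic wave equation the explicit second-order
            update is time-reversible, so these two slices (plus the boundary
            treatment) suffice to regenerate u at all earlier steps.
            """
            wavefields = [u_last.copy()]
            u_next, u_curr = u_last.copy(), u_prev.copy()
            r2 = (c * dt / dx) ** 2
            for _ in range(nt - 1):
                lap = np.zeros_like(u_curr)
                lap[1:-1] = u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]
                u_before = 2 * u_curr - u_next + r2 * lap   # reversed leapfrog step
                wavefields.append(u_curr.copy())
                u_next, u_curr = u_curr, u_before
            return wavefields[::-1]   # earliest reconstructed slice first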

  16. 2-D traveltime and waveform inversion for improved seismic imaging: Naga Thrust and Fold Belt, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, Priyank; Zelt, Colin A.; Bally, Albert W.; Dasgupta, Rahul

    2008-05-01

    Exploration along the Naga Thrust and Fold Belt in the Assam province of Northeast India encounters geological as well as logistic challenges. Drilling for hydrocarbons, traditionally guided by surface manifestations of the Naga thrust fault, faces additional challenges in the northeast where the thrust fault gradually deepens leaving subtle surface expressions. In such an area, multichannel 2-D seismic data were collected along a line perpendicular to the trend of the thrust belt. The data have a moderate signal-to-noise ratio and suffer from ground roll and other acquisition-related noise. In addition to data quality, the complex geology of the thrust belt limits the ability of conventional seismic processing to yield a reliable velocity model which in turn leads to poor subsurface image. In this paper, we demonstrate the application of traveltime and waveform inversion as supplements to conventional seismic imaging and interpretation processes. Both traveltime and waveform inversion utilize the first arrivals that are typically discarded during conventional seismic processing. As a first step, a smooth velocity model with long wavelength characteristics of the subsurface is estimated through inversion of the first-arrival traveltimes. This velocity model is then used to obtain a Kirchhoff pre-stack depth-migrated image which in turn is used for the interpretation of the fault. Waveform inversion is applied to the central part of the seismic line to a depth of ~1 km where the quality of the migrated image is poor. Waveform inversion is performed in the frequency domain over a series of iterations, proceeding from low to high frequency (11-19 Hz) using the velocity model from traveltime inversion as the starting model. In the end, the pre-stack depth-migrated image and the waveform inversion model are jointly interpreted. This study demonstrates that a combination of traveltime and waveform inversion with Kirchhoff pre-stack depth migration is a promising approach for the interpretation of geological structures in a thrust belt.

  17. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

    Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in seismic field data processing. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, can obtain more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and low-frequency information is absent, we can still obtain good inversion results with the WMD method. Numerical examples of anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI because it can reconstruct low-frequency information, lower the dominant frequency in the adjoint source, and resist noise.

  18. Line-source simulation for shallow-seismic data. Part 2: full-waveform inversion—a synthetic 2-D case study

    NASA Astrophysics Data System (ADS)

    Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.

    2014-09-01

    Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with √(t⁻¹) (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend to scale the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and to gradually shift to applying a √(t⁻¹) time-domain taper and scaling the waveforms with r√2 for larger receiver offsets r. We call this the hybrid transformation, which is adapted for direct body and Rayleigh waves, and demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on the Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for the 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves. In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI for subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when no explicit correction is applied to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance is best, in terms of model reproduction and the ability to reproduce the original data in a 3-D simulation, if the inverted waveforms are obtained by the hybrid transformation.
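
    The single-velocity part of the recipe quoted above translates into a few lines of code: convolve the point-source seismogram with √(t⁻¹), which supplies the π/4 phase shift, and scale with √(2 r v_ph). The sketch below implements only this small-offset branch and omits the offset-dependent blending of the full hybrid transformation, so it is an approximation of the published procedure.

        import numpy as np

        def point_to_line_source(trace, dt, r, v_ph):
            """Single-velocity point-to-line-source simulation (sketch).

            trace : point-source seismogram sampled at interval dt
            r     : source-receiver offset, v_ph : phase velocity
            Convolve with 1/sqrt(t) (pi/4 phase shift) and scale with
            sqrt(2 * r * v_ph); the hybrid offset-dependent taper is omitted.
            """
            n = len(trace)
            t = (np.arange(n) + 0.5) * dt          # avoid the 1/sqrt(0) singularity
            kernel = 1.0 / np.sqrt(t)
            convolved = np.convolve(trace, kernel)[:n] * dt
            return np.sqrt(2.0 * r * v_ph) * convolved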

  19. Exploratory Development for a High Reliability Flaw Characterization Module.

    DTIC Science & Technology

    1985-03-01

    deconvolution), and displaying the waveforms and the complex Fourier spectra (magnitude and phase or real and imaginary parts) on hard copies. The Born...shifted, and put into the Born inversion algorithm. Hard copies of the Born inversion results of the type displayed in Figure 6 were obtained for each...nickel alloys than in titanium alloys because melt practice is not yet sufficiently developed to prevent the introduction of voids and hard oxide

  20. Seismic tomography of the southern California crust based on spectral-element and adjoint methods

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Maggi, Alessia; Tromp, Jeroen

    2010-01-01

    We iteratively improve a 3-D tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3-D model is provided by the Southern California Earthquake Center. The data set comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations. The new crustal model, m16, is described in terms of independent shear (VS) and bulk-sound (VB) wave speed variations. It exhibits strong heterogeneity, including local changes of +/-30 per cent with respect to the initial 3-D model. The model reveals several features that relate to geological observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  1. Adjoint Tomography of the Southern California Crust (Invited)

    NASA Astrophysics Data System (ADS)

    Tape, C.; Liu, Q.; Maggi, A.; Tromp, J.

    2009-12-01

    We iteratively improve a three-dimensional tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3D model is provided by the Southern California Earthquake Center. The dataset comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations and a total of 0.8 million CPU hours. The new crustal model, m16, is described in terms of independent shear (Vs) and bulk-sound (Vb) wavespeed variations. It exhibits strong heterogeneity, including local changes of ±30% with respect to the initial 3D model. The model reveals several features that relate to geologic observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.

  2. Time-lapse seismic waveform inversion for monitoring near-surface microbubble injection

    NASA Astrophysics Data System (ADS)

    Kamei, R.; Jang, U.; Lumley, D. E.; Mouri, T.; Nakatsukasa, M.; Takanashi, M.

    2016-12-01

    Seismic monitoring of the Earth provides valuable information regarding the time-varying changes in subsurface physical properties that are caused by natural or man-made processes. However, the resulting changes in subsurface properties are often small both in terms of magnitude and spatial extent, leading to seismic data differences that are difficult to detect at typical non-repeatable noise levels. In order to better extract information from the time-lapse data, exploiting the full seismic waveform information can be critical, since detected amplitude or traveltime changes may be minimal. We explore methods of waveform inversion that estimate an optimal model of time-varying elastic parameters at the wavelength scale to fit the observed time-lapse seismic data with modelled waveforms based on numerical solutions of the wave equation. We apply acoustic waveform inversion to time-lapse cross-well monitoring surveys of 64-m well intervals, and estimate the velocity changes that occur during the injection of microbubble water into shallow unconsolidated Quaternary sediments in the Kanto basin of Japan at a depth of 25 m below the surface. Microbubble water consists of water infused with air bubbles of diameter less than 0.1 mm, and may be useful for improving resistance to ground liquefaction during major earthquakes. Monitoring the space-time distribution and physical properties of microbubble injection is therefore important to understanding the full potential of the technique. Repeated monitoring surveys (>10) reveal transient behaviours in the waveforms during microbubble injection. Time-lapse waveform inversion detects changes in P-wave velocity of less than 1 per cent, initially as a velocity increase and subsequently as a velocity decrease. The velocity changes are mainly imaged within a thin (1 m) layer between the injection and the receiver wells, suggesting the influence of the fluvial sediment depositional environment on fluid flow. The resulting velocity models fit the observed waveforms very well, supporting the validity of the estimated velocity changes. In order to further improve the estimation of velocity changes, we investigate the limitations of acoustic waveform inversion, and apply elastic waveform inversion to the time-lapse data set.

  3. Source-independent full waveform inversion of seismic data

    DOEpatents

    Lee, Ki Ha

    2006-02-14

    A set of seismic trace data is collected in an input data set that is first Fourier transformed in its entirety into the frequency domain. A normalized wavefield is obtained for each trace of the input data set in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the set of seismic trace data. The normalized wavefield is source independent, complex, and dimensionless. The normalized wavefield is shown to be uniquely defined as the normalized impulse response, provided that a certain condition is met for the source. This property allows construction of the inversion algorithm disclosed herein, without any source or source coupling information. The algorithm minimizes the error between data normalized wavefield and the model normalized wavefield. The methodology is applicable to any 3-D seismic problem, and damping may be easily included in the process.
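
    A minimal sketch of the normalization step, under our own assumptions about discretization: each trace of a common-source gather is Fourier transformed and divided by the spectrum of a chosen reference trace, so the unknown source spectrum cancels and the resulting wavefield is complex and dimensionless; the inversion then minimizes the difference between data and model normalized wavefields.

        import numpy as np

        def normalized_wavefield(traces, ref_index=0, eps=1e-8):
            """Source-independent normalization sketch: Fourier-transform every trace
            and divide by the spectrum of a reference trace from the same gather, so
            the (unknown) source spectrum cancels.

            traces : (n_traces, n_samples) array of a common-source gather.
            """
            spectra = np.fft.rfft(traces, axis=1)
            return spectra / (spectra[ref_index] + eps)

        def normalized_misfit(obs, syn, ref_index=0):
            """L2 misfit between data and model normalized wavefields (sketch)."""
            e = normalized_wavefield(syn, ref_index) - normalized_wavefield(obs, ref_index)
            return 0.5 * np.sum(np.abs(e) ** 2)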

  4. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation of Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a non-linear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to the processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
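
    The Gaussian-decomposition step can be reproduced with a standard non-linear least-squares fit. The sketch below (in Python with scipy rather than the R implementation used in the study, and with naive initial guesses) fits a sum of Gaussian pulses to a return waveform; the fitted echo positions can then be turned into terrain elevation and canopy height estimates.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian_mixture(t, *params):
            """Sum of Gaussians; params = (A1, mu1, sigma1, A2, mu2, sigma2, ...)."""
            y = np.zeros_like(t, dtype=float)
            for a, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
                y += a * np.exp(-0.5 * ((t - mu) / sig) ** 2)
            return y

        def fit_waveform(t, w, n_echoes):
            """Non-linear least-squares decomposition of a return waveform into
            n_echoes Gaussian pulses. Initial guesses are spread over the record;
            a real workflow would seed them from detected peaks (e.g. the last
            echo approximates the ground, the first the canopy top)."""
            span = t[-1] - t[0]
            p0 = []
            for k in range(n_echoes):
                p0 += [w.max(), t[0] + (k + 0.5) * span / n_echoes, span / 20]
            popt, _ = curve_fit(gaussian_mixture, t, w, p0=p0, maxfev=20000)
            return popt.reshape(n_echoes, 3)   # rows of (amplitude, center, width)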

  5. Probabilistic joint inversion of waveforms and polarity data for double-couple focal mechanisms of local earthquakes

    NASA Astrophysics Data System (ADS)

    Wéber, Zoltán

    2018-06-01

    Estimating the mechanisms of small (M < 4) earthquakes is quite challenging. A common scenario is that neither the available polarity data alone nor the well predictable near-station seismograms alone are sufficient to obtain reliable focal mechanism solutions for weak events. To handle this situation we introduce here a new method that jointly inverts waveforms and polarity data following a probabilistic approach. The procedure called joint waveform and polarity (JOWAPO) inversion maps the posterior probability density of the model parameters and estimates the maximum likelihood double-couple mechanism, the optimal source depth and the scalar seismic moment of the investigated event. The uncertainties of the solution are described by confidence regions. We have validated the method on two earthquakes for which well-determined focal mechanisms are available. The validation tests show that including waveforms in the inversion considerably reduces the uncertainties of the usually poorly constrained polarity solutions. The JOWAPO method performs best when it applies waveforms from at least two seismic stations. If the number of the polarity data is large enough, even single-station JOWAPO inversion can produce usable solutions. When only a few polarities are available, however, single-station inversion may result in biased mechanisms. In this case some caution must be taken when interpreting the results. We have successfully applied the JOWAPO method to an earthquake in North Hungary, whose mechanism could not be estimated by long-period waveform inversion. Using 17 P-wave polarities and waveforms at two nearby stations, the JOWAPO method produced a well-constrained focal mechanism. The solution is very similar to those obtained previously for four other events that occurred in the same earthquake sequence. The analysed event has a strike-slip mechanism with a P axis oriented approximately along an NE-SW direction.

  6. Pseudo 2D elastic waveform inversion for attenuation in the near surface

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Zhang, Jie

    2017-08-01

    Seismic waveform propagation can be significantly affected by heterogeneities in the near-surface zone (0-500 m depth). As a result, it is important to obtain as much near-surface information as possible. Seismic attenuation, characterized by the QP and QS factors, may affect the seismic waveform in both phase and amplitude; however, it is rarely estimated and applied to the near-surface zone in seismic data processing. Applying a 1D elastic full waveform modelling program, we demonstrate that such effects cannot be overlooked in the waveform computation if the value of the Q factor is lower than approximately 100. Further, we develop a pseudo-2D elastic waveform inversion method in the common midpoint (CMP) domain that jointly inverts early arrivals for QP and surface waves for QS. In this method, although the forward problem is in 1D, by applying 2D model regularization we obtain 2D QP and QS models through simultaneous inversion. A cross-gradient constraint between the QP and QS models is applied to ensure structural consistency of the 2D inversion results. We present synthetic examples and a real case study from an oil field in China.
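
    The cross-gradient constraint used to tie the QP and QS models together has a compact discrete form. The sketch below (a generic illustration, not the authors' code) evaluates the cross-gradient of two co-located 2-D models and the corresponding penalty term that would be added to the joint misfit.

        import numpy as np

        def cross_gradient(m1, m2, dx, dz):
            """Cross-gradient function t = grad(m1) x grad(m2) for two 2-D models
            (here QP and QS maps on the same grid). t vanishes wherever the two
            models change along the same (or opposite) spatial direction, so
            penalizing sum(t**2) encourages structurally consistent results."""
            g1z, g1x = np.gradient(m1, dz, dx)
            g2z, g2x = np.gradient(m2, dz, dx)
            return g1x * g2z - g1z * g2x

        def cross_gradient_penalty(m1, m2, dx, dz, weight=1.0):
            """Scalar regularization term added to the joint misfit (sketch)."""
            t = cross_gradient(m1, m2, dx, dz)
            return weight * 0.5 * np.sum(t ** 2)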

  7. Inferring global upper-mantle shear attenuation structure by waveform tomography using the spectral element method

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Haydar; Romanowicz, Barbara

    2018-06-01

    We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole mantle seismic velocity model, SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of attenuation crustal structure on the retrieved mantle attenuation structure. While a constant 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as starting model for attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. Below 250 km, we observe strong attenuation in the southwestern Pacific and eastern Africa, while low attenuation zones fade beneath most of the cratons. The strong negative correlation of Q^{-1}_μ and VS anomalies at shallow upper-mantle depths points to a common dominant origin for the two, likely due to variations in thermal structure. A comparison with two other global upper-mantle attenuation models shows promising consistency. As we updated the elastic 3-D model in alternate iterations, we found that the VS part of the model was stable, while the ξ structure evolution was more pronounced, indicating that it may be important to include 3-D attenuation effects when inverting for ξ, possibly due to the influence of dispersion corrections on this less well-constrained parameter.

  8. Intracardiac impedance response during acute AF internal cardioversion using novel rectilinear and capacitor-discharge waveforms.

    PubMed

    Rababah, A S; Walsh, S J; Manoharan, G; Walsh, P R; Escalona, O J

    2016-07-01

    Intracardiac impedance (ICI) is a major determinant of success during internal cardioversion of atrial fibrillation (AF). However, there have been few studies that have examined the dynamic behaviour of atrial impedance during internal cardioversion in relation to clinical outcome. In this study, voltage and current waveforms captured during internal cardioversion of acute AF in ovine models, using novel radiofrequency (RF) generated low-tilt rectilinear and conventional capacitor-discharge-based shock waveforms, were retrospectively analysed using a digital signal processing algorithm to investigate the dynamic behaviour of atrial impedance during cardioversion. The algorithm was specifically designed to facilitate the simultaneous analysis of multiple impedance parameters, including mean intracardiac impedance (Z_M), intracardiac impedance variance (ICIV) and impedance amplitude spectrum area (IAMSA), for each cardioversion event. A significant reduction in ICI was observed when comparing two successive shocks of increasing energy where the cardioversion outcome was successful. In addition, the ICIV and IAMSA variables were found to correlate inversely with the magnitude of energy delivered, with a stronger correlation found for the former parameter. In conclusion, ICIV and IAMSA have been identified as two key dynamic intracardiac impedance variables that may prove useful for a better understanding of the cardioversion process and that could potentially act as prognostic markers with respect to clinical outcome.
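
    As a rough illustration of how such impedance variables can be extracted from recorded shock waveforms, the sketch below computes an instantaneous impedance trace from sampled voltage and current and derives a mean, a variance and an amplitude-spectrum-area value; the band limits and the exact IAMSA formula are our own assumptions by analogy with the usual AMSA definition, not the paper's specification.

        import numpy as np

        def impedance_parameters(v, i, fs, band=(2.0, 48.0), eps=1e-9):
            """Illustrative (assumption-laden) computation of impedance variables
            from sampled shock voltage v(t) and current i(t):
              Z_M   - mean of the instantaneous impedance v/i over the shock,
              ICIV  - variance of that impedance trace,
              IAMSA - amplitude spectrum area of the impedance trace, taken here as
                      sum(|Z(f)| * f) over a band (assumed, by analogy with AMSA).
            """
            z = v / (i + eps)
            z_m = z.mean()
            iciv = z.var()
            spec = np.abs(np.fft.rfft(z - z_m))
            freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
            sel = (freqs >= band[0]) & (freqs <= band[1])
            iamsa = np.sum(spec[sel] * freqs[sel])
            return z_m, iciv, iamsa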

  9. 3D frequency-domain ultrasound waveform tomography breast imaging

    NASA Astrophysics Data System (ADS)

    Sandhu, Gursharan Yash; West, Erik; Li, Cuiping; Roy, Olivier; Duric, Neb

    2017-03-01

    Frequency-domain ultrasound waveform tomography is a promising method for the visualization and characterization of breast disease. It has previously been shown to accurately reconstruct the sound speed distributions of breasts of varying densities. The reconstructed images show detailed morphological and quantitative information that can help differentiate different types of breast disease including benign and malignant lesions. The attenuation properties of an ex vivo phantom have also been assessed. However, the reconstruction algorithms assumed a 2D geometry while the actual data acquisition process was not. Although clinically useful sound speed images can be reconstructed assuming this mismatched geometry, artifacts from the reconstruction process exist within the reconstructed images. This is especially true for registration across different modalities and when the 2D assumption is violated. For example, this happens when a patient's breast is rapidly sloping. It is also true for attenuation imaging where energy lost or gained out of the plane gets transformed into artifacts within the image space. In this paper, we will briefly review ultrasound waveform tomography techniques, give motivation for pursuing the 3D method, discuss the 3D reconstruction algorithm, present the results of 3D forward modeling, show the mismatch that is induced by the violation of 3D modeling via numerical simulations, and present a 3D inversion of a numerical phantom.

  10. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI, which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs for spatial model resolution of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward-code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.

  11. Density reconstruction in multiparameter elastic full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Sun, Min'ao; Yang, Jizhong; Dong, Liangguo; Liu, Yuzhu; Huang, Chao

    2017-12-01

    Elastic full-waveform inversion (EFWI) is a quantitative data fitting procedure that recovers multiple subsurface parameters from multicomponent seismic data. As density is involved in addition to P- and S-wave velocities, the multiparameter EFWI suffers from more serious tradeoffs. In addition, compared with P- and S-wave velocities, the misfit function is less sensitive to density perturbation. Thus, a robust density reconstruction remains a difficult problem in multiparameter EFWI. In this paper, we develop an improved scattering-integral-based truncated Gauss-Newton method to simultaneously recover P- and S-wave velocities and density in EFWI. In this method, the inverse Gauss-Newton Hessian has been estimated by iteratively solving the Gauss-Newton equation with a matrix-free conjugate gradient algorithm. Therefore, it is able to properly handle the parameter tradeoffs. To give a detailed illustration of the tradeoffs between P- and S-wave velocities and density in EFWI, wavefield-separated sensitivity kernels and the Gauss-Newton Hessian are numerically computed, and their distribution characteristics are analyzed. Numerical experiments on a canonical inclusion model and a modified SEG/EAGE Overthrust model have demonstrated that the proposed method can effectively mitigate the tradeoff effects, and improve multiparameter gradients. Thus, a high convergence rate and an accurate density reconstruction can be achieved.
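
    The heart of such a scheme is solving the Gauss-Newton system H dm = -g with a matrix-free conjugate-gradient solver, in which only Hessian-vector products are ever formed. The sketch below illustrates this step for a generic Hessian-vector callback and a flattened multiparameter gradient; the callback, operator sizes and the toy usage are our own placeholders, not the authors' implementation.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def gauss_newton_step(hessian_vec, grad, n, maxiter=20):
            """Truncated Gauss-Newton model update via matrix-free conjugate gradients.

            hessian_vec(x) must return the Gauss-Newton Hessian-vector product H @ x
            (in FWI this costs extra wavefield simulations rather than forming H);
            grad is the multiparameter gradient, flattened to length n. Solving
            H dm = -g with only a few CG iterations acts as the truncation and
            helps untangle parameter trade-offs.
            """
            H = LinearOperator((n, n), matvec=hessian_vec, dtype=float)
            dm, info = cg(H, -grad, maxiter=maxiter)
            return dm

        # Toy usage with an explicit matrix standing in for the Hessian operator:
        if __name__ == "__main__":
            A = np.diag([4.0, 2.0, 1.0])
            print(gauss_newton_step(lambda x: A @ x, np.array([1.0, 1.0, 1.0]), 3))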

  12. Towards seismic waveform inversion of long-offset Ocean-Bottom Seismic data for deep crustal imaging offshore Western Australia

    NASA Astrophysics Data System (ADS)

    Monnier, S.; Lumley, D. E.; Kamei, R.; Goncharov, A.; Shragge, J. C.

    2016-12-01

    Ocean Bottom Seismic (OBS) datasets have become increasingly used in recent years to develop high-resolution, wavelength-scale P-wave velocity models of the lithosphere from waveform inversion, due to their recording of long-offset transmitted phases. New OBS surveys evolve towards novel acquisition geometries involving longer offsets (several hundreds of km) and broader frequency content (1-100 Hz), while receiver sampling often remains sparse (several km). Therefore, it is critical to assess the effects of such geometries on the eventual success and resolution of waveform inversion velocity models. In this study, we investigate the feasibility of waveform inversion on the Bart 2D OBS profile acquired offshore Western Australia, to investigate regional crustal and Moho structures. The dataset features 14 broadband seismometers (0.01-100 Hz) from AuScope's national OBS fleet, offsets in excess of 280 km, and sparse receiver sampling (18 km). We perform our analysis in four stages: (1) field data analysis, (2) 2D P-wave velocity model building, (3) synthetic data modelling, and (4) waveform inversion. Data exploration shows high-quality active-source signal down to 2 Hz, and usable first arrivals at offsets greater than 100 km. The background velocity model is constructed by combining crustal and Moho information from continental reference models (e.g., AuSREM, AusMoho). These low-resolution studies suggest a crustal thickness of 20-25 km along our seismic line and constitute a starting point for synthetic modelling and inversion. We perform synthetic 2D time-domain modelling to: (1) evaluate the misfit between synthetic and field data within the usable frequency band (2-10 Hz); (2) validate our velocity model; and (3) observe the effects of the sparse OBS interval on data quality. Finally, we apply 2D acoustic frequency-domain waveform inversion to the synthetic data to generate velocity model updates. The inverted model is compared to the reference model to investigate the improved crustal resolution and Moho boundary delineation that could be realized using waveform inversion, and to evaluate the effects of the acquisition parameters. The inversion strategies developed through the synthetic tests will help the subsequent inversion of sparse, long-offset OBS field data.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu; Gao, Kai; Huang, Lianjie

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a 2D-line seismic dataset acquired at Eleven-Mile Canyon, located in the southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.
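    As an illustration of the kind of regularization term mentioned in this record, the sketch below evaluates a standard epsilon-smoothed isotropic total-variation functional and its gradient for a 2D model; the paper itself uses a modified TV scheme whose details are not reproduced here, and the edge handling below (periodic wrap) is a simplification.

      # Smoothed isotropic total variation of a 2D model and its gradient, the kind of
      # edge-preserving term added to a waveform-inversion misfit for fracture-zone imaging.
      import numpy as np

      def tv_value_and_grad(m, eps=1e-6):
          """m: 2D model array. Returns TV(m) and dTV/dm (periodic-wrap boundary approximation)."""
          dx = np.diff(m, axis=0, append=m[-1:, :])   # forward differences, replicated edge
          dz = np.diff(m, axis=1, append=m[:, -1:])
          mag = np.sqrt(dx**2 + dz**2 + eps)
          value = mag.sum()
          px, pz = dx / mag, dz / mag
          # negative divergence of the normalized gradient field gives dTV/dm
          div = (px - np.roll(px, 1, axis=0)) + (pz - np.roll(pz, 1, axis=1))
          return value, -div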

  14. Modeling of an electrohydraulic lithotripter with the KZK equation.

    PubMed

    Averkiou, M A; Cleveland, R O

    1999-07-01

    The acoustic pressure field of an electrohydraulic extracorporeal shock wave lithotripter is modeled with a nonlinear parabolic wave equation (the KZK equation). The model accounts for diffraction, nonlinearity, and thermoviscous absorption. A numerical algorithm for solving the KZK equation in the time domain is used to model sound propagation from the mouth of the ellipsoidal reflector of the lithotripter. Propagation within the reflector is modeled with geometrical acoustics. It is shown that nonlinear distortion within the ellipsoidal reflector can play an important role for certain parameters. Calculated waveforms are compared with waveforms measured in a clinical lithotripter and good agreement is found. It is shown that the spatial location of the maximum negative pressure occurs pre-focally, which suggests that the strongest cavitation activity will also be in front of the focus. Propagation of shock waves from a lithotripter with a pressure-release reflector is also considered; because of nonlinear propagation, the focal waveform is not simply the inverted version of the rigid-reflector waveform. Results for propagation through tissue are presented; the waveforms are similar to those predicted in water, except that the higher absorption in tissue decreases the peak amplitude and lengthens the rise time of the shock.
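    For reference, a commonly quoted form of the KZK equation (propagation coordinate z, retarded time τ = t − z/c₀) combines the three effects listed in this abstract; the notation below (c₀ sound speed, ρ₀ density, δ diffusivity of sound, β coefficient of nonlinearity) follows standard usage rather than the paper itself:

      \[
      \frac{\partial^2 p}{\partial z\,\partial \tau}
        = \frac{c_0}{2}\,\nabla_\perp^2 p
        + \frac{\delta}{2c_0^3}\,\frac{\partial^3 p}{\partial \tau^3}
        + \frac{\beta}{2\rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial \tau^2},
      \]

      where the right-hand-side terms account for diffraction, thermoviscous absorption, and nonlinearity, respectively.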

  15. Stochastic Seismic Inversion and Migration for Offshore Site Investigation in the Northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Son, J.; Medina-Cetina, Z.

    2017-12-01

    We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock profiles beneath the seafloor. The shallow sediment layers form naturally heterogeneous formations that may cause marine landslides or foundation failures of underwater infrastructure. We chose quasi-Newton and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on a finite-difference method with absorbing boundary conditions, implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data for an offshore ground model that contains shallow artificial target profiles of geomaterials beneath the seafloor. We apply seismic migration processing and generate a Voronoi tessellation in the two-dimensional space domain to improve the computational efficiency of the stratigraphic velocity model reconstruction. We then report on the details of a field-data implementation, which shows the complex geologic structures in the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise to improve the subsurface imaging of ground models and the computational efficiency required for full waveform inversion. We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
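    To make the stochastic branch of the comparison concrete, here is a minimal simulated-annealing loop for minimizing a waveform misfit over a layered model; the forward operator, perturbation scale and cooling schedule are placeholders and not the study's actual settings.

      # Simulated annealing applied to a misfit phi(m) = ||d_obs - F(m)||^2 over model vector m.
      import numpy as np

      def simulated_annealing(phi, m0, n_iter=5000, step=50.0, t0=1.0, cooling=0.999, rng=None):
          rng = rng or np.random.default_rng(0)
          m, best = m0.copy(), m0.copy()
          e = e_best = phi(m0)
          temp = t0
          for _ in range(n_iter):
              trial = m + step * rng.standard_normal(m.shape)   # random model perturbation
              e_trial = phi(trial)
              # accept downhill moves always, uphill moves with Boltzmann probability
              if e_trial < e or rng.random() < np.exp(-(e_trial - e) / temp):
                  m, e = trial, e_trial
                  if e < e_best:
                      best, e_best = m.copy(), e
              temp *= cooling                                   # geometric cooling schedule
          return best, e_best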

  16. 3D Acoustic Full Waveform Inversion for Engineering Purpose

    NASA Astrophysics Data System (ADS)

    Lim, Y.; Shin, S.; Kim, D.; Kim, S.; Chung, W.

    2017-12-01

    Seismic waveform inversion is one of the most actively researched seismic data processing techniques. In recent years, with an increase in marine development projects, seismic surveys are commonly conducted for engineering purposes; however, research on applying waveform inversion in this setting remains limited. Waveform inversion updates subsurface physical properties by minimizing the difference between modeled and observed data. It can be used to generate an accurate subsurface image; however, the technique consumes substantial computational resources. Its most compute-intensive step is the calculation of the gradient and Hessian values, and this aspect is even more significant in 3D than in 2D. This paper introduces a new method for calculating gradient and Hessian values in order to reduce the computational burden. In conventional waveform inversion, the calculation area covers all sources and receivers. In seismic surveys for engineering purposes, the number of receivers is limited, so it is inefficient to construct the Hessian and gradient for the entire region (Figure 1). To tackle this problem, we calculate the gradient and the Hessian for each shot only within the range spanned by that shot's source and receivers, and then sum these local contributions over all shots (Figure 2); a schematic sketch of this local accumulation is given below. In this paper, we demonstrate that reducing the calculation area of the Hessian and gradient for one shot reduces the overall amount of computation and therefore the computation time. Furthermore, it is shown that waveform inversion can be suitably applied for engineering purposes. In future research, we propose to ascertain an effective calculation range. This research was supported by the Basic Research Project (17-3314) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.
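    The sketch below illustrates the idea of restricting the per-shot gradient computation to a window around that shot's source-receiver range and then summing the windowed contributions into the full model grid; the array shapes, the padding, and the per-shot kernel routine are assumptions for illustration only.

      # Accumulate per-shot gradients computed only on a local window of the model grid.
      import numpy as np

      def accumulate_local_gradients(nx, nz, shots, local_gradient, pad=20):
          """shots: list of dicts with 'src_ix' and 'rcv_ix' horizontal grid indices."""
          grad = np.zeros((nx, nz))
          for shot in shots:
              ix_all = [shot["src_ix"]] + list(shot["rcv_ix"])
              i0 = max(min(ix_all) - pad, 0)            # left edge of local window
              i1 = min(max(ix_all) + pad, nx)           # right edge of local window
              # local_gradient is assumed to return an (i1 - i0, nz) array for this shot
              grad[i0:i1, :] += local_gradient(shot, i0, i1)
          return grad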

  17. Frequency domain, waveform inversion of laboratory crosswell radar data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Mazzella, Aldo T.; Horton, Robert J.; McKenna, Jason R.

    2010-01-01

    A new waveform inversion for crosswell radar is formulated in the frequency domain for a 2.5D model. The inversion simulates radar waves using the vector Helmholtz equation for electromagnetic waves. The objective function is minimized using a backpropagation method suitable for a 2.5D model. The inversion is tested by processing crosswell radar data collected in a laboratory tank. The estimated model is consistent with the known electromagnetic properties of the tank. The formulation for the 2.5D model can be extended to inversions of acoustic and elastic data.

  18. A new algorithm for three-dimensional joint inversion of body wave and surface wave data and its application to the Southern California plate boundary region

    NASA Astrophysics Data System (ADS)

    Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.

    2016-05-01

    We introduce a new algorithm for the joint inversion of body wave and surface wave data to obtain better 3-D P-wave (Vp) and S-wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward joint modeling of surface wave traveltime data together with body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by the two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied with both the double-difference tomography method using body wave arrival times and the ambient noise tomography method using Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.
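    Schematically, and with notation assumed here rather than taken from the paper, a joint inversion of this kind combines the two data sets in a single weighted misfit:

      \[
      \Phi(V_p, V_s) \;=\; w_b \,\bigl\| \mathbf{d}_b - G_b(V_p, V_s) \bigr\|_2^2
        \;+\; w_s \,\bigl\| \mathbf{d}_s - G_s(V_p, V_s) \bigr\|_2^2
        \;+\; \lambda\, R(V_p, V_s),
      \]

      where d_b and d_s are the body-wave arrival times and surface-wave traveltimes, G_b and G_s the corresponding forward operators (the latter sensitive to both Vs and, to a lesser degree, Vp), w_b and w_s relative data weights, and R a regularization term.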

  19. Numerical results for near surface time domain electromagnetic exploration: a full waveform approach

    NASA Astrophysics Data System (ADS)

    Sun, H.; Li, K.; Li, X., Sr.; Liu, Y., Sr.; Wen, J., Sr.

    2015-12-01

    Time domain or transient electromagnetic (TEM) surveys, including airborne, semi-airborne and ground types, play important roles in applications such as geological surveys, groundwater/aquifer assessment [Meju et al., 2000; Cox et al., 2010], metal ore exploration [Yang and Oldenburg, 2012], prediction of water-bearing structures in tunnels [Xue et al., 2007; Sun et al., 2012], UXO exploration [Pasion et al., 2007; Gasperikova et al., 2009], etc. The common practice is to introduce a current into a transmitting (Tx) loop and acquire the induced electromagnetic field after the current is cut off [Zhdanov and Keller, 1994]. The current waveforms differ between instruments: a rectangular waveform is the most widely used excitation current, especially in ground TEM, while triangular and half-sine waveforms are commonly used in airborne and semi-airborne TEM investigations. In most instruments, only the off-time responses are acquired and used in later analysis and data inversion. Very few airborne instruments acquire the on-time and off-time responses together, and even those that acquire on-time data usually do not use them in the interpretation. This abstract presents a novel full-waveform time domain electromagnetic method and our recent modeling results. The benefit comes from our new algorithm for modeling full-waveform time domain electromagnetic problems: we introduce the current density into Maxwell's equations as the transmitting source. This approach allows arbitrary waveforms, such as triangular, half-sine or trapezoidal waves, or sampled records from the equipment, to be used in the modeling. Here, we simulate the build-up and the induced diffusion of the electromagnetic field in the earth. The traditional time domain electromagnetic response, with pure secondary fields, can also be extracted from our modeling results. The real responses over the full time range excited by a loop source can be calculated using the algorithm. As examples, we analyze the full-time-gate responses of a homogeneous half-space and two-layered models with a half-sine current waveform. We find that the on-time responses are quite sensitive to resistivity or depth changes. The results show the potential of full-waveform responses in time domain electromagnetic surveys.
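    One common way to write this (standard notation, not taken from the abstract) is to include the transmitter current density J_s(t) as a source term in Maxwell's curl equations:

      \[
      \nabla \times \mathbf{H} \;=\; \sigma \mathbf{E} \;+\; \varepsilon \frac{\partial \mathbf{E}}{\partial t} \;+\; \mathbf{J}_s(t),
      \qquad
      \nabla \times \mathbf{E} \;=\; -\,\mu \frac{\partial \mathbf{H}}{\partial t},
      \]

      so that J_s(t) can carry an arbitrary transmitter waveform (rectangular, triangular, half-sine, trapezoidal, or a sampled record) and both on-time and off-time responses are modeled.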

  20. Imaging Crustal Structure with Waveform and HV Ratio of Body-wave Receiver Function

    NASA Astrophysics Data System (ADS)

    Chong, J.; Chu, R.; Ni, S.; Meng, Q.; Guo, A.

    2017-12-01

    It is known that the receiver function places only weak constraints on absolute velocity, and joint inversion of receiver functions and surface wave dispersion has been widely applied to reduce the non-uniqueness of velocity and interface depth. However, some studies indicate that the receiver function itself is capable of determining the absolute shear wave velocity. In this study, we propose to measure the receiver function HV ratio, which takes advantage of the amplitude information of the radial and vertical receiver functions to constrain the shear-wave velocity. Numerical analysis indicates that the receiver function HV ratio is sensitive to the average shear wave velocity in the depth range it samples and can help to reduce the non-uniqueness of receiver function waveform inversion. A joint inversion scheme has been developed, and both synthetic tests and a real-data application prove the feasibility of the joint inversion. The method has been applied to the dense seismic array of the ChinArray program deployed in SE Tibet from August 2011 to August 2012 (ChinArray-Himalaya, 2011). The receiver function HV ratio measurements reveal the lateral variation of the tectonics in the study region, and the main features of the velocity structure imaged by the new joint inversion method are consistent with previous studies. Keywords: receiver function HV ratio, receiver function waveform inversion, crustal structure. References: ChinArray-Himalaya, 2011. China Seismic Array waveform data of Himalaya Project. Institute of Geophysics, China Earthquake Administration, doi:10.12001/ChinArray.Data.Himalaya. Chong, J., Chu, R., Ni, S., Meng, Q. & Guo, A., 2017. Receiver Function HV Ratio, a New Measurement for Reducing Non-uniqueness of Receiver Function Waveform Inversion (under revision).

  1. Landquake dynamics inferred from seismic source inversion: Greenland and Sichuan events of 2017

    NASA Astrophysics Data System (ADS)

    Chao, W. A.

    2017-12-01

    In June 2017, two catastrophic landquake events occurred in Greenland and Sichuan. The Greenland event led to a tsunami hazard in the small town of Nuugaarsiaq. The Sichuan landquake struck a town and resulted in more than 100 deaths. Both events generated strong seismic signals recorded by the real-time global seismic network. I adopt an inversion algorithm to derive the landquake force time history (LFH) using the long-period waveforms, from which the landslide volume (~76 million m³) can be rapidly estimated, facilitating tsunami-wave modeling for early warning purposes. Based on an integrated approach involving tsunami forward simulation and seismic waveform inversion, this study has significant implications for issuing actionable warnings before hazardous tsunami waves strike populated areas. A two-single-force (SF) mechanism (two-block model) yields the best explanation for the Sichuan event, which suggests that a secondary event (seismically inferred volume: ~8.2 million m³) may have been mobilized by the impact of collapsing mass from the initial rock avalanche (~5.8 million m³), likely causing a catastrophic disaster. The later source, with a force magnitude of 0.9967 × 10¹¹ N, occurred 70 seconds after the first mass movement. In contrast, the first event has a smaller force magnitude of 0.8116 × 10¹¹ N. In conclusion, seismically inferred physical parameters will substantially contribute to improving our understanding of landquake source mechanisms and to mitigating similar hazards in other parts of the world.

  2. Frozen Gaussian approximation for 3D seismic tomography

    NASA Astrophysics Data System (ADS)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields so that FGA can be applied conveniently; with this reformulation, one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on local fast Fourier transforms, which greatly improves the speed of reconstruction in the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.
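    The step of convolving a computed Green's function with the source time function can be illustrated with a small FFT-based sketch (not the authors' code); the array names and sampling convention are assumptions.

      # Build a wavefield trace from a Green's function g(t) and a source time function s(t).
      import numpy as np
      from scipy.signal import fftconvolve

      def wavefield_from_green(green, source_time_function, dt):
          """green, source_time_function: 1D arrays sampled at interval dt (seconds)."""
          # full FFT-based convolution, truncated to the Green's-function time window
          return fftconvolve(green, source_time_function, mode="full")[: len(green)] * dt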

  3. Studies of earthquakes and microearthquakes using near-field seismic and geodetic observations

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas Bartholomew

    The Centroid-Moment Tensor (CMT) method allows an optimal point-source description of an earthquake to be recovered from a set of seismic observations, and, for over 30 years, has been routinely applied to determine the location and source mechanism of teleseismically recorded earthquakes. The CMT approach is, however, entirely general: any measurements of seismic displacement fields could, in theory, be used within the CMT inversion formulation, so long as the treatment of the earthquake as a point source is valid for those data. We modify the CMT algorithm to enable a variety of near-field seismic observables to be inverted for the source parameters of an earthquake. The first two data types that we implement are provided by Global Positioning System receivers operating at sampling frequencies of 1 Hz and above. When deployed in the seismic near field, these instruments may be used as long-period strong-motion seismometers, recording displacement time series that include the static offset. We show that both the displacement waveforms, and static displacements alone, can be used to obtain CMT solutions for moderate-magnitude earthquakes, and that performing analyses using these data may be useful for earthquake early warning. We also investigate using waveform recordings - made by conventional seismometers deployed at the surface, or by geophone arrays placed in boreholes - to determine CMT solutions, and their uncertainties, for microearthquakes induced by hydraulic fracturing. A similar waveform inversion approach could be applied in many other settings where induced seismicity and microseismicity occur.

  4. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information contained in recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint-state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data are simulated by solving the wave equation in the entire globe using a spectral-element method. In order to ensure inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and there is a very large volume of data to be read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on ML models such as neural networks, support vector machines or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
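    The window-classification idea can be sketched as follows; the feature set, the placeholder data, and the choice of a support vector machine are illustrative assumptions and not the project's final design.

      # Train a classifier that labels candidate misfit windows as usable (1) or unusable (0),
      # given features such as cross-correlation coefficient, time shift and amplitude ratio.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))                    # placeholder window features
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # placeholder analyst labels

      clf = SVC(kernel="rbf", C=1.0).fit(X[:150], y[:150])   # train on labeled windows
      print("held-out accuracy:", clf.score(X[150:], y[150:]))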

  5. Extracting Low-Frequency Information from Time Attenuation in Elastic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong

    2017-03-01

    Low-frequency information is crucial for recovering the background velocity, but the lack of low-frequency information in field data makes inversion impractical without accurate initial models. Laplace-Fourier domain waveform inversion can recover a smooth model from real data without low-frequency information, which can then be used as an ideal starting model for subsequent inversion. In general, it also starts with low frequencies and includes higher frequencies at later inversion stages, the difference being that its ultralow-frequency information comes from the Laplace-Fourier domain. Meanwhile, a direct implementation of the Laplace-transformed wavefield using frequency-domain inversion is also very convenient. However, because broad frequency bands are often used in pure time-domain waveform inversion, it is difficult to extract the wavefields dominated by low frequencies in this case. In this paper, low-frequency components are constructed by introducing time attenuation into the recorded residuals, and the rest of the method is identical to traditional time-domain inversion. Time windowing and frequency filtering are also applied to mitigate the ambiguity of the inverse problem. Therefore, we can start at low frequencies and move to higher frequencies. The experiments show that the proposed method can achieve a good inversion result with a linear initial model and records lacking low-frequency information.
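    A minimal sketch of the time-attenuation idea described here: multiplying the data residuals by a decaying exponential exp(−σt) emphasizes early times and effectively shifts the spectral content toward low frequencies, in the Laplace-Fourier spirit. The damping constants below are arbitrary illustrative values, not the paper's settings.

      # Apply exponential time damping to waveform residuals before back-propagation.
      import numpy as np

      def damp_residuals(residuals, dt, sigma):
          """residuals: (n_traces, n_samples) array; dt: sample interval (s); sigma: damping (1/s)."""
          t = np.arange(residuals.shape[-1]) * dt
          return residuals * np.exp(-sigma * t)      # broadcasts over traces

      # Typical multiscale usage: strong damping first, then relax it in later stages, e.g.
      # for stage_sigma in (8.0, 4.0, 2.0, 0.0):
      #     damped = damp_residuals(residuals, dt, stage_sigma)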

  6. Pulsed excitation terahertz tomography - multiparametric approach

    NASA Astrophysics Data System (ADS)

    Lopato, Przemyslaw

    2018-04-01

    This article deals with pulsed-excitation terahertz computed tomography (THz CT). In contrast to x-ray CT, where just a single value (pixel) is obtained, in the case of pulsed THz CT a time signal is acquired for each position. The recorded waveform can be parametrized: many features carrying various information about the examined structure can be calculated. Based on this, a multiparametric reconstruction algorithm is proposed: an inverse-Radon-transform-based reconstruction is applied to each parameter, and the results are then fused. The performance of the proposed imaging scheme was experimentally verified using dielectric phantoms.
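    A schematic sketch of the multiparametric idea (parameter names and the simple averaging fusion are assumptions, not the paper's implementation): each waveform-derived feature yields its own sinogram, each sinogram is reconstructed with a filtered back-projection, and the per-parameter images are fused.

      # Reconstruct one image per waveform feature with the inverse Radon transform, then fuse.
      import numpy as np
      from skimage.transform import iradon

      def multiparametric_reconstruction(sinograms, angles_deg):
          """sinograms: dict of {feature_name: 2D array (n_detectors, n_angles)}."""
          images = []
          for sino in sinograms.values():
              img = iradon(sino, theta=angles_deg, filter_name="ramp")
              img = (img - img.min()) / (img.ptp() + 1e-12)   # normalize before fusion
              images.append(img)
          return np.mean(images, axis=0)                      # naive fusion by averaging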

  7. Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Simons, F. J.; Bozdag, E.

    2014-12-01

    We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor in mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map heterogeneities of great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model, 100 m deep, is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
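    The multiscale idea can be sketched with a discrete wavelet transform: decompose observed and synthetic seismograms, fit only the coarse scales first, and add finer scales in later stages. The wavelet family and decomposition depth below are illustrative; the study itself explores these choices.

      # Keep only the approximation plus the coarsest detail levels of a seismogram.
      import numpy as np
      import pywt

      def coarse_view(trace, wavelet="db4", level=5, keep=2):
          """Zero all but the 'keep' coarsest detail levels before reconstructing."""
          coeffs = pywt.wavedec(trace, wavelet, level=level)
          for i in range(1 + keep, len(coeffs)):
              coeffs[i] = np.zeros_like(coeffs[i])
          return pywt.waverec(coeffs, wavelet)[: len(trace)]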

  8. A Modified Subpulse SAR Processing Procedure Based on the Range-Doppler Algorithm for Synthetic Wideband Waveforms

    PubMed Central

    Lim, Byoung-Gyun; Woo, Jea-Choon; Lee, Hee-Young; Kim, Young-Soo

    2008-01-01

    Synthetic wideband waveforms (SWW) combine a stepped frequency CW waveform and a chirp signal waveform to achieve high range resolution without requiring a large bandwidth or the consequent very high sampling rate. If an efficient algorithm like the range-Doppler algorithm (RDA) is used to acquire the SAR images for synthetic wideband signals, errors occur due to approximations, so the images may not show the best possible result. This paper proposes a modified subpulse SAR processing algorithm for synthetic wideband signals which is based on RDA. An experiment with an automobile-based SAR system showed that the proposed algorithm is quite accurate with a considerable improvement in resolution and quality of the obtained SAR image. PMID:27873984

  9. Designing Waveform Sets with Good Correlation and Stopband Properties for MIMO Radar via the Gradient-Based Method

    PubMed Central

    Tang, Liang; Zhu, Yongfeng; Fu, Qiang

    2017-01-01

    Waveform sets with good correlation and/or stopband properties have received extensive attention and been widely used in multiple-input multiple-output (MIMO) radar. In this paper, we aim at designing unimodular waveform sets with good correlation and stopband properties. To formulate the problem, we construct two criteria to measure the correlation and stopband properties and then establish an unconstrained problem in the frequency domain. After deducing the phase gradient and the step size, an efficient gradient-based algorithm with monotonicity is proposed to minimize the objective function directly. For the design problem without considering the correlation weights, we develop a simplified algorithm, which only requires a few fast Fourier transform (FFT) operations and is more efficient. Because both of the algorithms can be implemented via the FFT operations and the Hadamard product, they are computationally efficient and can be used to design waveform sets with a large waveform number and waveform length. Numerical experiments show that the proposed algorithms can provide better performance than the state-of-the-art algorithms in terms of the computational complexity. PMID:28468308

  10. Designing Waveform Sets with Good Correlation and Stopband Properties for MIMO Radar via the Gradient-Based Method.

    PubMed

    Tang, Liang; Zhu, Yongfeng; Fu, Qiang

    2017-05-01

    Waveform sets with good correlation and/or stopband properties have received extensive attention and been widely used in multiple-input multiple-output (MIMO) radar. In this paper, we aim at designing unimodular waveform sets with good correlation and stopband properties. To formulate the problem, we construct two criteria to measure the correlation and stopband properties and then establish an unconstrained problem in the frequency domain. After deducing the phase gradient and the step size, an efficient gradient-based algorithm with monotonicity is proposed to minimize the objective function directly. For the design problem without considering the correlation weights, we develop a simplified algorithm, which only requires a few fast Fourier transform (FFT) operations and is more efficient. Because both of the algorithms can be implemented via the FFT operations and the Hadamard product, they are computationally efficient and can be used to design waveform sets with a large waveform number and waveform length. Numerical experiments show that the proposed algorithms can provide better performance than the state-of-the-art algorithms in terms of the computational complexity.
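    As a small numerical illustration of the design criterion itself (not the paper's gradient-based solver), the integrated sidelobe level of a unimodular waveform set, i.e. the energy in all aperiodic autocorrelation sidelobes and cross-correlation lags, can be evaluated with FFTs as follows.

      # Evaluate the ISL of a set of M unimodular waveforms of length N.
      import numpy as np

      def integrated_sidelobe_level(X):
          """X: (M, N) complex array of unimodular waveforms."""
          M, N = X.shape
          F = np.fft.fft(X, 2 * N, axis=1)                 # zero-padded spectra
          isl = 0.0
          for m1 in range(M):
              for m2 in range(M):
                  r = np.fft.ifft(F[m1] * np.conj(F[m2]))  # aperiodic correlations at all lags
                  power = np.abs(r) ** 2
                  if m1 == m2:
                      power[0] -= N ** 2                   # remove the mainlobe |r(0)|^2 = N^2
                  isl += power.sum()
          return isl

      # Example: random-phase unimodular set with M = 4 waveforms of length N = 64
      rng = np.random.default_rng(0)
      X = np.exp(1j * 2 * np.pi * rng.random((4, 64)))
      print(integrated_sidelobe_level(X))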

  11. Super-resolution processing for multi-functional LPI waveforms

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao

    2014-05-01

    Super-resolution (SR) is a radar processing technique closely related to pulse compression (the correlation receiver). Many super-resolution algorithms have been developed to improve range resolution and reduce sidelobe contamination. Traditionally, the waveforms used for SR have been either phase-coded (such as the LKP3 code or Barker code) or frequency modulated (chirp, or nonlinear frequency modulation). There is, however, an important class of waveforms that are either random in nature (such as random noise waveforms) or randomly modulated for multi-function operation (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility for using them in actual sensing missions. SR also usually has the great advantage that the final output (as an estimate of ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper a general introduction to SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are applied to typical LPI waveforms, and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.

  12. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case, with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions is conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
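    The core encoding step can be sketched as follows: individual shot gathers and their sources are multiplied by random weights and summed into one super-shot, so each nonlinear iteration needs only one forward/adjoint simulation instead of one per shot. Random ±1 encoding and the array shapes are assumptions; the paper's exact encoding scheme may differ, and the sketch inherits the fixed-spread acquisition assumption noted in the abstract.

      # Build an encoded super-shot from individual shot gathers and source wavelets.
      import numpy as np

      def encode_supershot(shot_gathers, source_wavelets, rng=None):
          """shot_gathers: (n_shots, n_receivers, n_time); source_wavelets: (n_shots, n_time)."""
          rng = rng or np.random.default_rng()
          codes = rng.choice([-1.0, 1.0], size=shot_gathers.shape[0])     # one weight per shot
          super_data = np.tensordot(codes, shot_gathers, axes=1)          # encoded, summed data
          super_source = np.tensordot(codes, source_wavelets, axes=1)     # matching encoded source
          return super_source, super_data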

  13. Cognitive Nonlinear Radar

    DTIC Science & Technology

    2013-01-01

    Intelligently selecting waveform parameters using adaptive algorithms. The adaptive algorithms optimize the waveform parameters based on (1) the EM ... the environment. Subject terms: cognitive radar, adaptive sensing, spectrum sensing, multi-objective optimization, genetic algorithms, machine ...

  14. Full Waveform Inversion of Diving & Reflected Waves based on Scale Separation for Velocity and Impedance Imaging

    NASA Astrophysics Data System (ADS)

    Brossier, Romain; Zhou, Wei; Operto, Stéphane; Virieux, Jean

    2015-04-01

    Full Waveform Inversion (FWI) is an appealing method for quantitative high-resolution subsurface imaging (Virieux et al., 2009). For crustal-scale exploration from surface seismic, FWI generally succeeds in recovering a broad band of wavenumbers in the shallow part of the targeted medium, taking advantage of the broad scattering angles provided by both reflected and diving waves. In contrast, deeper targets are often only illuminated by short-spread reflections, which favor the reconstruction of the short wavelengths at the expense of the longer ones, leading to a possible notch in the intermediate part of the wavenumber spectrum. To update the velocity macromodel from reflection data, image-domain strategies (e.g., Symes & Carazzone, 1991) aim to maximize a semblance criterion in the migrated domain. Alternatively, recent data-domain strategies (e.g., Xu et al., 2012; Ma & Hale, 2013; Brossier et al., 2014), called Reflection FWI (RFWI) and inspired by Chavent et al. (1994), rely on a scale separation between the velocity macromodel and prior knowledge of the reflectivity to emphasize the transmission regime in the sensitivity kernel of the inversion. However, all these strategies focus on reflected waves only, discarding the low-wavenumber information carried by diving waves. With the current development of very long-offset and wide-azimuth acquisitions, a significant part of the recorded energy is provided by diving waves and subcritical reflections, and high-resolution tomographic methods should take advantage of all types of waves. In this presentation, we will first review the issues of classical FWI when applied to reflected waves and how RFWI is able to retrieve the long wavelengths of the model. We then propose a unified formulation of FWI (Zhou et al., 2014) to update the low wavenumbers of the velocity model by the joint inversion of diving and reflected arrivals, while the impedance model is updated from reflected waves only. An alternate inversion of the high-wavenumber impedance model and the low-wavenumber velocity model is performed to iteratively improve the subsurface models. References: Brossier, R., Operto, S. & Virieux, J., 2014. Velocity model building from seismic reflection data by full waveform inversion, Geophysical Prospecting, doi:10.1111/1365-2478.12190. Chavent, G., Clément, F. & Gomez, S., 1994. Automatic determination of velocities via migration-based traveltime waveform inversion: A synthetic data example, SEG Technical Program Expanded Abstracts 1994, pp. 1179-1182. Ma, Y. & Hale, D., 2013. Wave-equation reflection traveltime inversion with dynamic warping and full waveform inversion, Geophysics, 78(6), R223-R233. Symes, W.W. & Carazzone, J.J., 1991. Velocity inversion by differential semblance optimization, Geophysics, 56, 654-663. Virieux, J. & Operto, S., 2009. An overview of full waveform inversion in exploration geophysics, Geophysics, 74(6), WCC1-WCC26. Xu, S., Wang, D., Chen, F., Lambaré, G. & Zhang, Y., 2012. Inversion on reflected seismic wave, SEG Technical Program Expanded Abstracts 2012, pp. 1-7. Zhou, W., Brossier, R., Operto, S. & Virieux, J., 2014. Acoustic multiparameter full-waveform inversion through a hierarchical scheme, SEG Technical Program Expanded Abstracts 2014, pp. 1249-1253.

  15. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of the waveforms. As the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and use inherent algebraic structures in the objective functions to rewrite them in quartic form and, in the case of WISL minimization, to derive an alternative quartic form that allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.

  16. Tsunami waveform inversion of the 2007 Bengkulu, southern Sumatra, earthquake

    NASA Astrophysics Data System (ADS)

    Fujii, Y.; Satake, K.

    2008-09-01

    We performed tsunami waveform inversions for the Bengkulu, southern Sumatra, earthquake on September 12, 2007 (Mw 8.4 by USGS). The tsunami was recorded at many tide gauge stations around the Indian Ocean and by a DART system in the deep ocean. The observed tsunami records indicate that the amplitudes were less than several tens of centimeters at most stations, around 1 m at Padang, the nearest station to the source, and a few centimeters at the DART station. For the tsunami waveform inversions, we adopted 20-, 15- and 10-subfault models. The tsunami waveforms computed from the estimated slip distributions explain the observed waveforms at most stations, regardless of the subfault model. We found that large slips were consistently estimated at the deeper part (>24 km) of the fault plane, located more than 100 km from the trench axis. The largest slips of 6-9 m were located about 100-200 km northwest of the epicenter. The deep slips may have contributed to the relatively small tsunami for its earthquake size. The total seismic moment is calculated as 4.7 × 10²¹ N m (Mw = 8.4) for the 10-subfault model, our preferred model from a comparison of tsunami waveforms at Cocos and the DART station.

  17. High resolution tsunami inversion for 2010 Chile earthquake

    NASA Astrophysics Data System (ADS)

    Wu, T.-R.; Ho, T.-C.

    2011-12-01

    We investigate the feasibility of inverting high-resolution vertical seafloor displacement from tsunami waveforms. An inversion method named SUTIM (small unit tsunami inversion method) is developed to meet this goal. In addition to utilizing the conventional least-squares inversion, this paper also enhances the inversion resolution with a grid-shifting method. A smoothness constraint is adopted to gain stability. After a series of validation and performance tests, SUTIM is used to study the 2010 Chile earthquake. Based upon data quality and azimuthal distribution, we select tsunami waveforms from 6 GLOSS stations and 1 DART buoy record. In total, 157 sub-faults are utilized for the high-resolution inversion. The resolution reaches 10 sub-faults per wavelength. The result is compared with the distribution of the aftershocks and with the waveforms at each gauge location, with very good agreement. The inversion result shows that the source profile features a non-uniform distribution of seafloor displacement. The highly elevated seafloor is mainly concentrated in two areas: one north of the epicentre, between 34° S and 36° S; the other to the south, between 37° S and 38° S.
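    A compact sketch of the kind of linear inversion step described here: tsunami waveforms d are modeled as G m, where column j of G holds the waveform computed for unit displacement on sub-fault j, and a smoothing constraint is appended to stabilize the solution. G, d, the weight, and the crude non-negativity clip are placeholders, not SUTIM itself.

      # Smoothness-constrained least-squares inversion for sub-fault displacements.
      import numpy as np

      def invert_subfault_slip(G, d, smooth_weight=0.1):
          """G: (n_data, n_subfaults) Green's function matrix; d: (n_data,) observed waveforms."""
          n = G.shape[1]
          L = np.diff(np.eye(n), n=2, axis=0)          # 1D second-difference smoothness operator
          A = np.vstack([G, smooth_weight * L])
          b = np.concatenate([d, np.zeros(L.shape[0])])
          m, *_ = np.linalg.lstsq(A, b, rcond=None)
          return np.maximum(m, 0.0)                    # crude clip; a proper NNLS could be used instead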

  18. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
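    One simple proxy that random probing can deliver is a diagonal estimate of the Hessian from a handful of Hessian-model applications; the Hutchinson-style estimator below is my own illustration of that idea, whereas the paper derives richer point-spread diagnostics from the same ingredients.

      # Estimate the Hessian diagonal from matrix-free Hessian applications to random test models.
      import numpy as np

      def probe_hessian_diagonal(hessian_vec, n_model, n_probes=8, rng=None):
          rng = rng or np.random.default_rng(0)
          diag = np.zeros(n_model)
          for _ in range(n_probes):
              v = rng.choice([-1.0, 1.0], size=n_model)   # Rademacher test model
              diag += v * hessian_vec(v)                  # elementwise product accumulates E[v * Hv]
          return diag / n_probes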

  19. Infrasound Waveform Inversion and Mass Flux Validation from Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Yokoo, A.; Izbekov, P. E.; Lopez, T. M.; Prata, F.; Ahonen, P.; Kazahaya, R.; Nakamichi, H.; Iguchi, M.

    2015-12-01

    Recent advances in numerical wave propagation modeling and station coverage have permitted robust inversion of infrasound data from volcanic explosions. Complex topography and crater morphology have been shown to substantially affect the infrasound waveform, suggesting that homogeneous acoustic propagation assumptions are invalid. Infrasound waveform inversion provides an exciting tool to accurately characterize emission volume and mass flux from both volcanic and non-volcanic explosions. Mass flux, arguably the most sought-after parameter from a volcanic eruption, can be determined from the volume flux using infrasound waveform inversion if the volcanic flow is well-characterized. Thus far, infrasound-based volume and mass flux estimates have yet to be validated. In February 2015 we deployed six infrasound stations around the explosive Sakurajima Volcano, Japan for 8 days. Here we present our full waveform inversion method and volume and mass flux estimates of numerous high amplitude explosions using a high resolution DEM and 3-D Finite Difference Time Domain modeling. Application of this technique to volcanic eruptions may produce realistic estimates of mass flux and plume height necessary for volcanic hazard mitigation. Several ground-based instruments and methods are used to independently determine the volume, composition, and mass flux of individual volcanic explosions. Specifically, we use ground-based ash sampling, multispectral infrared imagery, UV spectrometry, and multigas data to estimate the plume composition and flux. Unique tiltmeter data from underground tunnels at Sakurajima also provides a way to estimate the volume and mass of each explosion. In this presentation we compare the volume and mass flux estimates derived from the different methods and discuss sources of error and future improvements.

  20. Full waveform inversion for ultrasonic flaw identification

    NASA Astrophysics Data System (ADS)

    Seidl, Robert; Rank, Ernst

    2017-02-01

    Ultrasonic nondestructive testing is concerned with detecting flaws inside components without causing physical damage. It is possible to detect flaws using ultrasound measurements, but usually no additional details about the flaw, such as position, dimension or orientation, are available. The information about these details is hidden in the recorded experimental signals. The idea of full waveform inversion is to adapt the parameters of an initial simulation model of the undamaged specimen by minimizing the discrepancy between these simulated signals and experimentally measured signals of the flawed specimen. Flaws in the structure are characterized by a change or deterioration in the material properties. Full waveform inversion is commonly applied in seismology on a much larger scale to infer mechanical properties of the earth. We propose to use acoustic full waveform inversion for structural parameters to visualize the interior of the component. The method is adapted to ultrasonic NDT by combining multiple similar experiments on the test component, as the typically small number of sensors is not sufficient for successful imaging. It is shown that the combination of simulations and multiple experiments can be used to detect flaws and their position, dimension and orientation in emulated simulation cases.

  1. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are kept completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be conveniently accessible. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications at different scales and geometries.

  2. Adaptive phase k-means algorithm for waveform classification

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance allows a variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain degree of waveform phase variation and is a good tool for seismic facies analysis.
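    A schematic sketch (my own illustration, not the authors' code) of a phase-adaptive distance for waveform clustering: each centroid is rotated through a set of constant phase shifts via its analytic signal, and the distance to a trace is the minimum misfit over those shifts. A k-means loop can then use this distance in place of the plain Euclidean one.

      # Phase-adaptive distance between a seismic trace and a cluster centroid.
      import numpy as np
      from scipy.signal import hilbert

      def phase_adaptive_distance(trace, centroid, n_phases=36):
          analytic = hilbert(centroid)                      # analytic signal of the centroid
          phases = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
          # rotate the centroid by each candidate constant phase and keep the best fit
          rotated = np.real(np.exp(1j * phases)[:, None] * analytic[None, :])
          return np.min(np.linalg.norm(rotated - trace[None, :], axis=1))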

  3. Imaging Faults in Carbonate Reservoir using Full Waveform Inversion and Reverse Time Migration of Walkaway VSP Data

    NASA Astrophysics Data System (ADS)

    Takam Takougang, E. M.; Bouzidi, Y.

    2016-12-01

    Multi-offset vertical seismic profile (walkaway VSP) data were collected in an oil field located in a shallow-water environment dominated by carbonate rocks, offshore the United Arab Emirates. The purpose of the survey was to provide structural information on the reservoir, around and away from the borehole. Five parallel lines were collected using an air gun at 25 m shot interval and 4 m source depth. A typical recording tool with 20 receivers spaced every 15.1 m, located in a deviated borehole with an angle varying between 0 and 24 degrees from the vertical direction, was used to record the data. The recording tool was deployed at different depths for each line, from 521 m to 2742 m depth. Smaller offsets were used for shallow receivers and larger offsets for deeper receivers. The lines were merged to form the input dataset for waveform tomography. The total length of the combined lines was 9 km, containing 1344 shots and 100 receivers in the borehole located half-way along the profile. Acoustic full waveform inversion was applied in the frequency domain to derive a high-resolution velocity model. The final velocity model, derived after inversion using the frequencies 5-40 Hz, showed good correlation with velocities estimated from vertical-incidence VSP and sonic logs, confirming the success of the inversion. The velocity model showed anomalously low values in areas that correlate with the known location of the hydrocarbon reservoir. Pre-stack depth reverse time migration was then applied using the final velocity model from waveform inversion and the up-going wavefield from the input data. The final estimated source signature from waveform inversion was used as the input source for reverse time migration. To save computational memory and time, every third shot was used during reverse time migration and the data were low-pass filtered to 30 Hz. Migration artifacts were attenuated using a second-order derivative filter. The final migration image shows good correlation with the waveform tomography velocity model and highlights a complex network of faults in the reservoir that could be useful in understanding fluid and hydrocarbon movements. This study shows that the combination of full waveform tomography and reverse time migration can provide high-resolution images that can enhance the interpretation and characterization of oil reservoirs.

  4. Full-waveform Inversion of Crosshole GPR Data Collected in Strongly Heterogeneous Chalk: Challenges and Pitfalls

    NASA Astrophysics Data System (ADS)

    Keskinen, Johanna; Looms, Majken C.; Nielsen, Lars; Klotzsche, Anja; van der Kruk, Jan; Moreau, Julien; Stemmerik, Lars; Holliger, Klaus

    2015-04-01

    Chalk is an important reservoir rock for hydrocarbons and for groundwater resources for many major cities. Therefore, this rock type has been extensively investigated using both geological and geophysical methods. Many applications of crosshole GPR tomography rely on the ray approximation and corresponding inversions of first-break traveltimes and/or maximum first-cycle amplitudes. Due to the inherent limitations associated with such approaches, the resulting models tend to be overly smooth and cannot adequately capture the small-scale heterogeneities. In contrast, full-waveform inversion uses all the information contained in the data and is able to provide significantly improved images. Here, we apply full-waveform inversion to crosshole GPR data to image the strong heterogeneity of the chalk related to changes in lithology and porosity. We have collected a crosshole tomography dataset in an old chalk quarry in Eastern Denmark. Based on core data (including plug samples and televiewer logging data) collected in our four ~15-m-deep boreholes and results from previous related studies, it is apparent that the studied chalk is strongly heterogeneous. The upper ~7 m consist of variable coarse-grained chalk layers with numerous flint nodules. The lower half of the studied section appears to be finer-grained and contains less flint; however, significant porosity variations are still detected in the lower half. In general, the water-saturated (water table depth ~2 m) chalk is characterized by high porosities, and thus low velocities and high attenuation, while the flint is essentially non-porous and has correspondingly high velocities and low attenuation. Together these characteristics form a strongly heterogeneous medium, which is challenging for full-waveform inversion to recover. Here, we address the importance of (i) adequate starting models, both in terms of the dielectric permittivity and the electrical conductivity, (ii) the estimation of the source wavelet, and (iii) the effects of data sampling density when imaging this rock type. Moreover, we discuss the resolution of the bedding recovered by the full-waveform approach. Our results show that with proper estimates of the above-mentioned prior parameters, crosshole GPR full-waveform tomography provides high-resolution images capturing a high degree of variability that standard methods cannot resolve in chalk. This in turn makes crosshole full-waveform inversion a promising tool to support time-lapse flow modelling.

  5. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition and (2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms, the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive, quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, and the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with the parameter uncertainty of these end products obtained from the different methods. This study was conducted at three study sites spanning diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps applied to the input data. The deconvolution-and-decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially with the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others, with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 m and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better, in terms of RMSE, in sparse vegetation areas. In addition, high levels of uncertainty occur mostly in areas with steep slopes and tall vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high-fidelity processing of waveform LiDAR data to characterize vegetation structure.
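
    The Gold algorithm referred to above is a multiplicative, positivity-preserving deconvolution iteration, commonly written as x <- x * (A^T y) / (A^T A x), where A is the convolution matrix of the system response. The following is a minimal NumPy sketch of that iteration on an invented two-echo waveform; it illustrates only the basic idea and is not the optimized, parameter-tuned implementation evaluated in the paper.

    ```python
    import numpy as np

    def convolution_matrix(response, n):
        """Columns hold the (normalized) system response shifted to each sample."""
        r = response / response.sum()
        A = np.zeros((n, n))
        for k in range(n):
            seg = r[: n - k]
            A[k:k + seg.size, k] = seg
        return A

    def gold_deconvolution(y, A, n_iter=500):
        """Gold iteration: x <- x * (A^T y) / (A^T A x); x stays non-negative."""
        At_y = np.maximum(A.T @ y, 0.0)
        AtA = A.T @ A
        x = np.full(y.size, max(y.mean(), 1e-6))
        for _ in range(n_iter):
            x = x * At_y / np.maximum(AtA @ x, 1e-12)
        return x

    # Hypothetical return waveform: two echoes blurred by a Gaussian system pulse.
    n = 200
    pulse = np.exp(-0.5 * ((np.arange(31) - 15) / 4.0) ** 2)
    A = convolution_matrix(pulse, n)
    truth = np.zeros(n)
    truth[[80, 95]] = [1.0, 0.6]
    waveform = A @ truth + 0.01 * np.random.default_rng(1).standard_normal(n)
    echoes = gold_deconvolution(waveform, A)
    print("recovered amplitudes at the true echo bins:", np.round(echoes[[80, 95]], 2))
    ```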

  6. On the Satisfaction of Modulus and Ambiguity Function Constraints in Radar Waveform Optimization for Detection

    DTIC Science & Technology

    2010-06-01

    sense that the two waveforms are as close as possible in a Euclidean sense. Li et al. [33] later devised an algorithm that provides the optimal waveform... respectively), and the SWORD algorithm in [33]. These algorithms were designed for the problem of detecting a known signal in the presence of wide-sense... sensing, astronomy, crystallography, signal processing, and image processing. (See references in the works cited below for examples.) In the general

  7. Investigation on magnetoacoustic signal generation with magnetic induction and its application to electrical conductivity reconstruction.

    PubMed

    Ma, Qingyu; He, Bin

    2007-08-21

    A theoretical study of magnetoacoustic signal generation with magnetic induction and its application to electrical conductivity reconstruction is conducted. An object with a concentric cylindrical geometry is located in a static magnetic field and a pulsed magnetic field. Driven by the Lorentz force generated by the static magnetic field, the magnetically induced eddy current produces acoustic vibrations, and the propagating sound wave is received by a transducer around the object to reconstruct the corresponding electrical conductivity distribution of the object. A theory of magnetoacoustic waveform generation for a circularly symmetric model is provided as the forward problem. Explicit formulae and a quantitative algorithm for electrical conductivity reconstruction are then presented as the inverse problem. Computer simulations were conducted to test the proposed theory and assess the performance of the inverse algorithms for a multi-layer cylindrical model. The simulation results confirm the validity of the proposed theory and suggest the feasibility of reconstructing the electrical conductivity distribution based on the proposed theory of magnetoacoustic signal generation with magnetic induction.

  8. Comparison of seismic waveform inversion results for the rupture history of a finite fault: application to the 1986 North Palm Springs, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.

    1989-01-01

    The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to solving for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation of t* with frequency are the main limit on the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation is the limiting factor in the resolution of source parameters.

  9. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, FWI can improve the resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D-to-2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is the one that, when deconvolved from the data, yields the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as the initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.

  10. Velocity and Attenuation Structure of the Earth's Inner Core Boundary From Semi-Automatic Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Jin, J.; Song, X.; Sun, D.; Helmberger, D. V.

    2013-12-01

    The structure of the Earth's inner core boundary (ICB) is complex. Hemispherical differences and local variations of velocity and attenuation structures, as well as ICB topography, have been reported in previous studies. We are using an automatic waveform modeling method to improve the resolution of ICB structures. The full waveforms of triplicated PKP phases at distance ranges from 120 to 165 degrees are used to model the lowermost 200 km of the outer core and the uppermost 600 km of the inner core. Given a 1D velocity and attenuation model, synthetic seismograms are generated by Generalized Ray Theory. We are also experimenting with 2D synthetic methods (WKM, AXISEM, and 2D FD) for 2D models (in the mantle and the inner core). The source time function is determined from observed seismic data. We use the neighborhood algorithm to search for a group of models that minimize the misfit between predictions and observations. Tests on synthetic data show the efficiency of this method in resolving detailed velocity and attenuation structures of the ICB simultaneously. We are analyzing seismic record sections from dense arrays along different paths and will report our modeling and inversion results at the meeting.

  11. Source mechanism of small long-period events at Mount St. Helens in July 2005 using template matching, phase-weighted stacking, and full-waveform inversion

    USGS Publications Warehouse

    Matoza, Robin S.; Chouet, Bernard A.; Dawson, Phillip B.; Shearer, Peter M.; Haney, Matthew M.; Waite, Gregory P.; Moran, Seth C.; Mikesell, T. Dylan

    2015-01-01

    Long-period (LP, 0.5-5 Hz) seismicity, observed at volcanoes worldwide, is a recognized signature of unrest and eruption. Cyclic LP “drumbeating” was the characteristic seismicity accompanying the sustained dome-building phase of the 2004–2008 eruption of Mount St. Helens (MSH), WA. However, together with the LP drumbeating was a near-continuous, randomly occurring series of tiny LP seismic events (LP “subevents”), which may hold important additional information on the mechanism of seismogenesis at restless volcanoes. We employ template matching, phase-weighted stacking, and full-waveform inversion to image the source mechanism of one multiplet of these LP subevents at MSH in July 2005. The signal-to-noise ratios of the individual events are too low to produce reliable waveform-inversion results, but the events are repetitive and can be stacked. We apply network-based template matching to 8 days of continuous velocity waveform data from 29 June to 7 July 2005 using a master event to detect 822 network triggers. We stack waveforms for 359 high-quality triggers at each station and component, using a combination of linear and phase-weighted stacking to produce clean stacks for use in waveform inversion. The derived source mechanism points to the volumetric oscillation (~10 m3) of a subhorizontal crack located at shallow depth (~30 m) in an area to the south of Crater Glacier in the southern portion of the breached MSH crater. A possible excitation mechanism is the sudden condensation of metastable steam from a shallow pressurized hydrothermal system as it encounters cool meteoric water in the outer parts of the edifice, perhaps supplied from snow melt.
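
    The detect-and-stack idea used here can be illustrated on a single channel: slide a normalized cross-correlation of a master-event template over continuous data, keep peaks above a threshold, and stack the detected windows to raise the signal-to-noise ratio. The sketch below is a toy, single-station version with invented amplitudes and thresholds; the study itself used network-based matching and a combination of linear and phase-weighted stacking.

    ```python
    import numpy as np

    def normalized_xcorr(template, data):
        """Sliding normalized cross-correlation of a template against continuous data."""
        n = template.size
        t = (template - template.mean()) / template.std()
        cc = np.empty(data.size - n + 1)
        for i in range(cc.size):
            w = data[i:i + n]
            cc[i] = np.dot(t, w - w.mean()) / (n * w.std() + 1e-12)
        return cc

    # Hypothetical continuous record containing low-amplitude copies of a template.
    rng = np.random.default_rng(0)
    template = np.sin(2 * np.pi * 2.0 * np.linspace(0, 1, 100)) * np.hanning(100)
    data = 0.2 * rng.standard_normal(20000)
    onsets = np.arange(400, 19600, 1300)
    for k in onsets:
        data[k:k + 100] += 0.5 * template

    cc = normalized_xcorr(template, data)
    cand = np.where(cc > 0.5)[0]                       # assumed detection threshold
    dets = [i for i in cand if cc[i] == cc[max(0, i - 50):i + 51].max()]  # decluster
    stack = np.mean([data[i:i + 100] for i in dets], axis=0)
    print(f"{len(dets)} detections (of {onsets.size} planted); "
          f"stack peak amplitude = {np.abs(stack).max():.2f}")
    ```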

  12. Measuring the misfit between seismograms using an optimal transport distance: application to full waveform inversion

    NASA Astrophysics Data System (ADS)

    Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.

    2016-04-01

    Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a misfit measure computed with an optimal transport distance. This measure makes it possible to account for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to reconstruct accurately the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimation explains the data accurately. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
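
    In one dimension, and for signals that have been made non-negative and mass-normalized, the Wasserstein-1 optimal transport distance reduces to the L1 difference of cumulative distributions, which is why such misfits behave more convexly with respect to time shifts than the L2 distance. The toy below illustrates that behaviour; it does not reproduce the authors' formulation, which handles signed, non-normalized seismograms through a dual (Kantorovich-Rubinstein-type) problem solved with proximal splitting.

    ```python
    import numpy as np

    def w1_misfit(f, g, dt):
        """1-D Wasserstein-1 distance between non-negative, unit-mass signals:
        the L1 norm of the difference of their cumulative distributions."""
        return np.sum(np.abs(np.cumsum(f - g) * dt)) * dt

    def to_density(trace, dt):
        """Crude positivity + mass normalization (one of several transforms in use)."""
        p = np.maximum(trace, 0.0) + 1e-8
        return p / (p.sum() * dt)

    dt = 0.004
    t = np.arange(0, 4.0, dt)
    ricker = lambda t0: (1 - 2 * (np.pi * 5 * (t - t0)) ** 2) * np.exp(-(np.pi * 5 * (t - t0)) ** 2)

    obs = ricker(2.0)
    for shift in (0.05, 0.15, 0.30):     # growing shift: L2 saturates, W1 keeps growing
        syn = ricker(2.0 + shift)
        l2 = np.sum((syn - obs) ** 2) * dt
        w1 = w1_misfit(to_density(syn, dt), to_density(obs, dt), dt)
        print(f"shift {shift:4.2f} s   L2 = {l2:7.4f}   W1 = {w1:7.4f}")
    ```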

  13. Salvus: A scalable software suite for full-waveform modelling & inversion

    NASA Astrophysics Data System (ADS)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; Fichtner, A.

    2017-12-01

    Full-waveform inversion (FWI), whether at the lab, exploration, or planetary scale, requires the cooperation of five principal components. (1) The geometry of the domain needs to be properly discretized and an initial guess of the model parameters must be projected onto it; (2) Large volumes of recorded waveform data must be collected, organized, and processed; (3) Synthetic waveform data must be efficiently and accurately computed through complex domains; (4) Suitable misfit functions and optimization techniques must be used to relate discrepancies in data space to perturbations in the model; and (5) Some form of workflow management must be employed to schedule and run (1)-(4) in the correct order. Each one of these components can represent a formidable technical challenge which redirects energy from the true task at hand: using FWI to extract new information about some underlying continuum. In this presentation we give an overview of the current status of the Salvus software suite, which was introduced to address the challenges listed above. Specifically, we touch on (1) salvus_mesher, which eases the discretization of complex Earth models into hexahedral meshes; (2) salvus_seismo, which integrates with LASIF and ObsPy to streamline the processing and preparation of seismic data; (3) salvus_wave, a high-performance and scalable spectral-element solver capable of simulating waveforms through general unstructured 2- and 3-D domains, and (4) salvus_opt, an optimization toolbox specifically designed for full-waveform inverse problems. Tying everything together, we also discuss (5) salvus_flow: a workflow package designed to orchestrate and manage the rest of the suite. It is our hope that these developments represent a step towards the automation of large-scale seismic waveform inversion, while also lowering the barrier of entry for new applications. We include several examples of Salvus' use in (extra-) planetary seismology, non-destructive testing, and medical imaging.

  14. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

    Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion that solves simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations, and the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity. This goal would not be obtainable if one were to adopt a pure time domain solution for the inverse problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom is required to represent the observed MT waveforms across a large frequency bandwidth: for the forward simulation, the time step must be fine enough to represent the highest frequency, while the total number of time steps must also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome to use in solving for a model update. We have implemented a code that addresses this situation by using cascade decimation decomposition, a quasi-equivalent time domain decomposition, to reduce the size of the sensitivity matrix substantially. We also use a fictitious wave domain method to speed up the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds up computation of the sensitivity matrices dramatically, while keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.

  15. Waveform inversion of oscillatory signatures in long-period events beneath volcanoes

    USGS Publications Warehouse

    Kumagai, H.; Chouet, B.A.; Nakano, M.

    2002-01-01

    The source mechanism of long-period (LP) events is examined using synthetic waveforms generated by the acoustic resonance of a fluid-filled crack. We perform a series of numerical tests in which the oscillatory signatures of synthetic LP waveforms are used to determine the source time functions of the six moment tensor components from waveform inversions assuming a point source. The results indicate that the moment tensor representation is valid for the odd modes of crack resonance with wavelengths 2L/n, 2W/n, n = 3, 5, 7, ..., where L and W are the crack length and width, respectively. For the even modes with wavelengths 2L/n, 2W/n, n = 2, 4, 6,..., a generalized source representation using higher-order tensors is required, although the efficiency of seismic waves radiated by the even modes is expected to be small. We apply the moment tensor inversion to the oscillatory signatures of an LP event observed at Kusatsu-Shirane Volcano, central Japan. Our results point to the resonance of a subhorizontal crack located a few hundred meters beneath the summit crater lakes. The present approach may be useful to quantify the source location, geometry, and force system of LP events, and opens the way for moment tensor inversions of tremor.
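
    Assuming a point source, each seismogram is a linear combination of the six moment-tensor components convolved with the corresponding Green's functions, so one common formulation recovers the moment-tensor source-time functions by solving a small complex least-squares problem at every frequency. The sketch below demonstrates this generic scheme on synthetic random Green's functions; it stands in for, and is much simpler than, the crack-resonance modelling used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nt, n_ch, n_mt = 512, 9, 6        # time samples; 3 stations x 3 comps; 6 MT terms
    dt = 0.01

    # Hypothetical Green's functions G[channel, mt_term, time]: random, decaying
    # wiggles standing in for the crack-resonance Green's functions of the study.
    G = rng.standard_normal((n_ch, n_mt, nt)) * np.exp(-np.linspace(0.0, 6.0, nt))

    # True moment-tensor source-time functions: a damped oscillation on the
    # diagonal terms, loosely mimicking a resonating crack.
    t = np.arange(nt) * dt
    stf = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t)
    m_true = np.outer([1.0, 1.0, 3.0, 0.0, 0.0, 0.0], stf)

    # Forward model: each channel is the sum over MT terms of G_k convolved with
    # m_k, evaluated here in the frequency domain.
    Gf = np.fft.rfft(G, axis=2)
    Mf = np.fft.rfft(m_true, axis=1)
    data = np.fft.irfft(np.einsum("ckf,kf->cf", Gf, Mf), n=nt, axis=1)
    data += 0.01 * rng.standard_normal(data.shape)

    # Inversion: a (channels x 6) complex least-squares problem at each frequency.
    Df = np.fft.rfft(data, axis=1)
    Mf_est = np.zeros_like(Mf)
    for k in range(Df.shape[1]):
        Mf_est[:, k], *_ = np.linalg.lstsq(Gf[:, :, k], Df[:, k], rcond=None)
    m_est = np.fft.irfft(Mf_est, n=nt, axis=1)
    print("peak m_zz  true:", round(3 * stf.max(), 2),
          " recovered:", round(float(m_est[2].max()), 2))
    ```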

  16. Focal mechanisms and moment magnitudes of micro-earthquakes in central Brazil by waveform inversion with quality assessment and inference of the local stress field

    NASA Astrophysics Data System (ADS)

    Carvalho, Juraci; Barros, Lucas Vieira; Zahradník, Jiří

    2016-11-01

    This paper documents an investigation of the use of full waveform inversion to retrieve the focal mechanisms of 11 micro-earthquakes (Mw 0.8 to 1.4). The events represent aftershocks of an mb 5.0 earthquake that occurred on October 8, 2010 close to the city of Mara Rosa in the state of Goiás, Brazil. The main contribution of the work lies in demonstrating the feasibility of waveform inversion of such weak events. The inversion was made possible thanks to recordings available at 8 temporary seismic stations at epicentral distances of less than 8 km, at which waveforms can be successfully modeled at relatively high frequencies (1.5-2.0 Hz). On average, the fault-plane solutions obtained are in agreement with a composite focal mechanism previously calculated from first-motion polarities. They also agree with the fault geometry inferred from precise relocation of the Mara Rosa aftershock sequence. The focal mechanisms provide an estimate of the local stress field. This paper serves as a pilot study for similar investigations in intraplate regions where stress-field investigations are difficult due to rare earthquake occurrences and where weak events must be studied with detailed quality assessment.

  17. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials.

    PubMed

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typically complex response of events dispersed over various latencies and a poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Even with an incorrectly specified template, Waveform Similarity Analysis outperformed Trained Eye Analysis for predicting signal amplitude, but it produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when the signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies.
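
    At its core, the template comparison amounts to sliding cross-correlation: the latency is the lag of the correlation peak, and the magnitude is the least-squares scale factor between the best-aligned window and the template. Below is a minimal single-sweep sketch with invented signals; the published algorithm adds normalization, multi-event handling and other refinements not shown here.

    ```python
    import numpy as np

    def wsa_like_estimate(sweep, template):
        """Estimate event latency and magnitude by sliding a template over a sweep:
        latency = lag of peak cross-correlation, magnitude = best-fit scale factor."""
        n = template.size
        cc = np.correlate(sweep, template, mode="valid")
        lag = int(np.argmax(cc))
        window = sweep[lag:lag + n]
        scale = np.dot(window, template) / np.dot(template, template)
        return lag, scale

    # Hypothetical compound action potential template and a noisy evoked sweep
    # containing a delayed, attenuated response (amplitude 0.4, onset at sample 350).
    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 120)
    template = np.sin(2 * np.pi * 3 * t) * np.hanning(120)
    sweep = 0.1 * rng.standard_normal(1000)
    sweep[350:470] += 0.4 * template

    lag, scale = wsa_like_estimate(sweep, template)
    print(f"estimated onset sample = {lag}, estimated magnitude = {scale:.2f}")
    ```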


  19. Delineation of Rupture Propagation of Large Earthquakes Using Source-Scanning Algorithm: A Control Study

    NASA Astrophysics Data System (ADS)

    Kao, H.; Shan, S.

    2004-12-01

    Determination of the rupture propagation of large earthquakes is important and of wide interest to the seismological research community. The conventional inversion method determines the distribution of slip on a grid of subfaults whose orientations are predefined. As a result, different choices of fault geometry and dimensions often result in different solutions. In this study, we try to reconstruct the rupture history of an earthquake using the newly developed Source-Scanning Algorithm (SSA) without imposing any a priori constraints on the fault's orientation and dimension. The SSA identifies the distribution of seismic sources in two steps. First, it calculates the theoretical arrival times from all grid points inside the model space to all seismic stations by assuming an origin time. Then, the absolute amplitudes of the observed waveforms at the predicted arrival times are added to give the "brightness" of each time-space pair, and the brightest spots mark the locations of sources. The propagation of the rupture is depicted by the migration of the brightest spots throughout a prescribed time window. A series of experiments is conducted to test the resolution of the SSA inversion. Contrary to the conventional wisdom that seismometers should be placed as close as possible to the fault trace to give the best resolution in delineating rupture details, we found that the best results are obtained when the seismograms are recorded at a distance of about half the total rupture length away from the fault trace. This is especially true when the rupture duration is longer than ~10 s. A possible explanation is that the geometric spreading effects for waveforms from different segments of the rupture are about the same if the stations are sufficiently far from the fault trace, thus giving uniform resolution over the entire rupture history.
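
    The brightness function at the heart of the Source-Scanning Algorithm is simply the sum, over stations, of absolute waveform amplitudes read at the predicted arrival times for a trial location and origin time. Below is a minimal 2-D, constant-velocity sketch with one invented impulsive source; the real application scans space-time for a migrating rupture rather than a single point.

    ```python
    import numpy as np

    def ssa_brightness(waveforms, station_xy, grid_xy, origin_times, v, dt):
        """Brightness br(x, t0) = sum over stations of |u_s(t0 + traveltime(x, s))|,
        using straight-ray traveltimes in a constant-velocity medium."""
        n_sta, nt = waveforms.shape
        br = np.zeros((grid_xy.shape[0], origin_times.size))
        for ig, gx in enumerate(grid_xy):
            tt = np.linalg.norm(station_xy - gx, axis=1) / v
            for it, t0 in enumerate(origin_times):
                idx = np.round((t0 + tt) / dt).astype(int)
                ok = (idx >= 0) & (idx < nt)
                br[ig, it] = np.abs(waveforms[np.arange(n_sta)[ok], idx[ok]]).sum()
        return br

    # Hypothetical setup: 8 stations around a 2-D grid (km), one impulsive source.
    rng = np.random.default_rng(0)
    dt, v = 0.01, 3.0                                   # s, km/s
    station_xy = rng.uniform(0, 20, size=(8, 2))
    grid_xy = np.array([[x, y] for x in range(0, 21, 2) for y in range(0, 21, 2)], float)
    src, t_src = np.array([10.0, 8.0]), 1.0

    waveforms = 0.05 * rng.standard_normal((8, 2000))
    arrivals = np.linalg.norm(station_xy - src, axis=1) / v + t_src
    for s, ta in enumerate(arrivals):
        waveforms[s, int(round(ta / dt))] += 1.0        # impulsive arrival

    origin_times = np.arange(0, 3, 0.05)
    br = ssa_brightness(waveforms, station_xy, grid_xy, origin_times, v, dt)
    ig, it = np.unravel_index(np.argmax(br), br.shape)
    print("brightest grid point:", grid_xy[ig], "origin time:", round(float(origin_times[it]), 2))
    ```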

  20. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The monitoring, prediction and prevention of extraordinary natural and technogenic events are among the priority problems of today. Such events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions carried out to dispose of ammunition stockpiles, and the numerous quarry blasts in open-pit coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of parameter estimation from the original records under high noise is an important problem. As is known, the main measurement errors arise due to the influence of external noise, the difference between the real and model structures of the medium, imprecision in defining the time at the event's epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences in problems of geophysical monitoring, with improved accuracy. Existing alternative approaches to solving these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the location of a borehole seismic source in production drilling.

  1. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
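
    Waveform relaxation iterates on entire trajectories: each subsystem is integrated over the whole time window while the other variables are frozen at their previous-iterate waveforms. The sketch below shows the plain Jacobi variant on a small linear ODE system; the paper's contribution, a frequency-dependent (convolution) SOR acceleration of this kind of iteration, is not reproduced here.

    ```python
    import numpy as np

    # Linear test system x' = A x, x(0) = x0, integrated with forward Euler.
    A = np.array([[-2.0, 1.0],
                  [ 1.0, -3.0]])
    x0 = np.array([1.0, 0.0])
    dt, nt = 0.01, 500

    # Reference solution (same integrator, fully coupled) for comparison.
    ref = np.zeros((nt, 2)); ref[0] = x0
    for k in range(nt - 1):
        ref[k + 1] = ref[k] + dt * A @ ref[k]

    # Jacobi waveform relaxation: integrate each component over the whole window,
    # treating the other component's waveform from the previous sweep as known.
    wave = np.zeros((nt, 2)); wave[:] = x0      # initial guess: constant waveforms
    for sweep in range(30):
        new = np.zeros_like(wave); new[0] = x0
        for i in range(2):
            other = 1 - i
            for k in range(nt - 1):
                new[k + 1, i] = new[k, i] + dt * (A[i, i] * new[k, i]
                                                  + A[i, other] * wave[k, other])
        err = np.abs(new - ref).max()
        wave = new
    print(f"max error after {sweep + 1} sweeps: {err:.2e}")
    ```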

  2. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy for estimating quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses time-domain Full Waveform Inversion (FWI) of 2D GPR data to obtain highly resolved images of the permittivity and conductivity of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on computing the full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.

  3. A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry

    NASA Technical Reports Server (NTRS)

    Davis, Curt H.

    1992-01-01

    An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
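
    Retracking fits a parametric return model to each waveform by iterative least squares, seeded with initial estimates taken from the (filtered) waveform itself. The sketch below follows that two-stage pattern with a generic error-function leading edge and exponential trailing decay as a stand-in for the combined surface/volume scattering model; the gate count, parameter values and noise level are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def return_model(gate, amp, t0, width, decay, noise_floor):
        """Generic return: noise floor + error-function leading edge * exponential
        trailing decay (a stand-in for the combined surface/volume model)."""
        edge = 0.5 * (1.0 + erf((gate - t0) / (np.sqrt(2.0) * width)))
        trail = np.exp(-np.maximum(gate - t0, 0.0) * decay)
        return noise_floor + amp * edge * trail

    # Hypothetical 128-gate waveform with known parameters plus noise.
    rng = np.random.default_rng(0)
    gates = np.arange(128, dtype=float)
    true_params = (1.0, 40.0, 2.5, 0.02, 0.05)
    wf = return_model(gates, *true_params) + 0.03 * rng.standard_normal(gates.size)

    # Step 1: initial estimates from a lightly filtered waveform.
    smooth = np.convolve(wf, np.ones(5) / 5.0, mode="same")
    p0 = (smooth.max() - smooth.min(), float(np.argmax(np.diff(smooth))),
          3.0, 0.01, smooth[:10].mean())

    # Step 2: iterative least squares to refine the parameter estimates.
    popt, _ = curve_fit(return_model, gates, wf, p0=p0)
    print("retracked leading-edge gate:", round(popt[1], 2), " (true 40.0)")
    ```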

  4. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiations from moderate and large earthquakes often exhibit strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the identified fault planes in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.

  5. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
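
    The underlying physics can be sketched with the simplest possible source model: for a compact acoustic monopole in a homogeneous, lossless atmosphere, the excess pressure at range r is proportional to the time derivative of the mass flow rate, roughly p(r, t) = qdot(t - r/c) / (4*pi*r), or 2*pi*r for a source radiating into a half-space, so the flow rate and total erupted mass follow by time integration. The pressure pulse, range and amplitudes below are invented, and this free-field toy ignores the topographic Green's functions that the actual inversion relies on.

    ```python
    import numpy as np

    # Hypothetical infrasound pressure record (Pa) of one explosion at r = 5 km:
    # a compression-then-rarefaction pulse, i.e. the derivative of a one-sided flux.
    dt, r = 0.02, 5000.0                        # s, m
    t = np.arange(0.0, 60.0, dt)
    pulse = (10.0 - t) * np.exp(-((t - 10.0) / 3.0) ** 2)
    p = 20.0 * pulse / np.abs(pulse).max()      # peak pressure ~20 Pa (invented)

    # Half-space monopole relation p(r, t) = qdot(t - r/c) / (2 * pi * r), inverted
    # for the mass flow rate q(t) and integrated again for the total erupted mass.
    qdot = 2.0 * np.pi * r * p                  # kg/s^2
    q = np.cumsum(qdot) * dt                    # mass flow rate, kg/s
    total_mass = float(np.sum(np.maximum(q, 0.0)) * dt)
    print(f"peak mass flow rate ~ {q.max():.2e} kg/s,  total mass ~ {total_mass:.2e} kg")
    ```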

  6. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station spacing leads to low-resolution images and sometimes prevents imaging of the regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSO). Traditional sparsity-promotion inversion struggles when there are large arrival-time differences between adjacent sites, which is the case we are most concerned with, and we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling in a phase for each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitudes better. The results also show that the arrival times and waveforms of the VSOs reproduce the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed for advanced seismic techniques such as RTM to illuminate shallow structures.

  7. Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei

    Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important for calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting the ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second-derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation and Ice Sensor. Compared to existing ground peak identification algorithms, FICA was tested on plots of different land cover types and showed improved accuracy in ground detection for vegetation plots and similar accuracy for developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fits the shrub canopy reflection and detects the ground peak by examining the residual signal, which is generated by subtracting a Gaussian fitting function from the raw waveform. After the subtraction, the overlapping ground peak is identified as the local maximum of the residual signal. In addition, an applicability model was built for determining the waveforms to which the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase the accuracy of biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporate both the waveform intensity, represented by the area covered by a Gaussian function, and its associated height, which is the centroid of the Gaussian function. By considering the signal reflection of different vegetation layers, the developed metrics achieved better accuracy in aboveground biomass estimation when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, this dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods and metrics.

  8. A High-Resolution View of Global Seismicity

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.

    2014-12-01

    We present high-precision earthquake relocation results from our global-scale re-analysis of the combined seismic archives of parametric data for the years 1964 to present from the International Seismological Centre (ISC), the USGS's Earthquake Data Report (EDR), and selected waveform data from IRIS. We employed iterative, multistep relocation procedures that initially correct for large location errors present in standard global earthquake catalogs, followed by a simultaneous inversion of delay times formed from regional and teleseismic arrival times of first and later arriving phases. An efficient multi-scale double-difference (DD) algorithm is used to solve for relative event locations to the precision of a few km or less, while incorporating information on absolute hypocenter locations from catalogs such as EHB and GEM. We run the computations on both a 40-core cluster geared towards HTC problems (data processing) and a 500-core HPC cluster for data inversion. Currently, we are incorporating waveform correlation delay time measurements available for events in selected regions, but are continuously building up a comprehensive, global correlation database for densely distributed events recorded at stations with a long history of high-quality waveforms. The current global DD catalog includes nearly one million earthquakes, equivalent to approximately 70% of the number of events in the ISC/EDR catalogs initially selected for relocation. The relocations sharpen the view of seismicity in most active regions around the world, in particular along subduction zones where event density is high, but also along mid-ocean ridges where existing hypocenters are especially poorly located. The new data offers the opportunity to investigate earthquake processes and fault structures along entire plate boundaries at the ~km scale, and provides a common framework that facilitates analysis and comparisons of findings across different plate boundary systems.

  9. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    NASA Astrophysics Data System (ADS)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which shows interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, resulting in similar spatiotemporal patterns.
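
    Waveform mixture analysis treats each altimeter return as a linear, non-negative combination of a few endmember waveforms and classifies the surface from the estimated abundances. The sketch below is a two-endmember toy with invented lead-like (specular) and ice-like (diffuse) returns and an assumed 0.5 classification threshold; it is not the endmember set or threshold of the published algorithm.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Invented endmember waveforms (128 range bins): a peaky, specular "lead" return
    # and a broader, diffuse "sea-ice" return.
    bins = np.arange(128, dtype=float)
    lead_em = np.exp(-0.5 * ((bins - 60) / 1.5) ** 2)
    ice_em = (np.exp(-0.5 * ((bins - 60) / 10.0) ** 2)
              * np.exp(-np.maximum(bins - 60, 0) / 40.0))
    E = np.column_stack([lead_em, ice_em])

    def unmix(waveform):
        """Non-negative abundances of the endmembers in one observed waveform."""
        abundances, _ = nnls(E, waveform)
        return abundances / (abundances.sum() + 1e-12)

    # A hypothetical mixed return: mostly ice with a weak specular (lead) component.
    rng = np.random.default_rng(0)
    obs = 0.2 * lead_em + 0.8 * ice_em + 0.02 * rng.standard_normal(128)
    frac = unmix(obs)
    is_lead = frac[0] > 0.5                     # assumed classification threshold
    print(f"lead fraction = {frac[0]:.2f}, ice fraction = {frac[1]:.2f}, lead? {is_lead}")
    ```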

  10. Reflection full-waveform inversion using a modified phase misfit function

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe

    2017-09-01

    Reflection full-waveform inversion (RFWI) updates both the low- and high-wavenumber components of the model and yields more accurate initial models than conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude information. Separating phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses phase-envelope data to obtain pseudo-phase information. We then establish a pseudophase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. Application to a portion of the Sigsbee2A model, and comparison with inversion results of the improved RFWI and conventional FWI methods, verify that the pseudophase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and to high frequencies.

  11. Waveform inversion for 3-D earth structure using the Direct Solution Method implemented on vector-parallel supercomputer

    NASA Astrophysics Data System (ADS)

    Hara, Tatsuhiko

    2004-08-01

    We implement the Direct Solution Method (DSM) on a vector-parallel supercomputer and show that it is possible to significantly improve its computational efficiency through parallel computing. We apply the parallel DSM calculation to waveform inversion of long period (250-500 s) surface wave data for three-dimensional (3-D) S-wave velocity structure in the upper and uppermost lower mantle. We use a spherical harmonic expansion to represent lateral variation with the maximum angular degree 16. We find significant low velocities under south Pacific hot spots in the transition zone. This is consistent with other seismological studies conducted in the Superplume project, which suggests deep roots of these hot spots. We also perform simultaneous waveform inversion for 3-D S-wave velocity and Q structure. Since resolution for Q is not good, we develop a new technique in which power spectra are used as data for inversion. We find good correlation between long wavelength patterns of Vs and Q in the transition zone such as high Vs and high Q under the western Pacific.

  12. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
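
    The FOCUSS component that SSLOFO builds on is a recursively re-weighted minimum-norm estimate, x_k = W_k (L W_k)^+ d with W_k = diag(|x_{k-1}|), which progressively concentrates an initially smooth solution onto a few active sources. The toy below shows only this re-weighting loop on a random lead field; the standardization, source-space shrinking and temporal processing of the full algorithm are omitted.

    ```python
    import numpy as np

    def focuss(L, d, x0, n_iter=10, eps=1e-9):
        """Re-weighted minimum-norm (FOCUSS) iterations from a smooth start."""
        x = x0.copy()
        for _ in range(n_iter):
            W = np.diag(np.abs(x) + eps)
            x = W @ np.linalg.pinv(L @ W) @ d
        return x

    # Toy problem: 8 sensors, 40 candidate sources, 2 truly active.
    rng = np.random.default_rng(1)
    L = rng.standard_normal((8, 40))
    x_true = np.zeros(40)
    x_true[[5, 22]] = [1.0, -0.7]
    d = L @ x_true + 0.01 * rng.standard_normal(8)

    x_mn = np.linalg.pinv(L) @ d        # smooth minimum-norm start (sLORETA-like role)
    x_focuss = focuss(L, d, x_mn)
    print("largest |x| indices:", np.argsort(np.abs(x_focuss))[-2:])
    ```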

  13. Golay Complementary Waveforms in Reed–Müller Sequences for Radar Detection of Nonzero Doppler Targets

    PubMed Central

    Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill

    2018-01-01

    Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so called, Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexity of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets are also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
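
    The property exploited here is that the autocorrelations of the two sequences in a Golay complementary pair sum to a delta function, so the range sidelobes cancel exactly for static targets. The sketch below builds a pair by the standard recursive concatenation and checks the cancellation, then applies an arbitrary intra-pulse phase ramp to one return as a crude stand-in for a Doppler shift, which is what re-introduces the sidelobes that the ordering algorithms are designed to suppress.

    ```python
    import numpy as np

    def golay_pair(m):
        """Binary Golay complementary pair of length 2**m via recursive concatenation."""
        a, b = np.array([1.0]), np.array([1.0])
        for _ in range(m):
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(6)                                   # length-64 pair
    zero_lag = len(a) - 1
    summed = np.correlate(a, a, "full") + np.correlate(b, b, "full")
    sidelobe = np.max(np.abs(np.delete(summed, zero_lag)))
    print(f"peak = {summed[zero_lag]:.0f} (2N = {2 * len(a)}), max sidelobe = {sidelobe:.1e}")

    # Crude Doppler stand-in: a phase ramp on the second return breaks cancellation.
    doppler = np.exp(1j * 0.05 * np.arange(len(b)))
    mismatched = (np.correlate(a.astype(complex), a.astype(complex), "full")
                  + np.correlate(b * doppler, b.astype(complex), "full"))
    print("max sidelobe with Doppler mismatch:",
          round(float(np.max(np.abs(np.delete(mismatched, zero_lag)))), 1))
    ```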

  14. Localized time-lapse elastic waveform inversion using wavefield injection and extrapolation: 2-D parametric studies

    NASA Astrophysics Data System (ADS)

    Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry

    2017-06-01

    We present a methodology for inverting seismic data in a localized area by combining a source-side wavefield injection method with a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This is much more severe for time-lapse surveys, which require real-time seismic imaging on a daily or weekly basis. Besides, changes in structure during time-lapse surveys are likely to occur in a small area, such as an oil and gas reservoir or a CO2 injection well, rather than across the whole region of the seismic experiment. We thus propose an approach that allows effective and quantitative imaging of localized structural changes deep below both source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we look for the equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using a correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate the errors in the obtained models, in comparison to those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both the initial models and the observed data.

  15. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; von Huene, Roland E.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used for a starting velocity model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by a dissociation of hydrates at the base of the hydrate stability zone due to uplift and subsequently a decrease in pressure.

  16. A long source area of the 1906 Colombia-Ecuador earthquake estimated from observed tsunami waveforms

    NASA Astrophysics Data System (ADS)

    Yamanaka, Yusuke; Tanioka, Yuichiro; Shiina, Takahiro

    2017-12-01

    The 1906 Colombia-Ecuador earthquake induced both strong seismic motions and a tsunami and was the most destructive earthquake in the history of the Colombia-Ecuador subduction zone. The tsunami propagated across the Pacific Ocean, and its waveforms were observed at tide gauge stations in countries including Panama, Japan, and the USA. This study conducted a slip inverse analysis for the 1906 earthquake using these waveforms. A digital dataset of the observed tsunami waveforms at the Naos Island (Panama) and Honolulu (USA) tide gauge stations, where the tsunami was clearly observed, was first produced by consulting documents. The two waveforms were then used as the target waveforms in an inverse analysis. The results of this analysis indicated that the moment magnitude of the 1906 earthquake ranged from 8.3 to 8.6. Moreover, the dominant slip occurred in the northern part of the assumed source region near the coast of Colombia, where little significant seismicity has occurred, rather than in the southern part. The results also indicated that the source area with significant slip covered a long distance, including the southern, central, and northern parts of the region.

  17. Moment tensor inversions using strong motion waveforms of Taiwan TSMIP data, 1993–2009

    USGS Publications Warehouse

    Chang, Kaiwen; Chi, Wu-Cheng; Gung, Yuancheng; Dreger, Douglas; Lee, William H K.; Chiu, Hung-Chie

    2011-01-01

    Earthquake source parameters are important for earthquake studies and seismic hazard assessment. Moment tensors are among the most important earthquake source parameters, and they are now routinely derived using modern broadband seismic networks around the world. Similar waveform inversion techniques can also be applied to other available data, including strong-motion seismograms. Strong-motion waveforms are also broadband, and they have been recorded in many regions since the 1980s. Thus, strong-motion data can be used to augment moment tensor catalogs with a much larger dataset than that available from high-gain, broadband seismic networks. However, a systematic comparison between moment tensors derived from strong-motion waveforms and from high-gain broadband waveforms has not been available. In this study, we inverted the source mechanisms of Taiwan earthquakes between 1993 and 2009 using the regional moment tensor inversion method and digital data from several hundred stations of the Taiwan Strong Motion Instrumentation Program (TSMIP). By testing different velocity models and filter passbands, we were able to successfully derive moment tensor solutions for 107 earthquakes of Mw >= 4.8. The solutions for large events agree well with other available moment tensor catalogs derived from local and global broadband networks. However, for events of Mw 5.0 or smaller, we consistently overestimated the moment magnitudes by 0.5 to 1.0. We tested both accelerograms and velocity waveforms integrated from accelerograms in the inversions and found that the results are similar. In addition, we used part of the catalog to study important seismogenic structures in the area near Meishan, Taiwan, the site of a very damaging earthquake a century ago, and found that the structures were dominated by events with complex right-lateral strike-slip faulting during the recent decade. The procedures developed in this study may be applied to other strong-motion datasets to complement or fill gaps in catalogs from regional broadband and teleseismic networks.

  18. Nonlinear 1D and 2D waveform inversions of SS precursors and their applications in mantle seismic imaging

    NASA Astrophysics Data System (ADS)

    Dokht, R.; Gu, Y. J.; Sacchi, M. D.

    2016-12-01

    Seismic velocities and the topography of mantle discontinuities are crucial for understanding mantle structure, dynamics and mineralogy. While these two observables are closely linked, the vast majority of high-resolution seismic images are retrieved under the assumption of horizontally stratified mantle interfaces. This conventional correction-based process can lead to considerable errors due to the inherent trade-off between velocity and discontinuity depth. In this study, we introduce a nonlinear joint waveform inversion method that simultaneously recovers discontinuity depths and seismic velocities using the waveforms of SS precursors. Our target region is the upper mantle and transition zone beneath Northeast Asia. In this region, the inversion outcomes clearly delineate a westward-dipping high-velocity structure associated with the subducting Pacific plate. Above the flat part of the slab west of the Japan Sea, our results show a shear wave velocity reduction of 1.5% in the upper mantle and a 10-15 km depression of the 410 km discontinuity beneath the Changbaishan volcanic field. We also identify the maximum correlation between shear velocity and transition zone thickness at an approximate slab dip of 30 degrees, which is consistent with previously reported values in this region. To validate the results of the 1D waveform inversion of SS precursors, we discretize the mantle beneath the study region and conduct a 2D waveform tomographic survey using the same nonlinear approach. The problem is simplified by adopting the discontinuity depths from the 1D inversion and solving only for perturbations in shear velocities. The resulting models obtained from the 1D and 2D approaches are self-consistent. Low velocities beneath the Changbai intraplate volcano likely persist to a depth of 500 km. Collectively, our seismic observations suggest that the active volcanoes in eastern China may be fueled by a hot thermal anomaly originating from the mantle transition zone.

  19. Source mechanism of long-period events at Kusatsu-Shirane Volcano, Japan, inferred from waveform inversion of the effective excitation functions

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.A.

    2003-01-01

    We investigate the source mechanism of long-period (LP) events observed at Kusatsu-Shirane Volcano, Japan, based on waveform inversions of their effective excitation functions. The effective excitation function, which represents the apparent excitation observed at individual receivers, is estimated by applying an autoregressive filter to the LP waveform. Assuming a point source, we apply this method to seven LP events whose waveforms are characterized by simply decaying, nearly monochromatic oscillations with frequencies in the range 1-3 Hz. The results of the waveform inversions show dominant volumetric change components accompanied by single force components, common to all the events analyzed, suggesting the repeated activation of a sub-horizontal crack located 300 m beneath the summit crater lakes. Based on these results, we propose a model of the source process of LP seismicity in which a gradual buildup of steam pressure in a hydrothermal crack in response to magmatic heat causes repeated discharges of steam from the crack. The rapid discharge of fluid causes the collapse of the fluid-filled crack and excites acoustic oscillations of the crack, which produce the characteristic waveforms observed in the LP events. The presence of a single force synchronous with the collapse of the crack is interpreted as the release of gravitational energy that occurs as the slug of steam ejected from the crack ascends toward the surface and is replaced by cooler water flowing downward in a fluid-filled conduit linking the crack and the base of the crater lake. © 2003 Elsevier Science B.V. All rights reserved.

  20. Detection of sinkholes or anomalies using full seismic wave fields.

    DOT National Transportation Integrated Search

    2013-04-01

    This research presents an application of two-dimensional (2-D) time-domain waveform tomography for detection of embedded sinkholes and anomalies. The measured seismic surface wave fields were inverted using a full waveform inversion (FWI) technique, ...

  1. Resolution analysis of marine seismic full waveform data by Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Ray, A.; Sekar, A.; Hoversten, G. M.; Albertin, U.

    2015-12-01

    The Bayesian posterior density function (PDF) of earth models that fit full waveform seismic data conveys information on the uncertainty with which the elastic model parameters are resolved. In this work, we apply the trans-dimensional reversible jump Markov chain Monte Carlo (RJ-MCMC) method to the 1D inversion of noisy synthetic full-waveform seismic data in the frequency-wavenumber domain. While seismic full waveform inversion (FWI) is a powerful method for characterizing subsurface elastic parameters, the uncertainty in the inverted models has remained poorly known, if known at all, and is highly dependent on the initial model. The Bayesian method we use is trans-dimensional in that the number of model layers is not fixed, and flexible in that the layer boundaries are free to move. The resulting parameterization does not require regularization to stabilize the inversion. Depth resolution is traded off against the number of layers, providing an estimate of the uncertainty in the elastic parameters (compressional and shear velocities Vp and Vs, as well as density) with depth. We find that in the absence of additional constraints, Bayesian inversion can result in a wide range of posterior PDFs on Vp, Vs and density. These PDFs range from being tightly clustered around the true model to containing little resolution of any features other than those in the near surface, depending on the particular data and target geometry. We present results for a suite of different frequencies and offset ranges, examining the differences in the posterior model densities thus derived. Though these results are for a 1D earth, they are applicable to areas with simple, layered geology and provide valuable insight into the resolving capabilities of FWI, as well as highlighting the challenges of solving a highly non-linear problem. The RJ-MCMC method also presents a tantalizing possibility for extension to 2D and 3D Bayesian inversion of full waveform seismic data in the future, as it objectively tackles the problem of model selection (i.e., the number of layers or cells in the parameterization), which could ease the computational burden of evaluating forward models with many parameters.
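
    A minimal sketch of the Bayesian sampling idea behind this kind of study is given below, assuming a toy 1-D layered model and a placeholder forward operator; it uses a fixed-dimension Metropolis-Hastings random walk rather than the trans-dimensional RJ-MCMC moves described in the abstract, and all names and values are illustrative.

      import numpy as np

      # Minimal fixed-dimension Metropolis-Hastings sketch of Bayesian 1-D inversion.
      # The trans-dimensional (RJ-MCMC) birth/death moves of the study are omitted;
      # 'forward' is a hypothetical stand-in for a frequency-wavenumber waveform solver.

      rng = np.random.default_rng(0)

      def forward(vs):
          # placeholder forward model: smooth mapping from layer velocities to "data"
          return np.convolve(vs, np.ones(3) / 3.0, mode="same")

      n_layers = 10
      true_vs = 1.5 + 0.1 * np.arange(n_layers)
      sigma = 0.05
      d_obs = forward(true_vs) + rng.normal(0.0, sigma, n_layers)

      def log_likelihood(vs):
          r = d_obs - forward(vs)
          return -0.5 * np.sum((r / sigma) ** 2)

      current = np.full(n_layers, 2.0)          # starting model
      logL = log_likelihood(current)
      samples = []
      for it in range(20000):
          proposal = current + rng.normal(0.0, 0.02, n_layers)   # random-walk step
          logL_prop = log_likelihood(proposal)
          if np.log(rng.uniform()) < logL_prop - logL:           # Metropolis acceptance
              current, logL = proposal, logL_prop
          if it > 5000 and it % 10 == 0:                         # keep post-burn-in samples
              samples.append(current.copy())

      posterior = np.array(samples)
      print("posterior mean Vs:", posterior.mean(axis=0).round(2))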

  2. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, and 2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms, the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive, quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) relative to discrete LiDAR data, and the parameter uncertainty of these end products obtained from the different methods. This study was conducted at three study sites that span diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps applied to the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, whereas the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
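
    The following is a minimal 1-D Richardson-Lucy deconvolution sketch in Python (numpy and scipy only), applied to a synthetic two-return waveform blurred by a Gaussian system response; it illustrates the deconvolution step discussed above but is not the Gold algorithm or the authors' processing chain, and the pulse shapes and iteration count are assumptions.

      import numpy as np
      from scipy.signal import find_peaks

      def richardson_lucy_1d(observed, psf, n_iter=50, eps=1e-12):
          """Basic 1-D Richardson-Lucy deconvolution (non-negative signals assumed)."""
          estimate = np.full_like(observed, observed.mean())
          psf_flipped = psf[::-1]
          for _ in range(n_iter):
              predicted = np.convolve(estimate, psf, mode="same")
              ratio = observed / (predicted + eps)
              estimate *= np.convolve(ratio, psf_flipped, mode="same")
          return estimate

      # synthetic waveform: two returns (canopy + ground) blurred by a Gaussian system response
      t = np.arange(200)
      truth = np.zeros_like(t, dtype=float)
      truth[80], truth[140] = 1.0, 0.6
      psf = np.exp(-0.5 * (np.arange(-20, 21) / 4.0) ** 2)
      psf /= psf.sum()
      waveform = np.convolve(truth, psf, mode="same")

      recovered = richardson_lucy_1d(waveform, psf, n_iter=200)
      peaks, _ = find_peaks(recovered, height=0.1 * recovered.max())
      print("recovered echo positions (samples):", peaks)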

  3. Use of the Kalman Filter for Aortic Pressure Waveform Noise Reduction

    PubMed Central

    Lu, Hsiang-Wei; Wu, Chung-Che; Aliyazicioglu, Zekeriya; Kang, James S.

    2017-01-01

    Clinical applications that require extraction and interpretation of physiological signals or waveforms are susceptible to corruption by noise or artifacts. Real-time hemodynamic monitoring systems are important for clinicians to assess the hemodynamic stability of surgical or intensive care patients by interpreting hemodynamic parameters generated by an analysis of aortic blood pressure (ABP) waveform measurements. Since hemodynamic parameter estimation algorithms often detect events and features from measured ABP waveforms to generate hemodynamic parameters, noise and artifacts in the ABP waveforms can severely distort the interpretation of hemodynamic parameters by these algorithms. In this article, we propose the use of the Kalman filter and the 4-element Windkessel model with static parameters (arterial compliance C, peripheral resistance R, aortic impedance r, and the inertia of blood L) to represent aortic circulation and generate accurate estimations of ABP waveforms through noise and artifact reduction. Results show that the Kalman filter can very effectively eliminate noise and generate a good estimate from the noisy ABP waveform based on the past state history. The power spectra of the measured ABP waveform and the synthesized ABP waveform show two similar harmonic frequencies. PMID:28611850
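
    As a hedged illustration of the filtering idea, the sketch below runs a scalar Kalman filter with a simple random-walk state model over a synthetic pressure-like waveform; the 4-element Windkessel state-space model described in the article is not implemented here, and the noise variances are assumed values.

      import numpy as np

      # Minimal scalar Kalman filter sketch for denoising a pressure-like waveform.
      # A full implementation would use the 4-element Windkessel model (C, R, r, L)
      # as the state-transition model; here a simple random-walk state is assumed.

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 4.0, 400)                       # ~4 s of data
      clean = 90 + 25 * np.abs(np.sin(np.pi * 1.2 * t))    # synthetic ABP-like signal (mmHg)
      measured = clean + rng.normal(0.0, 4.0, t.size)      # add measurement noise

      q, r = 0.5, 16.0          # process and measurement noise variances (assumed)
      x, p = measured[0], 1.0   # state estimate and its variance
      filtered = np.empty_like(measured)
      for k, z in enumerate(measured):
          p = p + q                     # predict (random-walk model)
          kgain = p / (p + r)           # Kalman gain
          x = x + kgain * (z - x)       # update with measurement z
          p = (1.0 - kgain) * p
          filtered[k] = x

      print("noise std before/after:",
            round(float(np.std(measured - clean)), 2),
            round(float(np.std(filtered - clean)), 2))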

  4. Inference of multi-Gaussian property fields by probabilistic inversion of crosshole ground penetrating radar data using an improved dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Hunziker, Jürg; Laloy, Eric; Linde, Niklas

    2016-04-01

    Deterministic inversion procedures can often explain field data, but they deliver only one final subsurface model that depends on the initial model and regularization constraints. This provides poor insight into the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models honoring the available calibration data and prior information, allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM_(ZS) algorithm, a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations, while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to that along the major axis and (7) the shape parameter of the Matérn function, which essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We then show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
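
    A simplified sketch of the spectral dimensionality-reduction idea is shown below: a 2-D Gaussian random field is generated from a reduced set of Fourier coefficients scaled by an isotropic Matérn-like power spectrum. The anisotropy parameters and the DREAM_(ZS) sampler itself are omitted, and all parameter values are illustrative assumptions rather than those of the study.

      import numpy as np

      # Sketch of the spectral dimensionality-reduction idea: a 2-D Gaussian random
      # field is built from a reduced set of random Fourier coefficients scaled by an
      # isotropic (Matern-like) power spectrum. Anisotropy and the DREAM_ZS sampler
      # used in the study are omitted; parameter values are illustrative only.

      rng = np.random.default_rng(2)
      nx, nz = 100, 50                      # grid (5000 cells, as in the abstract)
      kx = np.fft.fftfreq(nx)[:, None]
      kz = np.fft.fftfreq(nz)[None, :]
      k = np.sqrt(kx**2 + kz**2)

      integral_scale = 10.0                 # geostatistical parameter (cells)
      nu = 1.0                              # Matern shape parameter
      power = (1.0 + (2 * np.pi * integral_scale * k) ** 2) ** (-(nu + 1.0))

      # reduced parameterization: keep only the lowest-wavenumber coefficients
      coeffs = rng.normal(size=(nx, nz)) + 1j * rng.normal(size=(nx, nz))
      mask = k <= np.sort(k.ravel())[250]   # ~250 retained spectral unknowns
      field = np.real(np.fft.ifft2(coeffs * np.sqrt(power) * mask))
      field = (field - field.mean()) / field.std()

      mean_perm, std_perm = 5.0, 0.3        # mean and std of relative permittivity (assumed)
      permittivity = mean_perm + std_perm * field
      print(permittivity.shape,
            round(float(permittivity.min()), 2), round(float(permittivity.max()), 2))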

  5. An accurate and computationally efficient algorithm for ground peak identification in large footprint waveform LiDAR data

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei; Mountrakis, Giorgos

    2014-09-01

    Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploiting the full potential of waveform datasets. In the current study, an accurate and computationally efficient algorithm, called the Filtering and Clustering Algorithm (FICA), was developed for ground peak identification. The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over central New York. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested on five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed areas) and showed more accurate results than existing algorithms. More specifically, compared with Gaussian decomposition (GD), the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency than existing methods. FICA's computational and accuracy advantage results from the adopted multi-scale signal processing procedures, which concentrate on local portions of the signal, as opposed to Gaussian decomposition, which uses a curve-fitting strategy applied to the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
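
    The sketch below illustrates the two ingredients named for FICA, multi-scale second-derivative filtering to collect candidate peaks and k-means clustering of those candidates to isolate the ground-peak group, on a synthetic two-return waveform; the thresholds, scales and cluster count are assumptions, and this is not the published FICA implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d
      from scipy.signal import find_peaks
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      t = np.arange(400, dtype=float)
      waveform = (1.0 * np.exp(-0.5 * ((t - 150) / 12) ** 2)    # canopy return
                  + 0.7 * np.exp(-0.5 * ((t - 300) / 6) ** 2)   # ground return
                  + rng.normal(0.0, 0.01, t.size))

      candidates = []
      for sigma in (2, 4, 8):                                   # multi-scale filtering
          smoothed = gaussian_filter1d(waveform, sigma)
          curvature = -np.gradient(np.gradient(smoothed))       # negative 2nd derivative
          peaks, _ = find_peaks(curvature, height=0.002)
          candidates.extend(peaks.tolist())

      candidates = np.array(candidates, dtype=float).reshape(-1, 1)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(candidates)
      ground_cluster = labels[np.argmax(candidates[:, 0])]      # cluster of the latest peak
      ground_peak = candidates[labels == ground_cluster].mean()
      print("estimated ground-peak sample:", round(float(ground_peak), 1))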

  6. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  7. Lithospheric layering in the North American craton revealed by including Short Period Constraints in Full Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2017-12-01

    Recent receiver function studies of the North American craton suggest the presence of significant layering within the cratonic lithosphere, with significant lateral variations in the depth of the velocity discontinuities. These structural boundaries have been confirmed recently using a transdimensional Markov chain Monte Carlo (TMCMC) approach that inverts surface wave dispersion data and converted phases simultaneously (Calò et al., 2016; Roy and Romanowicz, 2017). The lateral resolution of upper mantle structure can be improved with a high density of broadband seismic stations, or with a sparse network using full waveform inversion based on numerical wavefield computation methods such as the Spectral Element Method (SEM). However, inverting for discontinuities with strong topography, such as MLDs or the LAB, presents challenges in an inversion framework, both computationally, due to the short periods required, and from the point of view of the stability of the inversion. To overcome these limitations and to improve the resolution of layering in the upper mantle, we are developing a methodology that combines full waveform inversion tomography with information provided by short-period seismic observables. We have extended the 30 1D radially anisotropic shear velocity profiles of Calò et al. (2016) to several other stations, using a recent shear velocity model (Clouzet et al., 2017) as a constraint in the modeling. These 1D profiles, which include both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth), are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built by 1) homogenizing the layered 1D models and 2) interpolating between the 1D smooth profiles and the model of Clouzet et al. (2017), resulting in a smooth 3D starting model. Waveforms used in the inversion are filtered at periods longer than 30 s. We use the SEM code "RegSEM" for forward computations and a quasi-Newton inversion approach in which kernels are computed using normal mode perturbation theory. The resulting volumetric velocity perturbations around the homogenized starting model are then added to the discontinuous 3D starting model by dehomogenizing the model. We present here the first results of this approach for refining structure in the North American continent.

  8. Combining high fidelity simulations and real data for improved small-footprint waveform lidar assessment of vegetation structure (Invited)

    NASA Astrophysics Data System (ADS)

    van Aardt, J. A.; Wu, J.; Asner, G. P.

    2010-12-01

    Our understanding of vegetation complexity and biodiversity, from a remote sensing perspective, has evolved from 2D species diversity to also include 3D vegetation structural diversity. Attempts at using image-based approaches for structural assessment have met with reasonable success, but 3D remote sensing technologies, such as radar and light detection and ranging (lidar), are arguably more adept at sensing vegetation structure. While radar-derived structure metrics tend to break down at high biomass levels, novel waveform lidar systems present us with new opportunities for detailed and scalable structural characterization of vegetation. These sensors digitize the entire backscattered energy profile at high spatial and vertical resolutions, often at off-nadir angles. Research teams at Rochester Institute of Technology (RIT) and the Carnegie Institution for Science have been using airborne data from the Carnegie Airborne Observatory (CAO) to assess vegetation structure and variation in savanna ecosystems in and around Kruger National Park, South Africa. It quickly became evident that (i) pre-processing of small-footprint waveform data is a critical step prior to testing scientific hypotheses, (ii) a number of assumptions about how vegetation structure is expressed in these 3D signals need to be evaluated, and, very importantly, (iii) we need to re-evaluate our linkages between coarse in-field measurements, e.g., volume, biomass, and leaf area index (LAI), and metrics derived from waveform lidar. Research has progressed to the stage where we have evaluated various pre-processing steps, e.g., deconvolution via the Wiener filter, Richardson-Lucy, and non-negative least squares algorithms, and the coupling of waveform voxels to tree structure in a simulation environment. This was done in the MODTRAN-based Digital Imaging and Remote Sensing Image Generation (DIRSIG) simulation environment, developed at RIT. We generated "truth" cross-section datasets of detailed virtual trees in this environment and evaluated inversion approaches to tree structure estimation. Various outgoing pulse widths, tree structures, and a noise component were included as part of the simulation effort. Results have shown, for example, that the Richardson-Lucy algorithm outperforms other approaches in terms of retrieval of known structural information and that our assumption regarding the position of the ground surface needs re-evaluation, and they have shed light on herbaceous biomass and waveform interactions and the impact of outgoing pulse width on assessments. These efforts have gone a long way toward providing a solid foundation for analysis and interpretation of actual waveform data from the savanna study area. We expect that new knowledge with respect to waveform-target interactions from these simulations will also aid efforts to reconstruct 3D trees from real data and better describe the associated structural diversity. Results will be presented at the conference.

  9. Receiver function HV ratio: a new measurement for reducing non-uniqueness of receiver function waveform inversion

    NASA Astrophysics Data System (ADS)

    Chong, Jiajun; Chu, Risheng; Ni, Sidao; Meng, Qingjun; Guo, Aizhi

    2018-02-01

    It is known that a receiver function provides relatively weak constraints on absolute seismic wave velocity, and joint inversion of the receiver function with surface wave dispersion has been widely applied to reduce the trade-off of velocity with interface depth. However, some studies indicate that the receiver function itself is capable of determining the absolute shear-wave velocity. In this study, we propose measuring the receiver function HV ratio, which takes advantage of the amplitude information of the receiver function to constrain the shear-wave velocity. Numerical analysis indicates that the receiver function HV ratio is sensitive to the average shear-wave velocity in the depth range it samples and can help reduce the non-uniqueness of receiver function waveform inversion. A joint inversion scheme has been developed, and both synthetic tests and a real data application prove the feasibility of the joint inversion.

  10. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
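
    A hedged sketch of the deconvolution step is given below: a source time function is recovered from a synthetic recorded waveform and a modeled Green's function by water-level deconvolution in the frequency domain. The 3-D finite-difference Green's functions and the yield scaling used in the study are not reproduced; all signals and constants here are placeholders.

      import numpy as np

      rng = np.random.default_rng(4)
      n, dt = 1024, 0.01
      t = np.arange(n) * dt
      green = np.zeros(n)
      green[100] = 1.0                                             # direct arrival
      green[160] = -0.4                                            # reflected arrival
      stf_true = np.exp(-0.5 * ((t - 1.0) / 0.05) ** 2)            # blast-like pulse
      record = np.convolve(stf_true, green)[:n] + rng.normal(0, 0.01, n)

      G = np.fft.rfft(green)
      D = np.fft.rfft(record)
      water = 0.05 * np.max(np.abs(G)) ** 2                        # water-level stabilization
      stf_est = np.fft.irfft(D * np.conj(G) / (np.abs(G) ** 2 + water), n)

      print("true/estimated peak time (s):",
            t[np.argmax(stf_true)], t[np.argmax(stf_est)])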

  11. Inversion for slip distribution using teleseismic P waveforms: North Palm Springs, Borah Peak, and Michoacan earthquakes

    USGS Publications Warehouse

    Mendoza, C.; Hartzell, S.H.

    1988-01-01

    We have inverted the teleseismic P waveforms recorded by stations of the Global Digital Seismograph Network for the 8 July 1986 North Palm Springs, California, the 28 October 1983 Borah Peak, Idaho, and the 19 September 1985 Michoacan, Mexico, earthquakes to recover the distribution of slip on each of the faults, using a point-by-point inversion method with smoothing and positivity constraints. Results of the inversion indicate that the Global Digital Seismograph Network data are useful for deriving fault dislocation models for moderate to large events. However, a wide range of frequencies is necessary to infer the distribution of slip on the earthquake fault. Although the long-period waveforms define the size (dimensions and seismic moment) of the earthquake, data at shorter periods provide additional constraints on the variation of slip on the fault. The dislocation models obtained for all three earthquakes are consistent with a heterogeneous rupture process in which failure is controlled largely by the size and location of high-strength asperity regions.

  12. Waveform inversion of acoustic waves for explosion yield estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Rodgers, A. J.

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.

  13. Improved source inversion from joint measurements of translational and rotational ground motions

    NASA Astrophysics Data System (ADS)

    Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.

    2017-12-01

    Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, sparse station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce the non-uniqueness of point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. We thus avoid the subjective selection of the most reliable solution according to the lowest misfit or some other constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly; depth-dependent components in particular show significant improvement. In addition to synthetic data for full station networks, we also tested sparse-network and single-station cases.

  14. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω-γ source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
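
    As a rough illustration of the spectral fitting step, the sketch below fits an ω-γ source spectrum model (low-frequency plateau, corner frequency and high-frequency falloff γ) to a synthetic source spectrum with scipy's differential evolution, which stands in for the genetic-algorithm scheme of the study; the model form, bounds and noise level are assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(5)
      f = np.logspace(-1, 1.5, 80)                     # frequency band (Hz)

      def model(params, f):
          omega0, fc, gamma = params                   # low-frequency plateau, corner, falloff
          return omega0 / (1.0 + (f / fc) ** gamma)

      true_params = (2.0e16, 3.0, 2.4)                 # gamma > 2, as found in the study
      obs = model(true_params, f) * np.exp(rng.normal(0.0, 0.1, f.size))

      def misfit(params):
          return np.sum((np.log(obs) - np.log(model(params, f))) ** 2)

      bounds = [(1e15, 1e17), (0.5, 10.0), (1.5, 3.5)]
      result = differential_evolution(misfit, bounds, seed=0, tol=1e-8)
      print("omega0, fc, gamma =", result.x)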

  15. Fault Slip Distribution and Optimum Sea Surface Displacement of the 2017 Tehuantepec Earthquake in Mexico (Mw 8.2) Estimated from Tsunami Waveforms

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Satake, K.; Mulia, I. E.

    2017-12-01

    An intraplate normal fault earthquake (Mw 8.2) occurred on 8 September 2017 in the Tehuantepec seismic gap of the Middle America Trench. The submarine earthquake generated a tsunami which was recorded by coastal tide gauges and offshore DART buoys. We used the tsunami waveforms recorded at 16 stations to estimate the fault slip distribution and an optimum sea surface displacement for the earthquake. A steep fault dipping to the northeast, with a strike of 315°, dip of 73° and rake of -96° based on the USGS W-phase moment tensor solution, was assumed for the slip inversion. To independently estimate the sea surface displacement without assuming earthquake fault parameters, we used B-spline functions as the unit sources. The distribution of the unit sources was optimized by a Genetic Algorithm - Pattern Search (GA-PS) method. Tsunami waveform inversion resolves a spatially compact region of large slip (4-10 m) with a dimension of 100 km along strike and 80 km along dip in the depth range between 40 km and 110 km. The seismic moment calculated from the fault slip distribution, with an assumed rigidity of 6 × 10^10 N m^-2, is 2.46 × 10^21 Nm (Mw 8.2). The optimum displacement model suggests that the sea surface was uplifted by up to 0.5 m and subsided by up to 0.8 m. The deep location of the large fault slip may be the cause of such small sea surface displacements. The simulated tsunami waveforms from the optimum sea surface displacement reproduce the observations better than those from the fault slip distribution; the normalized root mean square misfit for the sea surface displacement is 0.89, while that for the fault slip distribution is 1.04. We simulated the tsunami propagation using the optimum sea surface displacement model. Large tsunami amplitudes up to 2.5 m were predicted inside and around a lagoon located between Salina Cruz and Puerto Chiapas. Figure 1. a) Sea surface displacement for the 2017 Tehuantepec earthquake estimated from tsunami waveforms. b) Map of simulated maximum tsunami amplitude and comparison between observed (blue circles) and simulated (red circles) maximum tsunami amplitudes along the coast.

  16. High resolution aquifer characterization using crosshole GPR full-waveform tomography

    NASA Astrophysics Data System (ADS)

    Gueting, N.; Vienken, T.; Klotzsche, A.; Van Der Kruk, J.; Vanderborght, J.; Caers, J.; Vereecken, H.; Englert, A.

    2016-12-01

    Limited knowledge about the spatial distribution of aquifer properties typically constrains our ability to predict subsurface flow and transport. Here, we investigate the value of high-resolution full-waveform inversion of cross-borehole ground penetrating radar (GPR) data for aquifer characterization. By stitching together GPR tomograms from multiple adjacent crosshole planes, we are able to image, with decimeter-scale resolution, the dielectric permittivity and electrical conductivity of an alluvial aquifer along cross-sections of 50 m length and 10 m depth. A logistic regression model is employed to predict the spatial distribution of lithological facies on the basis of the GPR results. Vertical profiles of porosity and hydraulic conductivity from direct-push, flowmeter and grain size data suggest that the GPR-predicted facies classification is meaningful with regard to porosity and hydraulic conductivity, even though the distributions of individual facies show some overlap and the absolute hydraulic conductivities from the different methods (direct-push, flowmeter, grain size) differ by up to approximately one order of magnitude. Comparison of the GPR-predicted facies architecture with tracer test data suggests that the plume splitting observed in a tracer experiment was caused by a sand layer of low hydraulic conductivity with a thickness of only a few decimeters. Because this sand layer is identified by GPR full-waveform inversion but not by conventional GPR ray-based inversion, we conclude that the improvement in spatial resolution due to full-waveform inversion is crucial for detecting small-scale aquifer structures that are highly relevant for solute transport.

  17. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  18. Seismic waveform classification using deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Q.; Allen, R. M.

    2017-12-01

    MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It has an Artificial Neural Network (ANN) algorithm running on the phone to distinguish earthquake motion from human activities recorded by the on-board accelerometer. Once the ANN detects earthquake-like motion, it sends a 5-min chunk of acceleration data back to the server for further analysis. The time-series data collected contain both earthquake data and human activity data that the ANN confused with earthquakes. In this presentation, we will show the Convolutional Neural Network (CNN) we built, under the umbrella of supervised learning, to identify the earthquake waveforms. The recorded waveforms can easily be treated as images, and by taking advantage of the power of CNNs in processing images, we achieved a very high success rate in selecting the earthquake waveforms. Since there are many more non-earthquake waveforms than earthquake waveforms, we also built an anomaly detection algorithm using the CNN. Both methods can easily be extended to other waveform classification problems.
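
    A minimal sketch of such a waveform classifier is given below as a small 1-D convolutional network in PyTorch operating on three-component acceleration windows; the architecture, window length and sampling rate are illustrative assumptions and not the network actually deployed in MyShake.

      import torch
      import torch.nn as nn

      class WaveformCNN(nn.Module):
          def __init__(self, n_channels=3, n_samples=2500):   # 3-component, ~25 s at 100 Hz
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
                  nn.MaxPool1d(4),
                  nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                  nn.MaxPool1d(4),
                  nn.AdaptiveAvgPool1d(8),
              )
              self.classifier = nn.Linear(32 * 8, 2)           # earthquake / not earthquake

          def forward(self, x):                                 # x: (batch, channels, samples)
              z = self.features(x)
              return self.classifier(z.flatten(1))

      model = WaveformCNN()
      batch = torch.randn(4, 3, 2500)                           # dummy acceleration windows
      logits = model(batch)
      loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
      loss.backward()                                           # one optimizer step would follow
      print(logits.shape, float(loss))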

  19. Towards a Full Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2015-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance wavefield diffusivity and equipartitioning, and zero attenuation, which are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion, we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution, and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.

  20. Duration of Tsunami Generation Longer than Duration of Seismic Wave Generation in the 2011 Mw 9.0 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Korenaga, M.; Kawaji, K.; Akiyama, S.

    2013-12-01

    We compare and evaluate the nature of tsunami generation and seismic wave generation during the 2011 Tohoku-Oki earthquake (hereafter TOH11) in terms of two types of moment rate functions, inferred from finite source imaging of tsunami waveforms and of seismic waveforms. Since the 1970s, the nature of "tsunami earthquakes" has been discussed in many studies (e.g. Kanamori, 1972; Kanamori and Kikuchi, 1993; Kikuchi and Kanamori, 1995; Ide et al., 1993; Satake, 1994), mostly based on analysis of seismic waveform data, in terms of the "slow" nature of tsunami earthquakes (e.g., the 1992 Nicaragua earthquake). Although TOH11 is not necessarily understood as a tsunami earthquake, it is one of the historical earthquakes that simultaneously generated large seismic waves and a large tsunami. TOH11 was also observed by both the seismic observation network and the tsunami observation network around the Japanese islands. Therefore, for the purpose of analyzing the nature of tsunami generation, we utilize tsunami waveform data as much as possible. In our previous studies of TOH11 (Fujihara et al., 2012a; Fujihara et al., 2012b), we inverted tsunami waveforms at the GPS wave gauges of NOWPHAS to image the spatio-temporal slip distribution. The "temporal" nature of our tsunami source model is generally consistent with other tsunami source models (e.g., Satake et al., 2013). For seismic waveform inversion based on a 1-D structure, we inverted broadband seismograms at GSN stations using the teleseismic body-wave inversion scheme of Kikuchi and Kanamori (2003). For seismic waveform inversion considering the inhomogeneous internal structure, we also inverted strong motion seismograms at K-NET and KiK-net stations based on 3-D Green's functions (Fujihara et al., 2013a; Fujihara et al., 2013b). The gross "temporal" nature of our seismic source models is generally consistent with other seismic source models (e.g., Yoshida et al., 2011; Ide et al., 2011; Yagi and Fukahata, 2011; Suzuki et al., 2011). The comparison of the two types of moment rate functions, inferred from finite source imaging of tsunami waveforms and seismic waveforms, suggests that there was a time period common to both seismic wave generation and tsunami generation, followed by a time period unique to tsunami generation. We note that a comparison of the absolute values of moment rates between the tsunami waveform inversion and the seismic waveform inversion is not very meaningful, because of the general ambiguity of the rigidity value of each subfault in the fault region (we assume the rigidity value of 30 GPa of Yoshida et al. (2011)). Considering this, we also evaluated the normalized moment rate functions, which does not change the general features of the two moment rate functions in terms of duration. Furthermore, the results suggest that the tsunami generation process apparently took more time than the seismic wave generation process did. Tsunamis can be generated even by "extra" motions resulting from various suggested abnormal mechanisms. These extra motions may account for the larger-scale tsunami generation than expected from the magnitude level of the seismic ground motion, and for the longer duration of the tsunami generation process.

  1. K-mean clustering algorithm for processing signals from compound semiconductor detectors

    NASA Astrophysics Data System (ADS)

    Tada, Tsutomu; Hitomi, Keitaro; Wu, Yan; Kim, Seong-Yun; Yamazaki, Hiromichi; Ishii, Keizo

    2011-12-01

    The K-means clustering algorithm was employed for processing signal waveforms from TlBr detectors. The signal waveforms were classified based on their shapes, which reflect the charge collection process in the detector. The classified signal waveforms were then processed individually to suppress the pulse height variation caused by charge collection loss. The energy resolution obtained for a 137Cs spectrum measured with a 0.5 mm thick TlBr detector was 1.3% FWHM when 500 clusters were employed.
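
    The sketch below illustrates the clustering idea on synthetic preamplifier-like pulses: waveforms are grouped by shape with k-means and a per-cluster gain correction is applied to the pulse heights; the waveform model, cluster count and correction scheme are assumptions, not the published processing.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)
      t = np.arange(200, dtype=float)

      def pulse(rise):                         # step-like preamplifier pulse, variable rise time
          return 1.0 - np.exp(-np.clip(t - 20, 0, None) / rise)

      rises = rng.uniform(3.0, 30.0, 500)      # rise time reflects interaction depth
      waveforms = np.array([pulse(r) * (1.0 - 0.003 * r) for r in rises])   # slower = more loss
      waveforms += rng.normal(0.0, 0.01, waveforms.shape)

      # normalize shapes before clustering so the grouping reflects shape, not height
      shapes = waveforms / waveforms[:, -1][:, None]
      labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(shapes)

      heights = waveforms[:, -1]
      corrected = heights.copy()
      for c in np.unique(labels):
          sel = labels == c
          corrected[sel] *= heights.mean() / heights[sel].mean()   # per-cluster gain correction

      print("height spread before/after:",
            round(float(heights.std()), 4), round(float(corrected.std()), 4))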

  2. Towards full waveform ambient noise inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure are quantified using Hessian-vector products.

  3. Interparameter trade-off quantification and reduction in isotropic-elastic full-waveform inversion: synthetic experiments and Hussar land data set application

    NASA Astrophysics Data System (ADS)

    Pan, Wenyong; Geng, Yu; Innanen, Kristopher A.

    2018-05-01

    The problem of inverting for multiple physical parameters in the subsurface using seismic full-waveform inversion (FWI) is complicated by interparameter trade-offs arising from inherent ambiguities between different physical parameters. Parameter resolution is often characterized using scattering radiation patterns, but these neglect some important aspects of interparameter trade-off. More general analysis and mitigation of interparameter trade-off in isotropic-elastic FWI is possible through judiciously chosen multiparameter Hessian matrix-vector products. We show that products of multiparameter Hessian off-diagonal blocks with model perturbation vectors, referred to as interparameter contamination kernels, are central to the approach. We apply the multiparameter Hessian to various vectors designed to provide information regarding the strengths and characteristics of interparameter contamination, both locally and within the whole volume. With numerical experiments, we observe that S-wave velocity perturbations introduce strong contaminations into density and phase-reversed contaminations into P-wave velocity, but themselves experience only limited contamination from other parameters. Based on these findings, we introduce a novel strategy to mitigate the influence of interparameter trade-off with approximate contamination kernels. Furthermore, we recommend that the local spatial and interparameter trade-offs of the inverted models be quantified using extended multiparameter point spread functions (EMPSFs) obtained with a preconditioned conjugate-gradient algorithm. Compared to traditional point spread functions, the EMPSFs appear to provide more accurate measurements for resolution analysis, by de-blurring the estimates, scaling magnitudes and mitigating interparameter contamination. Approximate eigenvalue volumes constructed with a stochastic probing approach are proposed to evaluate the resolution of the inverted models within the whole model. With a synthetic Marmousi model example and a land seismic field dataset from Hussar, Alberta, Canada, we confirm that the new inversion strategy suppresses interparameter contamination effectively and provides more reliable density estimates in isotropic-elastic FWI than the standard simultaneous inversion approach.

  4. Full waveform inversion using envelope-based global correlation norm

    NASA Astrophysics Data System (ADS)

    Oh, Ju-Won; Alkhalifah, Tariq

    2018-05-01

    To increase the feasibility of full waveform inversion on real data, we suggest a new objective function, defined as the global correlation of the envelopes of modelled and observed data. The envelope-based global correlation norm retains the advantage of envelope inversion, which generates artificial low-frequency information and thus provides the possibility of recovering long-wavelength structure at an early stage. In addition, it maintains the advantage of the global correlation norm, which reduces the sensitivity of the misfit to amplitude errors, so that the performance of inversion on real data can be enhanced when the exact source wavelet is not available and more complex physics are ignored. Through a synthetic example for the 2-D SEG/EAGE overthrust model with an inaccurate source wavelet, we compare the performance of four different approaches: least-squares waveform inversion, least-squares envelope inversion, the global correlation norm and the envelope-based global correlation norm. Finally, we apply the envelope-based global correlation norm to 3-D Ocean Bottom Cable (OBC) data from the North Sea. The envelope-based global correlation norm captures the strong reflections from the high-velocity caprock and generates artificial low-frequency reflection energy that helps us recover the long-wavelength structure of the model domain in the early stages. From this long-wavelength model, the conventional global correlation norm is sequentially applied to invert for higher-resolution features of the model.
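
    A minimal sketch of the objective function is given below: envelopes are computed with the Hilbert transform, normalized to unit energy and correlated, and the result is compared with the plain waveform correlation for a synthetic trace pair with a time shift and an amplitude error; the adjoint source needed for gradient computation is not shown and all signal parameters are assumptions.

      import numpy as np
      from scipy.signal import hilbert

      def envelope(trace):
          return np.abs(hilbert(trace))

      def envelope_global_correlation(d_obs, d_syn):
          e_obs = envelope(d_obs)
          e_syn = envelope(d_syn)
          e_obs = e_obs / np.linalg.norm(e_obs)        # unit-energy normalization removes
          e_syn = e_syn / np.linalg.norm(e_syn)        # sensitivity to absolute amplitudes
          return float(np.dot(e_obs, e_syn))           # maximize (or minimize its negative)

      # synthetic example: same wavelet, shifted arrival and wrong amplitude
      t = np.linspace(0.0, 4.0, 1000)
      wavelet = lambda t0: np.exp(-60 * (t - t0) ** 2) * np.cos(2 * np.pi * 8 * (t - t0))
      d_obs = 1.0 * wavelet(1.5)
      d_syn = 0.3 * wavelet(1.7)                       # amplitude error + time shift

      wcorr = float(np.dot(d_obs, d_syn) / (np.linalg.norm(d_obs) * np.linalg.norm(d_syn)))
      print("waveform correlation:", round(wcorr, 3))
      print("envelope correlation:", round(envelope_global_correlation(d_obs, d_syn), 3))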

  5. Rapid kinematic finite source inversion for Tsunami Early Warning using high-rate GNSS data

    NASA Astrophysics Data System (ADS)

    Chen, K.; Liu, Z.; Song, Y. T.

    2017-12-01

    Recently, Global Navigation Satellite System (GNSS) data have been used for rapid earthquake source inversion towards tsunami early warning. In practice, two approaches, i.e., static finite source inversion based on permanent co-seismic offsets and kinematic finite source inversion using high-rate (>= 1 Hz) co-seismic displacement waveforms, are often employed to fulfill the task. The static inversion is relatively easy to implement and does not require additional constraints on rupture velocity, duration, and temporal variation. However, since most GNSS receivers are deployed onshore, on one side of the subduction fault, static finite source inversion with GNSS has very limited resolution of near-trench fault slip. On the other hand, high-rate GNSS displacement waveforms, which contain the timing information of the earthquake rupture explicitly and the static offsets implicitly, have the potential to improve near-trench resolution by reconciling with the depth-dependent megathrust rupture behavior. In this contribution, we assess the performance of rapid kinematic finite source inversion using high-rate GNSS for three selected historical tsunamigenic cases: the 2010 Mentawai, 2011 Tohoku and 2015 Illapel events. The 2010 Mentawai case is a typical tsunami earthquake, with most slip concentrated near the trench. The static inversion has little resolution there and incorrectly puts slip at greater depth (>10 km). In contrast, although the recorded GNSS displacement waveforms are deficient in high-frequency energy, the kinematic source inversion recovers a shallow slip patch (depth less than 6 km) and tsunami runups are predicted quite reasonably. For the other two events, the slip distributions from the kinematic and static inversions show similar characteristics and comparable tsunami scenarios, which may be related to the dense GNSS networks and the behavior of the ruptures. Acknowledging the complexity of kinematic source inversion in real time, we adopt the back-projection approach to provide constraints on rupture velocity.

  6. Tsunami waveform inversion of the 2007 Bengkulu, southern Sumatra earthquake

    NASA Astrophysics Data System (ADS)

    Fujii, Y.; Satake, K.

    2007-12-01

    We have performed a tsunami waveform inversion for the 2007 Bengkulu, southern Sumatra earthquake of September 12, 2007 (4.520°S, 101.374°E, Mw = 8.4 at 11:10:26 UTC according to the USGS), and found that the large slips were located on the deeper part (>20 km) of the fault plane, more than 100 km from the trench axis. The deep slip might have contributed to the relatively small tsunami for the earthquake's size. The largest slips, of more than 6 m, were located beneath the Pagai Islands, about 100-200 km northwest of the epicenter. The obtained slip distribution yields a total seismic moment of 3.6 × 10^21 Nm (Mw = 8.3). The tsunami generated by this earthquake was recorded at many tide gauge stations located in and around the Indian Ocean. The DART system installed in the deep ocean and maintained by the Thai Meteorological Department (TMD) also captured this tsunami. We downloaded the tsunami waveforms at 16 stations from the University of Hawaii Sea Level Center (UHSLC) and National Oceanic & Atmospheric Administration (NOAA) web sites. The observed tsunami records indicate that the tsunami amplitudes were less than several tens of cm at most stations, around 1 m at Padang, the station nearest to the source, and a few cm at the DART station. For the tsunami waveform inversion, we divided the source area (length: 250 km, width: 200 km) into 20 subfaults. Tsunami waveforms from each subfault (50 km × 50 km), i.e. the Green's functions, were calculated by numerically solving the linear shallow-water long-wave equations. We adopted the focal mechanism of the Global CMT solution (strike: 327°, dip: 12°, rake: 114°) for each subfault and assumed a rise time of 1 min. The computed tsunami waveforms from the estimated slip distribution explain the observed waveforms at most of the tide gauges and the DART station.
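
    The linear inversion step described above can be sketched as a non-negative least-squares problem, with observed waveforms modeled as a sum of subfault Green's functions scaled by slip; in the sketch below the shallow-water simulations that would supply the Green's functions are replaced by random placeholders, and the rigidity and subfault size used for the moment calculation are illustrative assumptions.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(7)
      n_subfaults = 20
      n_stations, n_samples = 16, 300
      n_data = n_stations * n_samples

      # G[:, j] stacks the waveforms at all stations for unit slip on subfault j
      G = rng.normal(0.0, 1.0, (n_data, n_subfaults))
      slip_true = np.zeros(n_subfaults)
      slip_true[[6, 7, 11]] = [6.0, 4.5, 3.0]              # a few metres of slip
      d_obs = G @ slip_true + rng.normal(0.0, 0.5, n_data)  # add observation noise

      slip_est, residual = nnls(G, d_obs)                   # positivity-constrained inversion

      rigidity, area = 3.0e10, 50e3 * 50e3                  # Pa (assumed), 50 km x 50 km subfaults
      moment = rigidity * area * slip_est.sum()
      mw = (2.0 / 3.0) * (np.log10(moment) - 9.1)
      print("estimated slip (m):", slip_est.round(1))
      print("Mw from estimated slip: %.2f" % mw)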

  7. Algorithms used in the Airborne Lidar Processing System (ALPS)

    USGS Publications Warehouse

    Nagle, David B.; Wright, C. Wayne

    2016-05-23

    The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data—digitized laser-return waveforms, position, and attitude data—to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
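
    As a rough illustration of the first of these detectors, a centroid-style range estimate can be computed as the power-weighted mean sample time of the return above a noise floor. This generic sketch is not the ALPS implementation; the noise-floor estimate and example numbers are assumptions.

    ```python
    import numpy as np

    def centroid_range(waveform, sample_interval_ns, noise_floor=None):
        """Estimate target range from a digitized laser-return waveform by
        centroid analysis: the power-weighted mean sample time of the return
        above a noise floor, converted to one-way range in metres.
        """
        w = np.asarray(waveform, dtype=float)
        if noise_floor is None:
            lead = w[: max(2, w.size // 4)]            # assume pulse-free leading samples
            noise_floor = lead.mean() + 3.0 * lead.std()
        above = np.clip(w - noise_floor, 0.0, None)
        if above.sum() == 0.0:
            return None                                # no detectable return
        t = np.arange(w.size) * sample_interval_ns     # sample times in ns
        t_centroid = (above * t).sum() / above.sum()   # power-weighted mean time
        c = 0.299792458                                # speed of light, m per ns
        return 0.5 * c * t_centroid                    # two-way time -> range (m)

    print(centroid_range([1, 1, 2, 30, 80, 35, 3, 1, 1], sample_interval_ns=1.0))
    ```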

  8. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-04-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
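
    For readers unfamiliar with the basic measurement, the quantity being modelled is the inter-station correlation of continuous noise records. A generic frequency-domain correlation sketch is given below; it is independent of the Salvus-based modelling described in the abstract, and the sampling and lag window are arbitrary.

    ```python
    import numpy as np

    def noise_correlation(u1, u2, dt, max_lag_s):
        """Frequency-domain cross-correlation of two ambient-noise records.

        Under the usual assumptions (diffuse, equipartitioned wavefield) the
        result approximates a scaled inter-station Green function; the study
        above is precisely about relaxing that interpretation.
        """
        n = len(u1) + len(u2) - 1
        nfft = 1 << (n - 1).bit_length()               # next power of two
        U1 = np.fft.rfft(u1, nfft)
        U2 = np.fft.rfft(u2, nfft)
        cc = np.fft.irfft(U1 * np.conj(U2), nfft)
        cc = np.concatenate((cc[-(len(u2) - 1):], cc[:len(u1)]))  # negative, positive lags
        lags = (np.arange(cc.size) - (len(u2) - 1)) * dt
        keep = np.abs(lags) <= max_lag_s
        return lags[keep], cc[keep]

    # usage on random noise, standing in for two hour-long station records at 1 Hz
    rng = np.random.default_rng(1)
    lags, cc = noise_correlation(rng.normal(size=3600), rng.normal(size=3600),
                                 dt=1.0, max_lag_s=300)
    ```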

  9. Flexible kinematic earthquake rupture inversion of tele-seismic waveforms: Application to the 2013 Balochistan, Pakistan earthquake

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.

    2017-12-01

    Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, the kinematic rupture models published for the same earthquake are often different from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduced the uncertainty of Green's functions into tele-seismic waveform inversion, and showed that a stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the remaining problems in the inversion arises from the modeling error originating from uncertainty in the fault-model setting. Green's functions near the nodal planes of the focal mechanism are known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by the modeling error originating from the uncertainty of the fault model. We propose a new method that accounts for complexity in the fault geometry by additionally solving for the focal mechanism at each space knot. Since the solution of a finite source inversion becomes unstable as the flexibility of the model increases, we estimate a stable spatiotemporal distribution of focal mechanisms in the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. On the other hand, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane. These results show that the modeling error caused by simplifying the fault model is non-negligible in the tele-seismic waveform inversion of the 2013 Balochistan, Pakistan earthquake.

  10. Multi-Scale Peak and Trough Detection Optimised for Periodic and Quasi-Periodic Neuroscience Data.

    PubMed

    Bishop, Steven M; Ercole, Ari

    2018-01-01

    The reliable detection of peaks and troughs in physiological signals is essential to many investigative techniques in medicine and computational biology. Analysis of the intracranial pressure (ICP) waveform is a particular challenge due to multi-scale features, a changing morphology over time and signal-to-noise limitations. Here we present an efficient peak and trough detection algorithm that extends the scalogram approach of Scholkmann et al., and results in greatly improved algorithm runtime performance. Our improved algorithm (modified Scholkmann) was developed and analysed in MATLAB R2015b. Synthesised waveforms (periodic, quasi-periodic and chirp sinusoids) were degraded with white Gaussian noise to achieve signal-to-noise ratios down to 5 dB and were used to compare the performance of the original Scholkmann and modified Scholkmann algorithms. The modified Scholkmann algorithm has false-positive (0%) and false-negative (0%) detection rates identical to the original Scholkmann when applied to our test suite. Actual compute time for a 200-run Monte Carlo simulation over a multicomponent noisy test signal was 40.96 ± 0.020 s (mean ± 95%CI) for the original Scholkmann and 1.81 ± 0.003 s (mean ± 95%CI) for the modified Scholkmann, demonstrating the expected improvement in runtime complexity from [Formula: see text] to [Formula: see text]. The accurate interpretation of waveform data to identify peaks and troughs is crucial in signal parameterisation, feature extraction and waveform identification tasks. Modification of a standard scalogram technique has produced a robust algorithm with linear computational complexity that is particularly suited to the challenges presented by large, noisy physiological datasets. The algorithm is optimised through a single parameter and can identify sub-waveform features with minimal additional overhead, and is easily adapted to run in real time on commodity hardware.
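
    The multi-scale idea behind the original Scholkmann (AMPD) approach can be sketched as follows: build a local-maxima scalogram over window scales, pick the dominant scale, and keep samples that are maxima at every scale up to it. The simplified version below is illustrative only and retains the quadratic cost; the modified algorithm in the paper reorganises this computation to reach linear runtime.

    ```python
    import numpy as np

    def ampd_peaks(x, max_scale=None):
        """Automatic multiscale-based peak detection (AMPD), simplified.

        Builds a boolean local-maxima scalogram over window scales k = 1..L,
        picks the scale with the most local maxima, and keeps samples that are
        local maxima at every scale up to that one.
        """
        x = np.asarray(x, dtype=float)
        n = x.size
        L = max_scale or n // 2 - 1
        is_max = np.zeros((L, n), dtype=bool)
        for k in range(1, L + 1):                      # scale = comparison offset
            for i in range(k, n - k):
                is_max[k - 1, i] = x[i] > x[i - k] and x[i] > x[i + k]
        counts = is_max.sum(axis=1)                    # maxima per scale
        lam = int(np.argmax(counts)) + 1               # dominant scale
        return np.where(is_max[:lam].all(axis=0))[0]

    t = np.linspace(0, 20 * np.pi, 1000)
    x = np.sin(t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
    print(ampd_peaks(x))      # indices of the interior sinusoid crests
    ```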

  11. Source mechanism of the 2006 M5.1 Wen'an Earthquake determined from a joint inversion of local and teleseismic broadband waveform data

    NASA Astrophysics Data System (ADS)

    Huang, J.; Ni, S.; Niu, F.; Fu, R.

    2007-12-01

    On July 4th, 2006, a magnitude 5.1 earthquake occurred at Wen'an, ~100 km south of Beijing, and was felt in the Beijing metropolitan area. To better understand the regional tectonics, we have inverted local and teleseismic broadband waveform data to determine the focal mechanism of this earthquake. We selected waveform data from 9 stations of the recently installed Beijing metropolitan digital Seismic Network (BSN). These stations are located within 600 km and cover a good azimuthal range around the earthquake. To better fit the lower-amplitude P waveform, we employed two different weights for the P-wave and surface-wave arrivals, respectively. A grid search method was employed to find the strike, dip and slip of the earthquake that best fit the P and surface waveforms recorded on all three components (the tangential component of the P-wave arrivals was not used). Synthetic waveforms were computed with an F-K method. Two crustal velocity models were used in the synthetic calculation to reflect a rapid east-west transition in crustal structure observed by seismic and geological studies in the study area. The 3D grid search results in reasonable constraints on the fault geometry and the slip vector, with a less well determined focal depth. We therefore combined teleseismic waveform data from 8 stations of the Global Seismic Network in a joint inversion. Clearly identifiable depth phases (pP, sP) recorded at the teleseismic stations provided a better constraint on the resulting source depth. Results from the joint inversion indicate that the Wen'an earthquake is mainly a right-lateral strike-slip event (rake -150°) that occurred on a near-vertical (dip 80°), NNE-trending (strike 210°) fault. The estimated focal depth is ~14-15 km, and the moment magnitude is 5.1. The estimated fault geometry agrees well with the aftershock distribution and is consistent with the major fault systems in the area, which were developed under a NNE-SSW oriented compressional stress field. Key words: waveform modeling method, source mechanism, grid search method, cut and paste method, aftershock distribution
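
    The grid search itself is straightforward to sketch. In the snippet below, forward_waveforms is a hypothetical stand-in for the F-K synthetics, and the weights mirror the idea of up-weighting the lower-amplitude P waves relative to the surface waves; step sizes and weight values are assumptions.

    ```python
    import itertools
    import numpy as np

    def grid_search_mechanism(observed, forward_waveforms, w_p=2.0, w_s=1.0):
        """Grid search over (strike, dip, rake) minimising a weighted L2 misfit.

        `observed` is a dict with 'P' and 'surface' waveform arrays;
        `forward_waveforms(strike, dip, rake)` is a user-supplied (hypothetical)
        function returning synthetics in the same form, e.g. from an F-K code.
        """
        best = (None, np.inf)
        for strike, dip, rake in itertools.product(range(0, 360, 5),
                                                   range(5, 91, 5),
                                                   range(-180, 180, 5)):
            syn = forward_waveforms(strike, dip, rake)
            misfit = (w_p * np.sum((observed['P'] - syn['P']) ** 2) +
                      w_s * np.sum((observed['surface'] - syn['surface']) ** 2))
            if misfit < best[1]:
                best = ((strike, dip, rake), misfit)
        return best
    ```

    A coarse-to-fine refinement, re-running the search on a denser grid around the best coarse solution, mirrors the adaptive grid-spacing strategy the authors describe for the centroid location search.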

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedle, H; Matzel, E; Flanagan, M

    This report summarizes the data analysis achieved during Heather Bedle's eleven-week Technical Scholar internship at Lawrence Livermore National Labs during the early summer 2006. The work completed during this internship resulted in constraints on the crustal and upper mantle S-velocity structure in Northern Africa, the Mediterranean, the Middle East, and Europe, through the fitting of regional waveform data. This data extends current raypath coverage and will be included in a joint inversion along with data from surface wave group velocity measurements, S and P teleseismic arrival time data, and receiver function data to create an improved velocity model of the upper mantle in this region. The tectonic structure of the North African/Mediterranean/Europe/Middle Eastern study region is extremely heterogeneous. This region consists of, among others, stable cratons and platforms such as the West Africa Craton, and Baltica in Northern Europe; oceanic subduction zones throughout the Mediterranean Sea where the African and Eurasian plate collide; regions of continental collision as the Arabian Plate moves northward into the Turkish Plate; and rifting in the Red Sea, separating the Arabian and Nubian shields. With such diverse tectonic structures, many of the waveforms were difficult to fit. This is not unexpected as the waveforms are fit using an averaged structure. In many cases the raypaths encounter several tectonic features, complicating the waveform, and making it hard for the software to converge on a 1D average structure. Overall, the quality of the waveform data was average, with roughly 30% of the waveforms being discarded due to excessive noise that interfered with the frequency ranges of interest. An inversion for the 3D S-velocity structure of this region was also performed following the methodology of Partitioned Waveform Inversion (Nolet, 1990; Van der Lee and Nolet, 1997). The addition of the newly fit waveforms drastically extends the range of the model. The model now extends as far east in Africa to cover Chad and Niger, and reaches south to cover Zambia. The model is also stretched eastward to cover the eastern half of India, and northward to cover the southern portion of Scandinavia.

  13. Global and local waveform simulations using the VERCE platform

    NASA Astrophysics Data System (ADS)

    Garth, Thomas; Saleh, Rafiq; Spinuso, Alessandro; Gemund, Andre; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schlichtweg, Horst; Frank, Anton; Michelini, Alberto; Vilotte, Jean-Pierre; Rietbrock, Andreas

    2017-04-01

    In recent years the potential of full waveform inversion to increase the resolution of seismic imaging has been demonstrated on scales ranging from basins to continents. These techniques rely on harnessing the computational power of large supercomputers, running large parallel codes to simulate the seismic wave field in a three-dimensional geological setting. The VERCE platform is designed to make these full waveform techniques accessible to a far wider spectrum of the seismological community. The platform supports the two widely used spectral element simulation programs SPECFEM3D Cartesian and SPECFEM3D globe, allowing users to run a wide range of simulations. In the SPECFEM3D Cartesian implementation the user can run waveform simulations on a range of pre-loaded meshes and velocity models for specific areas, or upload their own velocity model and mesh. In the new SPECFEM3D globe implementation, the user will be able to select from a number of continent-scale model regions, or perform waveform simulations for the whole earth. Earthquake focal mechanisms can be downloaded within the platform, for example from the GCMT catalogue, or users can upload their own focal mechanism catalogue through the platform. The simulations can be run on a range of European supercomputers in the PRACE network. Once a job has been submitted and run through the platform, the simulated waveforms can be manipulated or downloaded for further analysis. The misfit between the simulated and recorded waveforms can then be calculated within the platform through three interoperable workflows covering raw-data access (FDSN) and caching, pre-processing, and finally misfit calculation. The last workflow makes use of the Pyflex analysis software. In addition, the VERCE platform can be used to produce animations of waveform propagation through the velocity model, and synthetic shakemaps. All these data products are made discoverable and re-usable thanks to the VERCE data and metadata management layer. We demonstrate the functionality of the VERCE platform with two use cases, one using the pre-loaded velocity model and mesh for the Maule area of Chile with the SPECFEM3D Cartesian workflow, and one showing the output of a global simulation using the SPECFEM3D globe workflow. It is envisioned that this tool will allow a much greater range of seismologists to access full waveform inversion tools, and will aid full waveform tomographic and source inversion, synthetic shakemap production and other full waveform applications in a wide range of tectonic settings.

  14. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    NASA Astrophysics Data System (ADS)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

    This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
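
    A minimal sketch of the first two steps, spline smoothing followed by derivative-based peak picking and a crude FWHM estimate, is given below using SciPy; it illustrates the general approach rather than the authors' implementation, and the smoothing parameter is left to the caller.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def waveform_features(t, amplitude, smoothing=None):
        """Fit a cubic smoothing spline to a lidar return and extract simple
        full-waveform features (peak locations, amplitudes, FWHM of the
        strongest peak)."""
        spline = UnivariateSpline(t, amplitude, k=3, s=smoothing)
        dense_t = np.linspace(t[0], t[-1], 10 * len(t))
        y = spline(dense_t)
        d1, d2 = spline.derivative(1)(dense_t), spline.derivative(2)(dense_t)
        # peaks: zero crossing of the first derivative with negative curvature
        idx = np.where((d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0))[0]
        peaks_t, peaks_a = dense_t[idx], y[idx]
        if idx.size:
            # crude FWHM: width of the region above half the strongest peak
            i = idx[np.argmax(peaks_a)]
            above = np.where(y >= y[i] / 2.0)[0]
            fwhm_main = dense_t[above[-1]] - dense_t[above[0]]
        else:
            fwhm_main = np.nan
        return {"n_peaks": idx.size, "peak_times": peaks_t,
                "amplitudes": peaks_a, "fwhm_main": fwhm_main}
    ```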

  15. Development of water level estimation algorithms using SARAL/Altika dataset and validation over the Ukai reservoir, India

    NASA Astrophysics Data System (ADS)

    Chander, Shard; Ganguly, Debojyoti

    2017-01-01

    Water level was estimated over the Ukai reservoir using the AltiKa radar altimeter onboard the SARAL satellite, with algorithms modified specifically for inland water bodies. The methodology was based on waveform classification, waveform retracking, and dedicated inland range-correction algorithms. The 40-Hz waveforms were classified based on linear discriminant analysis and a Bayesian classifier. Waveforms were retracked using the Brown, Ice-2, threshold, and offset center of gravity methods. Retracking algorithms were implemented on the full waveform and on subwaveforms (only one leading edge) to estimate the improvement in the retrieved range. European Centre for Medium-Range Weather Forecasts (ECMWF) operational and ECMWF re-analysis pressure fields, together with global ionosphere maps, were used to estimate the range corrections precisely. Microwave and optical images were used for estimating the extent of the water body and the altimeter track location. Four global positioning system (GPS) field trips were conducted on the same days as the SARAL passes using two dual-frequency GPS receivers. One GPS was mounted close to the dam in static mode and the other was used on a moving vehicle within the reservoir in kinematic mode. An in situ gauge dataset was provided by the Ukai dam authority for the time period January 1972 to March 2015. The altimeter-retrieved water levels were then validated against the GPS survey and the in situ gauge dataset. With a good selection of the virtual station (based on waveform classification and backscattering coefficient), the Ice-2 retracker and the subwaveform retracker both perform well, with an overall root-mean-square error <15 cm. The results support that the AltiKa dataset, owing to its smaller footprint and the sharp trailing edge of the Ka-band waveform, can be utilized for more accurate water level information over inland water bodies.
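
    Of the retrackers listed, the threshold retracker is the simplest to sketch: find the first leading-edge gate whose power exceeds a fraction of the peak and interpolate to a sub-gate position. The version below is generic; the noise-floor window, threshold fraction and nominal tracking gate are assumptions, not the study's settings.

    ```python
    import numpy as np

    def threshold_retrack(waveform, gate_spacing_m, threshold=0.5):
        """Threshold retracking of an altimeter waveform (generic sketch).

        Finds the first leading-edge gate whose power exceeds `threshold` times
        the peak power (after removing a thermal-noise floor estimated from the
        first gates) and interpolates to a sub-gate retracking position.  The
        returned value is the range correction relative to the nominal tracking
        gate, in metres.
        """
        w = np.asarray(waveform, dtype=float)
        noise = w[:5].mean()                      # thermal noise from early gates
        p = w - noise
        level = threshold * p.max()
        i = int(np.argmax(p >= level))            # first gate at/above the level
        if i == 0:
            gate = 0.0
        else:
            gate = (i - 1) + (level - p[i - 1]) / (p[i] - p[i - 1])
        nominal_gate = w.size // 2                # assumed on-board tracking gate
        return (gate - nominal_gate) * gate_spacing_m
    ```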

  16. Waveform analysis-guided treatment versus a standard shock-first protocol for the treatment of out-of-hospital cardiac arrest presenting in ventricular fibrillation: results of an international randomized, controlled trial.

    PubMed

    Freese, John P; Jorgenson, Dawn B; Liu, Ping-Yu; Innes, Jennifer; Matallana, Luis; Nammi, Krishnakant; Donohoe, Rachael T; Whitbread, Mark; Silverman, Robert A; Prezant, David J

    2013-08-27

    Ventricular fibrillation (VF) waveform properties have been shown to predict defibrillation success and outcomes among patients treated with immediate defibrillation. We postulated that a waveform analysis algorithm could be used to identify VF unlikely to respond to immediate defibrillation, allowing selective initial treatment with cardiopulmonary resuscitation in an effort to improve overall survival. In a multicenter, double-blind, randomized study, out-of-hospital cardiac arrest patients in 2 urban emergency medical services systems were treated with automated external defibrillators using either a VF waveform analysis algorithm or the standard shock-first protocol. The VF waveform analysis used a predefined threshold value below which return of spontaneous circulation (ROSC) was unlikely with immediate defibrillation, allowing selective treatment with a 2-minute interval of cardiopulmonary resuscitation before initial defibrillation. The primary end point was survival to hospital discharge. Secondary end points included ROSC, sustained ROSC, and survival to hospital admission. Of 6738 patients enrolled, 987 patients with VF of primary cardiac origin were included in the primary analysis. No immediate or long-term survival benefit was noted for either treatment algorithm (ROSC, 42.5% versus 41.2%, P=0.70; sustained ROSC, 32.4% versus 33.4%, P=0.79; survival to admission, 34.1% versus 36.4%, P=0.46; survival to hospital discharge, 15.6% versus 17.2%, P=0.55, respectively). Use of a waveform analysis algorithm to guide the initial treatment of out-of-hospital cardiac arrest patients presenting in VF did not improve overall survival compared with a standard shock-first protocol. Further study is recommended to examine the role of waveform analysis for the guided management of VF.

  17. Extended target recognition in cognitive radar networks.

    PubMed

    Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin

    2010-01-01

    We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.

  18. Estimation of Dynamic Friction Process of the Akatani Landslide Based on the Waveform Inversion and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.

    2014-12-01

    Understanding physical parameters such as frictional coefficients, velocity changes, and dynamic history is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports. However, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, which is one of the best recorded large slope failures. Based on the previous results of waveform inversions and precise topographic surveys done before and after the event, we applied numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on a 3D topography based on a depth-averaged thin-layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., the friction is independent of the sliding velocity. We varied the friction coefficient in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. The figure shows the force history of the east-west component after band-pass filtering between 10-100 seconds. The force history of the simulation with frictional coefficient 0.27 (thin red line) agrees best with the result of the seismic waveform inversion (thick gray line). Although the amplitudes differ slightly, the phases are coherent for the main three pulses. This is evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during sliding was estimated to be 0.38 from the seismic waveform inversion performed in a previous study and a sliding-block model (Yamada et al., 2013), whereas the frictional coefficient estimated from the numerical simulation was about 0.27. This discrepancy may be due to the digital elevation model, or to other forces included in the model, such as pressure gradients and centrifugal acceleration. However, quantitative interpretation of this difference requires further investigation.
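
    The role of the friction coefficient is easiest to see in the rigid sliding-block analogue, which is far simpler than the depth-averaged SHALTOP model but shows how the inverted single force constrains mu. The numbers below are illustrative placeholders, not values from the study.

    ```python
    import numpy as np

    def coulomb_block_force(mass, slope_deg, mu, g=9.81):
        """Acceleration of a rigid block sliding on a planar slope under a
        constant (velocity-independent) Coulomb friction coefficient `mu`, and
        the slope-parallel equivalent single force it exerts on the Earth
        (the reaction -m*a).
        """
        theta = np.radians(slope_deg)
        a = g * (np.sin(theta) - mu * np.cos(theta))   # > 0 only if mu < tan(theta)
        return a, -mass * a

    # illustrative numbers only: a 1e10 kg mass on a 30 degree slope
    for mu in (0.27, 0.38):
        a, f = coulomb_block_force(1e10, 30.0, mu)
        print(f"mu={mu}: a={a:.2f} m/s^2, single force {f:.2e} N")
    ```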

  19. Geophysical characterization of peatlands using crosshole GPR full-waveform inversion: Case study from a bog in northwestern Germany

    NASA Astrophysics Data System (ADS)

    Schmäck, J.; Klotzsche, A.; Van Der Kruk, J.; Vereecken, H.; Bechtold, M.

    2017-12-01

    The characterization of peatlands is of particular interest, since areas with peat soils represent global hotspots for the exchange of greenhouse gases. Their effect on global warming depends on several parameters, such as mean annual water level and land use. Models of greenhouse gas emissions and carbon accumulation in peatlands can be improved by including small-scale soil properties that, e.g., act as gas traps and periodically release gases to the atmosphere during ebullition events. Ground penetrating radar (GPR) is well suited to characterize, non-invasively or minimally invasively, and improve our understanding of dynamic processes that take place in the critical zone. It uses high-frequency electromagnetic waves to image and characterize the dielectric permittivity and electrical conductivity of the critical zone, which can be related to hydrogeological properties such as porosity, soil water content, salinity and clay content. In the last decade, the full-waveform inversion of crosshole GPR data has proved to be a powerful tool to improve image resolution compared to standard ray-based methods. This approach has been successfully applied to several different aquifers and was able to provide decimeter-scale resolution images including small-scale high-contrast layers that can be related to zones of high porosity, zones of preferential flow or clay lenses. Comparison to independently measured data, e.g. logging data, proved the reliability of the method. Here, for the first time, crosshole GPR full-waveform inversion is used to image three peatland plots with different land use that are part of the "Ahlen-Falkenberger Moor peat bog complex" in northwestern Germany. The full-waveform inversion of the acquired data returned higher resolution images than standard ray-based GPR methods and improves our understanding of subsurface structures. The comparison of the different plots is expected to provide new insights into gas content and gas-trapping structures across different land uses. Additionally, season-related changes of peatland soil properties are investigated. The crosshole GPR full-waveform inversion was successfully applied to several datasets and the results show the utility and credibility of GPR FWI for analyzing peatland properties.

  20. Forward and Inverse Modeling of Near-Field Seismic Waveforms from Underground Nuclear Explosions for Effective Source Functions and Structure Parameters.

    DTIC Science & Technology

    1987-04-05

    [Abstract text is OCR-degraded and largely unrecoverable. Legible fragments include a figure caption, "Figure 2. P and S-wave velocity structure for Pahute Mesa," a remark that the details of the waveforms are quite well modeled both in the inversion and in the forward modeling, and a closing statement that the source parameters determined through waveform inversion for the Pahute Mesa events studied are summarized in the report.]

  1. Waveform inversion in the frequency domain for the simultaneous determination of earthquake source mechanism and moment function

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Inoue, H.

    2008-06-01

    We propose a method of waveform inversion to rapidly and routinely estimate both the moment function and the centroid moment tensor (CMT) of an earthquake. In this method, waveform inversion is carried out in the frequency domain to obtain the moment function more rapidly than when solved in the time domain. We assume a pure double-couple source mechanism in order to stabilize the solution when using data from a small number of seismic stations. The fault and slip orientations are estimated by a grid search with respect to the strike, dip and rake angles. The moment function in the time domain is obtained from the inverse Fourier transform of the frequency components determined by the inversion. Since the observed waveforms used for the inversion are limited to a particular frequency band, the estimated moment function is a band-passed form. We develop a practical approach to estimate the deconvolved form of the moment function, from which we can reconstruct the detailed rupture history and the seismic moment. The source location is determined by a spatial grid search using adaptive grid spacings, which are gradually decreased in each step of the search. We apply this method to two events in Indonesia using data from the broad-band seismic network JISNET: one northeast of Sulawesi (Mw = 7.5) on 2007 January 21, and the other south of Java (Mw = 7.5) on 2006 July 17. The source centroid locations and mechanisms we estimated for both events are consistent with those determined by the Global CMT Project and the National Earthquake Information Center of the U.S. Geological Survey. The estimated rupture duration of the Sulawesi event is 16 s, which is comparable to a typical duration for earthquakes of this magnitude, while that of the Java event is anomalously long (176 s), suggesting that this event was a tsunami earthquake. Our application demonstrates that this inversion method has great potential for rapid and routine estimation of both the CMT and the moment function, and may be useful for the identification of tsunami earthquakes.
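
    A minimal sketch of the frequency-domain step is given below: at each frequency in the analysis band, the source spectrum is obtained by linear least squares from the observed spectra and the Green's function spectra for a fixed mechanism, and the band-passed moment function follows from an inverse FFT. Array shapes and the precomputed Green's functions are assumptions.

    ```python
    import numpy as np

    def invert_moment_function(obs, greens, dt, band):
        """Frequency-domain inversion for a band-limited moment function.

        obs    : (n_sta, n_t) observed waveforms.
        greens : (n_sta, n_t) Green's functions for a fixed focal mechanism and
                 a delta-function moment rate (assumed precomputed).
        band   : (f_min, f_max) analysis band in Hz.
        Returns the band-passed moment function sampled at dt.
        """
        n_t = obs.shape[1]
        freqs = np.fft.rfftfreq(n_t, dt)
        D = np.fft.rfft(obs, axis=1)
        G = np.fft.rfft(greens, axis=1)
        M = np.zeros(freqs.size, dtype=complex)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        for k in np.where(in_band)[0]:
            g = G[:, k]
            # least-squares solution of D[:, k] = g * M[k] at this frequency
            M[k] = np.vdot(g, D[:, k]) / np.vdot(g, g)
        return np.fft.irfft(M, n_t)
    ```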

  2. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method; instead, the quality of a result is often assessed through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks and surveys, the increase in computational power and the increasingly systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced at near-surface targets, in crustal exploration, as well as at regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information on the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategy. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with a sampling of the local solution space made possible by its embarrassingly parallel property. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June, 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2017.
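
    For concreteness, a single ETKF analysis step in the weight-space form of Hunt et al. (2007), which is algebraically equivalent to the Bishop et al. (2001) formulation, can be sketched as below. This generic update is not the authors' coupling with local-optimization FWI; variable names and the diagonal observation-error covariance are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    def etkf_analysis(X_f, Y_f, y_obs, r_diag):
        """One ETKF analysis step (weight-space form).

        X_f    : (n_params, m) forecast ensemble of model vectors.
        Y_f    : (n_obs, m) predicted data for each member.
        y_obs  : (n_obs,) observed data.
        r_diag : (n_obs,) observational error variances (diagonal R).
        Returns the analysis ensemble and a low-rank posterior covariance
        factor A such that P_a ~= A @ A.T / (m - 1).
        """
        m = X_f.shape[1]
        x_mean, y_mean = X_f.mean(axis=1), Y_f.mean(axis=1)
        Xp = X_f - x_mean[:, None]                 # parameter perturbations
        Yp = Y_f - y_mean[:, None]                 # predicted-data perturbations
        C = Yp.T / r_diag                          # (m, n_obs) = Yp^T R^-1
        P_tilde = np.linalg.inv((m - 1) * np.eye(m) + C @ Yp)
        w_mean = P_tilde @ C @ (y_obs - y_mean)    # mean update weights
        W = np.real(sqrtm((m - 1) * P_tilde))      # perturbation weights
        X_a = x_mean[:, None] + Xp @ (W + w_mean[:, None])
        A = Xp @ W                                 # low-rank covariance factor
        return X_a, A
    ```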

  3. Elastic full waveform inversion based on the homogenization method: theoretical framework and 2-D numerical illustrations

    NASA Astrophysics Data System (ADS)

    Capdeville, Yann; Métivier, Ludovic

    2018-05-01

    Seismic imaging is an efficient tool to investigate the Earth interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how the homogenization can help to understand FWI behaviour and help to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.

  4. High resolution aquifer characterization using crosshole GPR full-waveform tomography: Comparison with direct-push and tracer test data

    NASA Astrophysics Data System (ADS)

    Gueting, Nils; Vienken, Thomas; Klotzsche, Anja; van der Kruk, Jan; Vanderborght, Jan; Caers, Jef; Vereecken, Harry; Englert, Andreas

    2017-01-01

    Limited knowledge about the spatial distribution of aquifer properties typically constrains our ability to predict subsurface flow and transport. Here we investigate the value of high-resolution full-waveform inversion of cross-borehole ground penetrating radar (GPR) data for aquifer characterization. By stitching together GPR tomograms from multiple adjacent crosshole planes, we are able to image, with decimeter-scale resolution, the dielectric permittivity and electrical conductivity of an alluvial aquifer along cross sections of 50 m length and 10 m depth. A logistic regression model is employed to predict the spatial distribution of lithological facies on the basis of the GPR results. Vertical profiles of porosity and hydraulic conductivity from direct-push, flowmeter and grain size data suggest that the GPR-predicted facies classification is meaningful with regard to porosity and hydraulic conductivity, even though the distributions of individual facies show some overlap and the absolute hydraulic conductivities from the different methods (direct-push, flowmeter, grain size) differ by up to approximately one order of magnitude. Comparison of the GPR-predicted facies architecture with tracer test data suggests that the plume splitting observed in a tracer experiment was caused by a sand layer of low hydraulic conductivity with a thickness of only a few decimeters. Because this sand layer is identified by GPR full-waveform inversion but not by conventional GPR ray-based inversion, we conclude that the improvement in spatial resolution due to full-waveform inversion is crucial for detecting small-scale aquifer structures that are highly relevant for solute transport.
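
    The facies-prediction step can be illustrated with a generic logistic regression on the two inverted parameters. The features, labels and numbers below are synthetic placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: permittivity and conductivity values at pixels
    # with known facies labels (e.g. from direct-push profiles near the boreholes).
    rng = np.random.default_rng(0)
    perm = np.r_[rng.normal(9, 1, 200), rng.normal(13, 1, 200)]   # relative permittivity
    cond = np.r_[rng.normal(5, 1, 200), rng.normal(12, 2, 200)]   # mS/m
    X = np.column_stack([perm, cond])
    y = np.r_[np.zeros(200, int), np.ones(200, int)]              # 0/1 facies labels (assumed)

    clf = LogisticRegression().fit(X, y)

    # facies probability for every pixel of a stitched GPR tomogram (n_pixels, 2)
    tomogram_pixels = np.column_stack([rng.normal(11, 2, 1000), rng.normal(8, 3, 1000)])
    facies_prob = clf.predict_proba(tomogram_pixels)[:, 1]
    ```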

  5. GP Workbench Manual: Technical Manual, User's Guide, and Software Guide

    USGS Publications Warehouse

    Oden, Charles P.; Moulton, Craig W.

    2006-01-01

    GP Workbench is an open-source general-purpose geophysical data processing software package written primarily for ground penetrating radar (GPR) data. It also includes support for several USGS prototype electromagnetic instruments such as the VETEM and ALLTEM. The two main programs in the package are GP Workbench and GP Wave Utilities. GP Workbench has routines for filtering, gridding, and migrating GPR data, as well as an inversion routine for characterizing UXO (unexploded ordnance) using ALLTEM data. GP Workbench provides two-dimensional (section view) and three-dimensional (plan view or time slice view) processing for GPR data. GP Workbench can produce high-quality graphics for reports when Surfer 8 or higher (Golden Software) is installed. GP Wave Utilities provides a wide range of processing algorithms for single waveforms, such as filtering, correlation, deconvolution, and calculating GPR waveforms. GP Wave Utilities is used primarily for calibrating radar systems and processing individual traces. Both programs also contain research features related to the calibration of GPR systems and calculating subsurface waveforms. The software is written to run on the Windows operating systems. GP Workbench can import GPR data file formats used by major commercial instrument manufacturers including Sensors and Software, GSSI, and Mala. The GP Workbench native file format is SU (Seismic Unix), and consequently, files generated by GP Workbench can be read by Seismic Unix as well as many other data processing packages.

  6. 3D Seismic Experimentation and Advanced Processing/Inversion Development for Investigations of the Shallow Subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levander, Alan Richard; Zelt, Colin A.

    2015-03-17

    The work plan for this project was to develop and apply advanced seismic reflection and wide-angle processing and inversion techniques to high resolution seismic data for the shallow subsurface to seismically characterize the shallow subsurface at hazardous waste sites as an aid to containment and cleanup activities. We proposed to continue work on seismic data that we had already acquired under a previous DoE grant, as well as to acquire additional new datasets for analysis. The project successfully developed and/or implemented the use of 3D reflection seismology algorithms, waveform tomography and finite-frequency tomography using compressional and shear waves for high resolution characterization of the shallow subsurface at two waste sites. These two sites have markedly different near-surface structures, groundwater flow patterns, and hazardous waste problems. This is documented in the list of refereed documents, conference proceedings, and Rice graduate theses, listed below.

  7. Two-dimensional frequency-domain acoustic full-waveform inversion with rugged topography

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Li, Kun; Zhao, Dong-Dong; Huang, Xing-Xing

    2015-09-01

    We studied finite-element-method-based two-dimensional frequency-domain acoustic FWI under rugged topography conditions. An exponential attenuation boundary condition suitable for rugged topography is proposed to solve the cutoff boundary problem while meeting the requirement of using the same subdivision grid in joint multifrequency inversion. The proposed method introduces an attenuation factor; by adjusting it, acoustic waves are sufficiently attenuated in the attenuation layer to minimize the cutoff boundary effect. Based on the law of exponential attenuation, expressions for computing the attenuation factor and the thickness of the attenuation layers are derived for different frequencies. In multifrequency-domain FWI, the conjugate gradient method is used to solve the equations in the Gauss-Newton algorithm and thus minimize the computational cost of handling the Hessian matrix. In addition, the effect of initial model selection and frequency combination on FWI is analyzed. Numerical simulations and FWI calculations are used to verify the efficiency of the proposed method.
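
    The paper's specific attenuation-factor expressions are not reproduced here, but the generic idea of an exponential damping (sponge) profile applied in boundary layers of a gridded model can be sketched as follows; the Gaussian-exponential form and the parameter values are assumptions.

    ```python
    import numpy as np

    def sponge_profile(n_cells, n_layers, alpha):
        """Exponential damping factors for a 1-D line of grid cells.

        Cells in the interior get a factor of 1; within the `n_layers` absorbing
        cells at each edge the factor decays as exp(-(alpha * d)^2), where d is
        the depth into the layer.  Generic Cerjan-style sponge, used here only
        to illustrate the roles of the attenuation factor and layer thickness.
        """
        damp = np.ones(n_cells)
        d = np.arange(1, n_layers + 1)
        taper = np.exp(-(alpha * d) ** 2)
        damp[:n_layers] = taper[::-1]       # left/top edge
        damp[-n_layers:] = taper            # right/bottom edge
        return damp

    # thicker layers or a larger alpha give stronger attenuation at the cutoff boundary
    print(sponge_profile(20, 5, 0.3).round(3))
    ```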

  8. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
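
    A minimal sketch of the misfit and likelihood ingredients is given below: the zero-lag decorrelation D = 1 - CC between an observed and a modelled waveform, and a log-normal log-likelihood for a set of such measurements. The independence assumption in the sketch is a simplification; the paper works with a multivariate distribution whose moments depend on SNR and station geometry.

    ```python
    import numpy as np

    def decorrelation(obs, syn):
        """Misfit D = 1 - CC between an observed and a modelled waveform,
        where CC is the zero-lag normalised cross-correlation coefficient."""
        obs = obs - obs.mean()
        syn = syn - syn.mean()
        cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
        return 1.0 - cc

    def log_likelihood(d_values, mu, sigma):
        """Log-likelihood of decorrelation measurements under an (assumed
        independent) log-normal noise model with parameters mu, sigma."""
        d = np.asarray(d_values)
        return np.sum(-np.log(d * sigma * np.sqrt(2 * np.pi))
                      - (np.log(d) - mu) ** 2 / (2 * sigma ** 2))
    ```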

  9. Robust spike classification based on frequency domain neural waveform features.

    PubMed

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, low false misclassification, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class for them to be considered coming from one unit, and to (2) remove noise snippets if they do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm does not require any assumptions on statistical properties of the noise and proves to be robust under noise contamination.
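
    In the same spirit, a bare-bones frequency-domain classification can be sketched with FFT-magnitude features and k-means; the paper additionally uses a self-organizing map to choose the cluster number, which is omitted here, and the snippet sizes and cluster count are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def classify_spikes(snippets, n_clusters=3, n_features=16):
        """Cluster detected spike snippets using frequency-domain features.

        Each snippet (a short window around a detected spike, e.g. 2-3 ms) is
        described by the magnitudes of its lowest FFT bins and grouped with
        k-means.  Generic sketch only, in the spirit of CFDF.
        """
        snippets = np.asarray(snippets, dtype=float)
        spectra = np.abs(np.fft.rfft(snippets, axis=1))[:, :n_features]
        spectra /= spectra.max(axis=1, keepdims=True)     # amplitude-invariant
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(spectra)

    # e.g. 200 snippets of 64 samples each (synthetic placeholder data)
    rng = np.random.default_rng(3)
    print(classify_spikes(rng.normal(size=(200, 64)))[:10])
    ```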

  10. Deghosting based on the transmission matrix method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong

    2017-12-01

    As seismic exploration and subsequent exploitation advance, marine acquisition with towed streamers has become an important seismic data acquisition method. However, the reflective air-water interface generates surface-related multiples, including ghosts, which can degrade the accuracy and performance of subsequent seismic data processing algorithms. We therefore derive a deghosting method from a new perspective, i.e. using the transmission matrix (T-matrix) method instead of inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and converges absolutely. Initially, the effectiveness of the proposed method is demonstrated using synthetic data obtained from a designed layered model, and its noise-resistant property is also illustrated using noisy synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and on field marine data further demonstrate the validity and flexibility of the proposed method. After deghosting, low-frequency components are recovered reasonably and spurious high-frequency components are attenuated; the recovered low-frequency components will be useful for subsequent full waveform inversion. The proposed deghosting method is currently suitable for two-dimensional towed-streamer cases with accurate constant depth information, and its extension to variable-depth streamers in three-dimensional cases will be studied in the future.

  11. Single pulse analysis of intracranial pressure for a hydrocephalus implant.

    PubMed

    Elixmann, I M; Hansinger, J; Goffin, C; Antes, S; Radermacher, K; Leonhardt, S

    2012-01-01

    The intracranial pressure (ICP) waveform contains important diagnostic information. Changes in ICP are associated with changes of the pulse waveform. This change has been observed explicitly in 13 infusion tests by analyzing 100 Hz ICP data. An algorithm is proposed which automatically extracts the pulse waves and categorizes them into predefined patterns. The developed algorithm correctly classified 88% ± 8% (mean ± SD) of all pulse waves into the predefined patterns. The algorithm has low computational cost and is insensitive to sensor pressure drift because it uses only relationships between specific waveform characteristics. Hence, it could be implemented on the microcontroller of a future electromechanical hydrocephalus shunt system to control the drainage of cerebrospinal fluid (CSF).

  12. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami validates the conclusions drawn from synthetic data and a model tsunami source: the inversion result depends strongly on data noisiness and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
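
    The r-solution itself is a truncated-SVD least-squares estimate, which can be sketched in a few lines; the synthetic system below is only a placeholder to show how the choice of the truncation level r trades noise amplification against resolution.

    ```python
    import numpy as np

    def r_solution(G, d, r):
        """Truncated-SVD least-squares solution ('r-solution') of G m = d.

        Only the r largest singular values are retained, which regularises the
        ill-posed inversion at the cost of resolution; r is chosen from the
        decay of the singular values and the noise level in the data.
        """
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        Ur, sr, Vr = U[:, :r], s[:r], Vt[:r]
        return Vr.T @ ((Ur.T @ d) / sr)

    # small synthetic illustration of the effect of truncation on a noisy system
    rng = np.random.default_rng(4)
    G = rng.normal(size=(100, 40)) @ np.diag(np.geomspace(1.0, 1e-6, 40))
    m_true = rng.normal(size=40)
    d = G @ m_true + 1e-4 * rng.normal(size=100)
    for r in (5, 15, 40):
        err = np.linalg.norm(r_solution(G, d, r) - m_true) / np.linalg.norm(m_true)
        print(r, round(err, 3))
    ```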

  13. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, these methods can be associated with huge computational costs that in practice limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival travel time inversion of crosshole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
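
    A minimal sketch of the surrogate idea follows: train a regressor on (model, travel time) pairs generated once with the accurate solver, then characterise the held-out residuals as a Gaussian modelling error to be propagated into the inversion. All names, sizes and the placeholder "physics" below are assumptions, and the regressor is a generic scikit-learn MLP rather than the network used in the study.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical setup: 'models' are flattened velocity fields drawn from the
    # prior, 'traveltimes' are the corresponding first-arrival times computed
    # once with the expensive full-waveform forward solver (not shown here).
    rng = np.random.default_rng(5)
    models = rng.normal(size=(5000, 100))                # placeholder training models
    traveltimes = models @ rng.normal(size=(100, 30))    # placeholder "physics"

    surrogate = MLPRegressor(hidden_layer_sizes=(200, 200), max_iter=500)
    surrogate.fit(models[:4000], traveltimes[:4000])

    # residuals on held-out models quantify the modelling error introduced by
    # the surrogate, which can then be folded into the inversion noise model
    residuals = traveltimes[4000:] - surrogate.predict(models[4000:])
    err_mean, err_cov = residuals.mean(axis=0), np.cov(residuals, rowvar=False)
    ```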

  14. Application of an iterative least-squares waveform inversion of strong-motion and teleseismic records to the 1978 Tabas, Iran, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Mendoza, C.

    1991-01-01

    An iterative least-squares technique is used to simultaneously invert the strong-motion records and teleseismic P waveforms for the 1978 Tabas, Iran, earthquake to deduce the rupture history. The effects of using different data sets and different parametrizations of the problem (linear versus nonlinear) are considered. A consensus of all the inversion runs indicates a complex, multiple source for the Tabas earthquake, with four main source regions over a fault length of 90 km and an average rupture velocity of 2.5 km/sec. -from Authors

  15. Predicting Electrocardiogram and Arterial Blood Pressure Waveforms with Different Echo State Network Architectures

    DTIC Science & Technology

    2014-11-01

    networks were trained to predict an individual's electrocardiogram (ECG) and arterial blood pressure (ABP) waveform data, which can potentially help ... various ESN architectures for prediction tasks, and establishes the benefits of using ESN architecture designs for predicting ECG and ABP waveforms ... arterial blood pressure (ABP) waveforms immediately prior to the machine-generated alarms. When tested, the algorithm suppressed approximately 59.7

  16. Adaptive thresholding with inverted triangular area for real-time detection of the heart rate from photoplethysmogram traces on a smartphone.

    PubMed

    Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao

    2014-01-01

    Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method.
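
    One plausible, simplified reading of a triangular-area feature is the signed area of the triangle formed by a sample and its neighbours a fixed number of samples away, which is large at peaks and insensitive to slow baseline drift. The sketch below uses that reading together with a simple moving-window threshold; it is not necessarily the authors' exact definition, and the offsets and thresholds are assumptions.

    ```python
    import numpy as np

    def triangular_area_feature(x, k):
        """Signed area of the triangle formed by each sample and its neighbours
        k samples away (positive at peaks); a simplified take on the idea."""
        x = np.asarray(x, dtype=float)
        area = np.zeros_like(x)
        i = np.arange(k, x.size - k)
        # cross product of the two edge vectors, with time in sample units
        area[i] = 0.5 * k * (2 * x[i] - x[i - k] - x[i + k])
        return area

    def detect_peaks(x, k=10, window=150):
        """Adaptive thresholding: keep samples whose feature value is the local
        maximum of the feature and exceeds a moving 90th-percentile level."""
        a = triangular_area_feature(x, k)
        peaks = []
        for i in range(k, len(x) - k):
            lo, hi = max(0, i - window), min(len(x), i + window)
            if a[i] >= np.percentile(a[lo:hi], 90) and a[i] == a[lo:hi].max():
                peaks.append(i)
        return peaks
    ```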

  17. High-resolution near-surface velocity model building using full-waveform inversion—a case study from southwest Sweden

    NASA Astrophysics Data System (ADS)

    Adamczyk, A.; Malinowski, M.; Malehmir, A.

    2014-06-01

    Full-waveform inversion (FWI) is an iterative optimization technique that provides high-resolution models of subsurface properties. Frequency-domain acoustic FWI was applied to seismic data acquired over a known quick-clay landslide scar in southwest Sweden. We inverted data from three 2-D seismic profiles, 261-572 m long, two of them shot with small charges of dynamite and one with a sledgehammer. To the best of our knowledge this is the first published application of FWI to sledgehammer data. Both sources provided data suitable for waveform inversion, the sledgehammer data containing an even wider frequency spectrum. Inversion was performed for frequency groups between 27.5 and 43.1 Hz for the explosive data and 27.5-51.0 Hz for the sledgehammer data. The lowest inverted frequency was limited by the resonance frequency of the standard 28-Hz geophones used in the survey. High-velocity granitic bedrock in the area is undulating and very shallow (15-100 m below the surface), and exhibits a large P-wave velocity contrast with the overlying normally consolidated sediments. In order to mitigate the non-linearity of the inverse problem we designed a multiscale layer-stripping inversion strategy. The obtained P-wave velocity models allowed us to delineate the top of the bedrock and revealed distinct layers within the overlying sediments of clays and coarse-grained materials. The models were verified in an extensive set of validation procedures and used for pre-stack depth migration, which confirmed their robustness.

  18. Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.

    PubMed

    Chang, Joshua; Paydarfar, David

    2014-12-01

    Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
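
    To make the approach concrete, here is a minimal sketch of the idea: a stochastically seeded stimulus waveform is refined by gradient descent on an energy-plus-outcome objective for a FitzHugh-Nagumo neuron. Note that the gradient here is a cheap finite-difference approximation rather than the variational gradient of the paper, and the model constants, penalty weight, step sizes and horizon are all assumed.

```python
import numpy as np

def simulate_fhn(stim, dt=0.1, a=0.7, b=0.8, eps=0.08):
    # Euler integration of the FitzHugh-Nagumo model driven by stimulus `stim`
    v, w = -1.2, -0.6
    v_trace = np.empty(len(stim))
    for k, current in enumerate(stim):
        dv = v - v ** 3 / 3.0 - w + current
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        v_trace[k] = v
    return v_trace

def objective(stim, dt=0.1, v_target=1.0, penalty=50.0):
    # Stimulus energy plus a penalty if the voltage never crosses v_target (no spike)
    v = simulate_fhn(stim, dt)
    energy = dt * np.sum(stim ** 2)
    return energy + penalty * max(0.0, v_target - v.max())

def optimize(n_steps=150, n_iter=100, lr=0.05, fd_eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    stim = 0.5 * rng.standard_normal(n_steps)       # stochastic seed waveform
    for _ in range(n_iter):
        base = objective(stim)
        grad = np.empty_like(stim)
        for i in range(n_steps):                    # finite-difference gradient
            pert = stim.copy()
            pert[i] += fd_eps
            grad[i] = (objective(pert) - base) / fd_eps
        stim -= lr * grad
    return stim

stim = optimize()
print("final energy + penalty:", objective(stim))
```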

  19. Novel procedure for characterizing nonlinear systems with memory: 2017 update

    NASA Astrophysics Data System (ADS)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2017-05-01

    The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and, (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD) in order to realize practical nonlinear solutions of scientific and engineering interest.
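
    The least-squares kernel fit can be illustrated with the textbook discrete Volterra formulation below (first- and second-order terms with a short memory). This is only the generic formulation, not the NWV reductions that make higher orders tractable; the memory length and the toy nonlinear channel are assumed.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_design_matrix(x, M):
    # Rows: [1, x[n-i] for i<M, x[n-i]*x[n-j] for i<=j<M] for each usable sample n
    n_samples = len(x) - M + 1
    lagged = np.column_stack([x[M - 1 - i : len(x) - i] for i in range(M)])
    quad = np.column_stack([lagged[:, i] * lagged[:, j]
                            for i, j in combinations_with_replacement(range(M), 2)])
    return np.column_stack([np.ones(n_samples), lagged, quad])

# Synthetic nonlinear channel: z = 0.8*x[n] - 0.3*x[n-1] + 0.5*x[n]*x[n-1] + noise
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
z = 0.8 * x[1:] - 0.3 * x[:-1] + 0.5 * x[1:] * x[:-1] + 0.01 * rng.standard_normal(1999)

M = 3
G = volterra_design_matrix(x[1:], M)                 # excitation (transmitted) waveform
h, *_ = np.linalg.lstsq(G, z[M - 1:], rcond=None)    # best-fit kernel coefficients
y = G @ h                                            # modeled response
print("relative misfit:", np.linalg.norm(z[M - 1:] - y) / np.linalg.norm(z[M - 1:]))
```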

  20. Bandlimited computerized improvements in characterization of nonlinear systems with memory

    NASA Astrophysics Data System (ADS)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2016-05-01

    The present article discusses some inroads in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3] over many years of developmental research. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms on the system are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and, (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order in order to combat and reasonably alleviate the curse of dimensionality.

  1. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very quickly. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
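
    The following toy sketch illustrates the workflow (it is not the authors' code): an inexpensive stand-in plays the role of the expensive forward model, a small scikit-learn network is trained on samples drawn from the prior, the surrogate's residual spread is folded into the noise level as a crude model-error term, and a Metropolis sampler then runs entirely on the surrogate. The network size, noise levels and prior ranges are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_forward(m):
    # Stand-in for a full-waveform/traveltime simulation: model vector -> data vector
    return np.array([np.sin(m[0]) + m[1] ** 2, m[0] * m[1], np.cos(m[1])])

rng = np.random.default_rng(0)

# 1) Train the surrogate on forward runs drawn from the prior.
M_train = rng.uniform(-2, 2, size=(2000, 2))
D_train = np.array([expensive_forward(m) for m in M_train])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(M_train, D_train)

# 2) Quantify the surrogate modeling error and fold it into the noise level.
resid = D_train - surrogate.predict(M_train)
sigma_model = resid.std(axis=0)
sigma_obs = 0.05
sigma_total = np.sqrt(sigma_obs ** 2 + sigma_model ** 2)

# 3) Metropolis sampling of p(m|d) using only the fast surrogate.
m_true = np.array([0.7, -1.1])
d_obs = expensive_forward(m_true) + sigma_obs * rng.standard_normal(3)

def log_like(m):
    pred = surrogate.predict(m.reshape(1, -1))[0]
    return -0.5 * np.sum(((d_obs - pred) / sigma_total) ** 2)

m = np.zeros(2)
samples = []
for _ in range(5000):
    prop = m + 0.1 * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_like(prop) - log_like(m):
        m = prop
    samples.append(m.copy())
print("posterior mean:", np.mean(samples[1000:], axis=0), "true:", m_true)
```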

  2. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurements is the task of estimating the locations of micro-seismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source location methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
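
    The core of the source-independent idea can be sketched as follows: convolving each observed trace with a reference synthetic trace, and each synthetic trace with the reference observed trace, cancels the unknown source wavelet, so the misfit below vanishes whenever the modeled earth response is correct even if the assumed wavelet is wrong. The trace generation here is a toy stand-in for wavefield modeling.

```python
import numpy as np

def convolved_misfit(d_obs, d_syn, ref_idx=0):
    # d_obs, d_syn: arrays of shape (n_traces, n_samples); L2 misfit of cross-convolved pairs
    misfit = 0.0
    for i in range(d_obs.shape[0]):
        lhs = np.convolve(d_obs[i], d_syn[ref_idx])   # observed trace * synthetic reference
        rhs = np.convolve(d_syn[i], d_obs[ref_idx])   # synthetic trace * observed reference
        misfit += 0.5 * np.sum((lhs - rhs) ** 2)
    return misfit

# Toy check: identical earth responses but different (unknown) source wavelets
rng = np.random.default_rng(0)
green = rng.standard_normal((5, 100))            # same "earth response" for obs and syn
w_true = np.exp(-np.arange(20) / 3.0)            # unknown true wavelet
w_guess = np.ones(20)                            # wrong wavelet used in modeling
d_obs = np.array([np.convolve(g, w_true) for g in green])
d_syn = np.array([np.convolve(g, w_guess) for g in green])
print("misfit despite wrong wavelet:", convolved_misfit(d_obs, d_syn))  # ~0
```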

  3. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.

  4. Elastic and anelastic structure of the lowermost mantle beneath the Western Pacific using waveform inversion

    NASA Astrophysics Data System (ADS)

    Konishi, K.; Deschamps, F.; Fuji, N.

    2015-12-01

    We investigate the quasi-2D elastic and anelastic structure of the lowermost mantle beneath the Western Pacific by inverting S and ScS waveforms. The transverse-component data were obtained from F-net for 32 deep sources beneath Tonga and Fiji, filtered between 12.5 and 200 s. We observe a regional variation of S and ScS arrival times and amplitude ratios, according to which we divide our region of interest into four sub-regions and perform 1D waveform inversion for S-wave velocity and Qμ simultaneously. We find an S-shaped S-wave velocity structure beneath the whole region, with sub-regional variation in the depths of the S-wave velocity peaks, which can explain the regional differences in travel times. The Qμ structure varies between sub-regions as well, but its physical interpretation has not yet been completed.

  5. HF band filter bank multi-carrier spread spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system that was presented in [1] and [2] to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited for the HF channel because it performs well in channels with frequency-selective fading and interference. This paper describes new algorithms for packet detection, timing recovery and equalization that are suitable for the HF channel. Also, an algorithm for optimizing the peak-to-average power ratio (PAPR) of the FB-MC-SS waveform is presented. Application of this algorithm results in a waveform with low PAPR. Simulation results using a wideband HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.
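
    As a generic illustration of the PAPR metric mentioned above (not the paper's FB-MC-SS optimization), the snippet below measures the peak-to-average power ratio of a multicarrier waveform and performs a simple selected-mapping style search over random phase codes. The number of carriers, the modulation and the number of trial codes are assumed.

```python
import numpy as np

def papr_db(x):
    # Peak-to-average power ratio of a complex baseband waveform, in dB
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
n_carriers = 256
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), n_carriers)  # QPSK

baseline = np.fft.ifft(symbols)
best_papr, best_phases = np.inf, None
for _ in range(100):                      # try 100 random spreading-phase codes
    phases = np.exp(1j * 2 * np.pi * rng.random(n_carriers))
    candidate = np.fft.ifft(symbols * phases)
    p = papr_db(candidate)
    if p < best_papr:
        best_papr, best_phases = p, phases

print(f"baseline PAPR: {papr_db(baseline):.2f} dB, after phase search: {best_papr:.2f} dB")
```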

  6. A Synthetic Study on the Resolution of 2D Elastic Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Cui, C.; Wang, Y.

    2017-12-01

    Gradient-based full waveform inversion is an effective method in seismic studies: it makes full use of the information contained in seismic records and is capable of providing a more accurate model of the interior of the earth at a relatively low computational cost. However, the strong non-linearity of the problem brings about many difficulties in the assessment of its resolution. Synthetic inversions are therefore helpful before an inversion based on real data is made. The checker-board test is a commonly used method, but it is not always reliable due to the significant difference between a checker-board and the true model. Our study aims to provide a basic understanding of the resolution of 2D elastic inversion by examining three main factors that affect the inversion result: 1. the structural characteristics of the model; 2. the level of similarity between the initial model and the true model; 3. the spatial distribution of sources and receivers. We performed about 150 synthetic inversions to demonstrate how each factor contributes to the quality of the result, and compared the inversion results with those achieved by checker-board tests. The study can be a useful reference for assessing the resolution of an inversion in addition to regular checker-board tests, or for determining whether the seismic data of a specific region are sufficient for a successful inversion.

  7. Automatic cardiac cycle determination directly from EEG-fMRI data by multi-scale peak detection method.

    PubMed

    Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy

    2018-03-31

    In simultaneous EEG-fMRI, identification of the period of the cardioballistic artifact (BCG) in EEG is required for artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared to the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high-contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each estimated cycle. The algorithm is shown to achieve high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the need to adjust the threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give a higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The high cycle detection accuracy achieved by our algorithm without using ECG waveforms makes it possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
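
    A much-simplified sketch of the cycle-detection idea follows: estimate the fundamental (heart-rate) frequency of a BCG-like component from its spectrum, band-pass around it, and pick one peak per estimated cycle. The ICA extraction and figure-of-merit computation are omitted, and the band limits, filter order and minimum peak spacing are assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_bcg_peaks(bcg, fs, f_lo=0.7, f_hi=2.5):
    # 1) Fundamental frequency from the power spectrum, restricted to a
    #    plausible heart-rate band (f_lo-f_hi Hz, i.e. roughly 42-150 bpm).
    freqs = np.fft.rfftfreq(len(bcg), 1.0 / fs)
    power = np.abs(np.fft.rfft(bcg - bcg.mean())) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f0 = freqs[band][np.argmax(power[band])]
    # 2) Narrow band-pass around the fundamental to isolate the cyclic component.
    b, a = butter(2, [0.5 * f0 / (fs / 2), 1.5 * f0 / (fs / 2)], btype="band")
    cyclic = filtfilt(b, a, bcg)
    # 3) One peak per cycle, enforcing a minimum spacing of ~80% of a period.
    peaks, _ = find_peaks(cyclic, distance=int(0.8 * fs / f0))
    return peaks, f0

# Toy BCG-like trace at 250 Hz with a 1.1 Hz heartbeat plus noise
fs = 250.0
t = np.arange(0, 30, 1 / fs)
bcg = np.sin(2 * np.pi * 1.1 * t) ** 3 + 0.3 * np.random.default_rng(0).standard_normal(len(t))
peaks, f0 = detect_bcg_peaks(bcg, fs)
print(f"estimated heart rate: {60 * f0:.1f} bpm, {len(peaks)} cycles found")
```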

  8. SGRAPH (SeismoGRAPHer): Seismic waveform analysis and integrated tools in seismology

    NASA Astrophysics Data System (ADS)

    Abdelwahed, Mohamed F.

    2012-03-01

    Although numerous seismological programs are currently available, most of them suffer from the inability to manipulate different data formats and the lack of embedded seismological tools. SeismoGRAPHer, or simply SGRAPH, is a new system for maintaining and analyzing seismic waveform data in a stand-alone, Windows-based application that manipulates a wide range of data formats. SGRAPH was intended to be a tool sufficient for performing basic waveform analysis and solving advanced seismological problems. The graphical user interface (GUI) utilities and the Windows functionalities, such as dialog boxes, menus, and toolbars, simplify the user interaction with the data. SGRAPH supports common data formats, such as SAC, SEED, GSE, ASCII, and Nanometrics Y-format, and provides the ability to solve many seismological problems with built-in inversion tools. Loaded traces are maintained, processed, plotted, and saved as SAC, ASCII, or PS (PostScript) file formats. SGRAPH includes Generalized Ray Theory (GRT), genetic algorithm (GA), least-squares fitting, auto-picking, fast Fourier transforms (FFT), and many additional tools. This program provides rapid estimation of earthquake source parameters, location, attenuation, and focal mechanisms. Advanced waveform modeling techniques are provided for crustal structure and focal mechanism estimation. SGRAPH has been employed in the Egyptian National Seismic Network (ENSN) as a tool assisting with routine work and data analysis. More than 30 users have been using previous versions of SGRAPH in their research for more than 3 years. The main features of this application are ease of use, speed, small disk space requirements, and the absence of third-party developed components. Because of its architectural structure, SGRAPH can be interfaced with newly developed methods or applications in seismology. A complete setup file, including the SGRAPH package with the online user guide, is available.

  9. Optimizing measurement geometry for seismic near-surface full waveform inversion

    NASA Astrophysics Data System (ADS)

    Nuber, André; Manukyan, Edgar; Maurer, Hansruedi

    2017-09-01

    Full waveform inversion (FWI) is an increasingly popular tool for analysing seismic data. Current practice is to record seismic data sets that are suitable for reflection processing, that is, a very dense spatial sampling and a high fold are required. Using tools from optimized experimental design (ED), we demonstrate that such a dense sampling is not necessary for FWI purposes. With a simple noise-free acoustic example, we show that only a few suitably selected source positions are required for computing high-quality images. A second, more extensive study includes elastic FWI with noise-contaminated data and free-surface boundary conditions on a typical near-surface setup, where surface waves play a crucial role. The study reveals that it is sufficient to employ a receiver spacing on the order of the minimum shear wavelength expected. Furthermore, we show that horizontally oriented sources and multicomponent receivers are the preferred option for 2-D elastic FWI, and we found that with a small number of carefully selected source positions, similarly good results can be achieved as if as many sources as receivers had been employed. For the sake of simplicity, we assume in our simulations that the full data information content is available, but data pre-processing and the presence of coloured noise may impose restrictions. Our ED procedure requires an a priori subsurface model as input, but tests indicate that a relatively crude approximation to the true model is adequate. A further prerequisite of our ED algorithm is that a suitable inversion strategy exists that accounts for the non-linearity of the FWI problem. Here, we assume that such a strategy is available. For the sake of simplicity, we consider only 2-D FWI experiments in this study, but our ED algorithm is sufficiently general and flexible that it can be adapted to other configurations, such as crosshole, vertical seismic profiling or 3-D surface setups, also including larger-scale exploration experiments. It also offers interesting possibilities for analysing existing large-scale data sets that are too large to be inverted. With our methodology, it is possible to extract a small (and thus invertible) subset that offers similar information content to the full data set.
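
    One common way to formalize such source selection, sketched below under stated assumptions, is a greedy sequential design that adds the candidate source whose sensitivity rows most increase log det(J^T J). This is a generic D-optimality-style criterion rather than the authors' specific ED algorithm, and the random Jacobians stand in for sensitivities that would be computed from the a priori model.

```python
import numpy as np

def greedy_source_selection(jacobians, n_select, eps=1e-6):
    # jacobians: one (n_data_per_source, n_model) sensitivity matrix per candidate source
    n_model = jacobians[0].shape[1]
    info = eps * np.eye(n_model)                 # accumulated J^T J (regularized)
    selected = []
    for _ in range(n_select):
        best_logdet, best_idx = -np.inf, None
        for idx, J in enumerate(jacobians):
            if idx in selected:
                continue
            _, logdet = np.linalg.slogdet(info + J.T @ J)
            if logdet > best_logdet:
                best_logdet, best_idx = logdet, idx
        selected.append(best_idx)
        info += jacobians[best_idx].T @ jacobians[best_idx]
    return selected

rng = np.random.default_rng(0)
candidates = [rng.standard_normal((50, 20)) for _ in range(40)]   # 40 candidate shots
print("selected source indices:", greedy_source_selection(candidates, n_select=5))
```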

  10. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw ≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw ≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the waveforms of the target event. To deal with such difficult situations, we remove the perturbation caused by earlier disturbing events by subtracting the corresponding synthetics from the data. The CMT parameters for the disturbed event can then be retrieved using the residual seismograms. We also explore the feasibility of obtaining source parameters of smaller earthquakes in the range 6.0 ≤ Mw < 6.5, suggesting that the algorithm can be applied to events of Mw = 6 or larger.

  11. Moment-tensor solutions for the 24 November 1987 Superstition Hills, California, earthquakes

    USGS Publications Warehouse

    Sipkin, S.A.

    1989-01-01

    The teleseismic long-period waveforms recorded by the Global Digital Seismograph Network from the two largest Superstition Hills earthquakes are inverted using an algorithm based on optimal filter theory. These solutions differ slightly from those published in the Preliminary Determination of Epicenters Monthly Listing because a somewhat different, improved data set was used in the inversions and a time-dependent moment-tensor algorithm was used to investigate the complexity of the main shock. The foreshock (origin time 01:54:14.5, mb 5.7, Ms 6.2) had a scalar moment of 2.3 × 10^25 dyne-cm, a depth of 8 km, and a mechanism of strike 217°, dip 79°, rake 4°. The main shock (origin time 13:15:56.4, mb 6.0, Ms 6.6) was a complex event, consisting of at least two subevents, with a combined scalar moment of 1.0 × 10^26 dyne-cm, a depth of 10 km, and a mechanism of strike 303°, dip 89°, rake -180°.

  12. EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice

    NASA Astrophysics Data System (ADS)

    Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.

    2016-12-01

    Very rough, ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total volume. The commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough-surface and volume scattering model was developed to simulate radar returns from the rough sea ice 'layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 under-flight by the NASA IceBridge missions. Results show that not only does the waveform model fit the measured radar waveform very well, but the roughness parameters derived independently from the LiDAR and radar data also agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.

  13. A strategy for the application of frequency domain acoustic waveform tomography to marine Walkaway VSP data

    NASA Astrophysics Data System (ADS)

    Bouzidi, Y.; Takam Takougang, E. M.

    2016-12-01

    Two-dimensional frequency-domain acoustic waveform tomography was applied to walkaway VSP data from an oil field in a shallow-water environment, offshore the United Arab Emirates, to form a high-resolution velocity model of the subsurface around and away from the borehole. Five close parallel walkaway VSP lines were merged to form a 9 km line, with 1344 shots at a 25 m shot interval and 4 m shot depth. Each line was recorded using a typical recording tool with 20 receivers at 15.1 m receiver intervals. The recording tool was deployed in a deviated borehole at different depths for each line (521-2742 m depth). Waveform tomography was performed following a specific inversion strategy to mitigate non-linearity. Three parameters were critical for the success of the inversion: the starting model obtained from traveltime tomography, the preconditioning of the input data, used for amplitude correction and the removal of shear waves and noise, and a judicious selection of the time damping constant τ to suppress late arrivals in the Laplace-Fourier domain. Several values of the time damping constant were tested, and two values, 0.5 s and 0.8 s, which suppress waveforms arriving after 1.2 s and 2 s respectively, were retained. The inversion was performed in two stages, with frequencies ranging from 5 Hz to 40 Hz. The values of the time damping term τ = 0.5 s and τ = 0.8 s were used in sequence for the frequencies 5-25 Hz, and τ = 0.8 s was used for the frequencies 25-40 Hz. A group of five frequencies at 0.5 Hz intervals was used and six iterations were performed. A velocity model that generally correlates well with the sonic log and with velocities estimated from normal-incidence VSP was obtained. The results confirmed the success of the inversion strategy. The velocity model shows zones with anomalously low velocities below 2000 m depth that correlate with known locations of hydrocarbon reservoirs. However, between 500 m and 1200 m depth, the velocity model appears to be slightly underestimated, which can be explained by possible elastic effects and out-of-plane structures not considered during the inversion. This result shows that acoustic waveform tomography can be successfully applied to walkaway VSP data when good preconditioning of the input data and a suitable inversion strategy are used.
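
    The time-damping step can be sketched in a few lines: each trace is multiplied by exp(-t/τ) before the Fourier transform, so late arrivals contribute little to the frequency components passed to the inversion. The τ values follow the text above; the trace itself and the frequency band are toy assumptions.

```python
import numpy as np

def damped_spectrum(trace, dt, tau):
    # Laplace-Fourier (time-damped) spectrum of a single trace
    t = np.arange(len(trace)) * dt
    damped = trace * np.exp(-t / tau)
    freqs = np.fft.rfftfreq(len(trace), dt)
    return freqs, np.fft.rfft(damped)

dt = 0.002
t = np.arange(0, 4, dt)
# Toy trace: an early arrival at 0.8 s and a late arrival at 2.5 s
trace = np.exp(-((t - 0.8) / 0.05) ** 2) + np.exp(-((t - 2.5) / 0.05) ** 2)

for tau in (0.5, 0.8):
    freqs, spec = damped_spectrum(trace, dt, tau)
    band = (freqs >= 5) & (freqs <= 25)          # frequencies used in the first stage
    print(f"tau={tau}: mean |spectrum| in 5-25 Hz band = {np.abs(spec[band]).mean():.3e}")
```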

  14. Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity

    NASA Astrophysics Data System (ADS)

    Jian, P.; Hung, S.; Tseng, T.

    2013-12-01

    Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion mostly relies on the azimuthal coverage of stations, data quality and previously known earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distribution, noise-contaminated waveform records and uncertain earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and demonstrated that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimate of the resulting source parameters, but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has been steadily devoted to building up a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes that occurred in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. A linearized inversion scheme, adopting efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors from biased centroid positions, is implemented and inspected for improving source parameter determination of regional seismicity in Taiwan. Synthetic inversion tests demonstrate that the resolved moment tensors better match the hypothetical CMT solutions, tend to suppress spurious non-double-couple components, and reduce the trade-off between focal mechanism and centroid depth if individual signal-to-noise ratios and correlation lengths for three-component seismograms at each station and mislocation uncertainties are properly taken into account. We further test the capability of our scheme in retrieving robust CMT information for mid-sized (Mw~3.5) and offshore earthquakes in Taiwan, which offers immediate and broad applications in detailed modelling of the regional stress field and deformation pattern and in mapping subsurface velocity structures.

  15. Surface Wave Mode Conversion due to Lateral Heterogeneity and its Impact on Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Datta, A.; Priestley, K. F.; Chapman, C. H.; Roecker, S. W.

    2016-12-01

    Surface wave tomography based on great circle ray theory has certain limitations which become increasingly significant with increasing frequency. One such limitation is the assumption of different surface wave modes propagating independently from source to receiver, valid only in the case of smoothly varying media. In the real Earth, strong lateral gradients can cause significant interconversion among modes, thus potentially wreaking havoc with ray-theory-based tomographic inversions that make use of multimode information. The issue of mode coupling (with either normal modes or surface wave modes) for accurate modelling and inversion of body wave data has received significant attention in the seismological literature, but its impact on inversion of surface waveforms themselves remains much less understood. We present an empirical study with synthetic data to investigate this problem with a two-fold approach. In the first part, 2D forward modelling, using a new finite-difference method that allows modelling a single mode at a time, is used to build a general picture of energy transfer among modes as a function of the size, strength and sharpness of lateral heterogeneities. In the second part, we use the example of a multimode waveform inversion technique based on the Cara and Leveque (1987) approach of secondary observables to invert our synthetic data and assess how mode conversion can affect the process of imaging the Earth. We pay special attention to ensuring that any biases or artefacts in the resulting inversions can be unambiguously attributed to mode conversion effects. This study helps pave the way towards the next generation of (non-numerical) surface wave tomography techniques geared to exploit higher frequencies and mode numbers than are typically used today.

  16. Crustal seismic structure beneath the southwest Yunnan region from joint inversion of body-wave and surface wave data

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Thurber, C. H.; Zeng, X.; Zhang, L.

    2016-12-01

    Data from 71 broadband stations of a dense transportable array deployed in southwest Yunnan make it possible to improve the resolution of the seismic model in this region. Continuous waveforms from 12 permanent stations of the China National Seismic Network were also used in this study. We utilized one year of continuous vertical-component records to compute ambient noise cross-correlation functions (NCFs). More than 3,000 NCFs were obtained and used to measure group velocities between 5 and 25 seconds with the frequency-time analysis method. This frequency band is most sensitive to crustal seismic structure, especially the upper and middle crust. The group velocities at short periods show clear azimuthal anisotropy with a north-south fast direction. The fast direction is consistent with previous seismic results revealed from shear wave splitting. More than 2,000 group velocity measurements were employed to invert the surface wave dispersion data for group velocity maps. We applied a finite-difference forward modeling algorithm with an iterative inversion. A new body-wave and surface-wave joint inversion algorithm (Fang et al., 2016) was utilized to improve the resolution of both the P and S models. About 60,000 P-wave and S-wave arrivals from 1,780 local earthquakes, which occurred from May 2011 to December 2013 with magnitudes larger than 2.0, were manually picked. The new high-resolution seismic structure shows good consistency with local geological features, e.g. the Tengchong Volcano. The earthquake locations were also refined with our new velocity model.

  17. Effects of Forest Disturbances on Forest Structural Parameters Retrieval from Lidar Waveform Data

    NASA Technical Reports Server (NTRS)

    Ranson, K. Jon; Sun, G.

    2011-01-01

    The effect of forest disturbance on the lidar waveform and the forest biomass estimation was demonstrated by model simulation. The results show that the correlation between stand biomass and the lidar waveform indices changes when the stand spatial structure changes due to disturbances rather than the natural succession. This has to be considered in developing algorithms for regional or global mapping of biomass from lidar waveform data.

  18. SeisFlows-Flexible waveform inversion software

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.; Borisov, Dmitry; Lefebvre, Matthieu; Tromp, Jeroen

    2018-06-01

    SeisFlows is an open source Python package that provides a customizable waveform inversion workflow and framework for research in oil and gas exploration, earthquake tomography, medical imaging, and other areas. New methods can be rapidly prototyped in SeisFlows by inheriting from default inversion or migration classes, and code can be tested on 2D examples before application to more expensive 3D problems. Wave simulations must be performed using an external software package such as SPECFEM3D. The ability to interface with external solvers lends flexibility, and the choice of SPECFEM3D as a default option provides optional GPU acceleration and other useful capabilities. Through support for massively parallel solvers and interfaces for high-performance computing (HPC) systems, inversions with thousands of seismic traces and billions of model parameters can be performed. So far, SeisFlows has run on clusters managed by the Department of Defense, Chevron Corp., Total S.A., Princeton University, and the University of Alaska, Fairbanks.

  19. Identification of complex stiffness tensor from waveform reconstruction

    NASA Astrophysics Data System (ADS)

    Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.

    2002-03-01

    An inverse method is proposed in order to determine the viscoelastic properties of composite-material plates from the plane-wave transmitted acoustic field. Analytical formulations of both the plate transmission coefficient and its first and second derivatives are established, and included in a two-step inversion scheme. Two objective functions to be minimized are then designed by considering the well-known maximum-likelihood principle and by using an analytic signal formulation. Through these innovative objective functions, the robustness of the inversion process against high levels of noise in the waveforms is improved and the method can be applied to a very thin specimen. The suitability of the inversion process for viscoelastic property identification is demonstrated using simulated data for composite materials with different degrees of anisotropy and damping. A study of the effect of the rheologic model choice on the elastic property identification emphasizes the relevance of using a phenomenological description considering viscosity. Experimental characterizations then show the good reliability of the proposed approach. Difficulties arise experimentally for particular anisotropic media.

  20. Real-time monitoring and massive inversion of source parameters of very long period seismic signals: An application to Stromboli Volcano, Italy

    USGS Publications Warehouse

    Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.

    2006-01-01

    We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.

  1. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computational cost and the generation of solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and slow convergence due to the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results, and could also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI by a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of the entire set of shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. A stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
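
    A schematic of the mini-batch loop is given below: each iteration draws a random subset of shots, stacks their gradients, filters the stack and updates the model. The per-shot gradient is a random-field stand-in for a true adjoint-state gradient, and the anisotropic Gaussian filter is only a crude placeholder for genuine structure-oriented smoothing along local dips; the batch size, step length and model dimensions are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_shot_gradient(model, shot_id, rng):
    # Stand-in for an adjoint-state gradient of one shot (random field here)
    return rng.standard_normal(model.shape) / (1 + shot_id)

nz, nx, n_shots = 100, 300, 60
model = 2000.0 * np.ones((nz, nx))               # starting velocity model (m/s)
rng = np.random.default_rng(0)

batch_size, step = 8, 5.0
for it in range(50):
    batch = rng.choice(n_shots, size=batch_size, replace=False)   # mini-batch of shots
    grad = sum(toy_shot_gradient(model, s, rng) for s in batch) / batch_size
    # Placeholder for structure-oriented smoothing: smooth more along x than z
    grad = gaussian_filter(grad, sigma=(1.0, 4.0))
    model -= step * grad
print("final model range:", model.min(), model.max())
```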

  2. Investigation of Seismic Events associated with the Sinkhole at Napoleonville Salt Dome, Louisiana

    NASA Astrophysics Data System (ADS)

    Nayak, A.; Dreger, D. S.

    2015-12-01

    This study describes ongoing efforts in the analysis of the intense sequence of complex seismic events associated with the formation of a large sinkhole at Napoleonville Salt Dome, Assumption Parish, Louisiana in August 2012. Point source centroid seismic moment tensor (MT) inversion of these events using data from a temporary network of broadband stations established by the United States Geological Survey had previously revealed large volume-increase components. We investigate the effect of the 3D velocity structure of the salt dome on wave propagation in the frequency range of interest (0.1-0.3 Hz) by forward modeling synthetic waveforms using MT solutions that were computed with Green's functions assuming two separate 1D velocity models, one for stations over the salt dome and one for stations on the sedimentary strata surrounding the salt dome. We also use a matched-filter technique, using the waveforms of the larger events as templates, to detect smaller events that went undetected by the automated grid-search-based scanning and MT inversion algorithm. We also analyze the change in spectral content of the events, many of which exhibit a spectral peak at 0.4 Hz with a duration of > 60 seconds. The decrease in spectral amplitudes with distance also gives an estimate of high anelastic attenuation that damps reverberations within the shallow low velocity layers. Finally, we use noise cross-correlation analysis to explore changes in the Green's functions during the development of the sinkhole and verify the sediment velocity model by comparing observed and synthetic surface wave dispersion.

  3. Multi-parameter Full-waveform Inversion for Acoustic VTI Medium with Surface Seismic Data

    NASA Astrophysics Data System (ADS)

    Cheng, X.; Jiao, K.; Sun, D.; Huang, W.; Vigh, D.

    2013-12-01

    Full-waveform inversion (FWI) has recently attracted wide attention in the oil and gas industry as a new promising tool for high-resolution subsurface velocity model building. While the traditional common-image-point gather based tomography method aims to focus post-migrated data in the depth domain, FWI aims to directly fit the observed seismic waveform in either the time or frequency domain. The inversion is performed iteratively by updating the velocity fields to reduce the difference between the observed and the simulated data. It has been shown that the inversion is very sensitive to the starting velocity fields, and data with long offsets and low frequencies are crucial for the success of FWI in overcoming this sensitivity. Considering the importance of data with long offsets and low frequencies, in most geologic environments anisotropy is an unavoidable topic for FWI, especially at long offsets, since anisotropy tends to have more pronounced effects on waves that have traveled a great distance. In a VTI medium, this means more horizontal velocity will be registered in middle-to-long offset data, while more vertical velocity will be registered in near-to-middle offset data. To date, most real-world applications of FWI still remain in isotropic media, and only a few studies have been shown to account for anisotropy. Most of those studies only account for anisotropy in the waveform simulation, but do not invert for the anisotropy fields. Multi-parameter inversion for anisotropy fields, even in a VTI medium, remains a hot topic in the field. In this study, we develop a strategy for multi-parameter FWI for acoustic VTI media with surface seismic data. Because surface seismic data are insensitive to the delta field, we decide to hold the delta field unchanged during our inversion and invert only for the vertical velocity and epsilon fields. Through parameterization analysis and synthetic tests, we find that it is more feasible to parameterize the inversion in terms of vertical and horizontal velocities instead of vertical velocity and epsilon. We develop a hierarchical approach that inverts for vertical velocity first while holding epsilon unchanged, and only switches to simultaneous inversion when the vertical velocity inversion is approaching convergence. During simultaneous inversion, we observe significant acceleration in the convergence when second-order information and preconditioning are incorporated into the inversion. We demonstrate the success of our strategy for VTI FWI using synthetic and real data examples from the Gulf of Mexico. Our results show that incorporation of VTI FWI improves migration of large-offset acquisition data, and produces better-focused migration images to be used in the exploration, production and development of oil fields.

  4. Angular velocity of gravitational radiation from precessing binaries and the corotating frame

    NASA Astrophysics Data System (ADS)

    Boyle, Michael

    2013-05-01

    This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.

  5. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and the regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour after the origin time.
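
    The regularization choice described above can be sketched as a grid search with the discrepancy principle: solve the Tikhonov normal equations for each trial lambda and keep the solution whose residual norm best matches the expected noise level. The Green's function matrix, the noise level and the lambda grid below are synthetic assumptions.

```python
import numpy as np

def tikhonov_discrepancy(G, d, noise_std, lambdas):
    # Solve (G^T G + lambda^2 I) m = G^T d on a grid of lambdas and return the
    # solution whose residual norm is closest to the expected noise norm.
    target = noise_std * np.sqrt(len(d))
    best = None
    for lam in lambdas:
        m = np.linalg.solve(G.T @ G + lam ** 2 * np.eye(G.shape[1]), G.T @ d)
        gap = abs(np.linalg.norm(d - G @ m) - target)
        if best is None or gap < best[0]:
            best = (gap, lam, m)
    return best[1], best[2]

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 50))              # toy Green's function matrix
m_true = np.zeros(50)
m_true[20:30] = 1.0                             # simple slip patch
noise_std = 0.5
d = G @ m_true + noise_std * rng.standard_normal(200)

lam, m_est = tikhonov_discrepancy(G, d, noise_std, np.logspace(-2, 2, 40))
print("chosen lambda:", lam, " model misfit:", np.linalg.norm(m_est - m_true))
```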

  6. Arbitrary waveform modulated pulse EPR at 200 GHz

    NASA Astrophysics Data System (ADS)

    Kaminker, Ilia; Barnes, Ryan; Han, Songi

    2017-06-01

    We report here on the implementation of arbitrary waveform generation (AWG) capabilities at ∼200 GHz into an Electron Paramagnetic Resonance (EPR) and Dynamic Nuclear Polarization (DNP) instrument platform operating at 7 T. This is achieved by integrating a 1 GHz, 2-channel digital-to-analog converter (DAC) board, which enables the generation of coherent arbitrary waveforms at Ku-band frequencies with 1 ns resolution, into the existing architecture of a solid-state amplifier multiplier chain (AMC). This allows for the generation of arbitrary phase- and amplitude-modulated waveforms at 200 GHz with >150 mW power. We find that the non-linearity of the AMC poses significant difficulties in generating amplitude-modulated pulses at 200 GHz. We demonstrate that in the power-limited regime of ω1 < 1 MHz, phase-modulated pulses were sufficient to achieve significant improvements in broadband (>10 MHz) spin manipulation in incoherent (inversion), as well as coherent (echo formation) experiments. Highlights include an improvement of one order of magnitude in inversion bandwidth compared to that of conventional rectangular pulses, as well as a factor of two improvement in the refocused echo intensity at 200 GHz.

  7. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.

  8. Processing and evaluation of riverine waveforms acquired by an experimental bathymetric LiDAR

    NASA Astrophysics Data System (ADS)

    Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.

    2010-12-01

    Accurate mapping of fluvial environments with airborne bathymetric LiDAR is challenged not only by environmental characteristics but also by the development and application of software routines to post-process the recorded laser waveforms. During a bathymetric LiDAR survey, the transmission of the green-wavelength laser pulses through the water column is influenced by a number of factors including turbidity, the presence of organic material, and the reflectivity of the streambed. For backscattered laser pulses returned from the river bottom and digitized by the LiDAR detector, post-processing software is needed to interpret and identify distinct inflections in the reflected waveform. Relevant features of this energy signal include the air-water interface, volume reflection from the water column itself, and, ideally, a strong return from the bottom. We discuss our efforts to acquire, analyze, and interpret riverine surveys using the USGS Experimental Advanced Airborne Research LiDAR (EAARL) in a variety of fluvial environments. Initial processing of data collected in the Trinity River, California, using the EAARL Airborne Lidar Processing Software (ALPS) highlighted the difficulty of retrieving a distinct bottom signal in deep pools. Examination of laser waveforms from these pools indicated that weak bottom reflections were often neglected by a trailing edge algorithm used by ALPS to process shallow riverine waveforms. For the Trinity waveforms, this algorithm had a tendency to identify earlier inflections as the bottom, resulting in a shallow bias. Similarly, an EAARL survey along the upper Colorado River, Colorado, also revealed the inadequacy of the trailing edge algorithm for detecting weak bottom reflections. We developed an alternative waveform processing routine by exporting digitized laser waveforms from ALPS, computing the local extrema, and fitting Gaussian curves to the convolved backscatter. Our field data indicate that these techniques improved the definition of pool areas dominated by weak bottom reflections. These processing techniques are also being tested for EAARL surveys collected along the Platte and Klamath Rivers where environmental conditions have resulted in suppressed or convolved bottom reflections.
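
    In the spirit of the Gaussian-fitting step described above, the sketch below fits a two-Gaussian model (surface plus weak bottom return) to a synthetic digitized waveform with scipy's curve_fit. Real EAARL waveforms would additionally require water-column, noise and saturation handling, and all amplitudes, timings and initial guesses here are assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, s1, a2, t2, s2):
    # Sum of two Gaussian pulses: surface return plus bottom return
    return (a1 * np.exp(-0.5 * ((t - t1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - t2) / s2) ** 2))

# Synthetic waveform: strong surface return at 20 ns, weak bottom return at 55 ns
t = np.arange(0, 100, 1.0)                     # 1 ns sampling
rng = np.random.default_rng(0)
wf = two_gaussians(t, 100, 20, 3, 8, 55, 4) + rng.normal(0, 1.0, t.size)

# Initial guesses: the strongest sample for the surface, a rough guess for the bottom
p0 = [wf.max(), t[np.argmax(wf)], 3.0, 5.0, 50.0, 4.0]
popt, _ = curve_fit(two_gaussians, t, wf, p0=p0)
surface_time, bottom_time = popt[1], popt[4]
print(f"surface at {surface_time:.1f} ns, bottom at {bottom_time:.1f} ns, "
      f"two-way water travel time {bottom_time - surface_time:.1f} ns")
```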

  9. Expanding the frontiers of waveform imaging with Salvus

    NASA Astrophysics Data System (ADS)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; Fichtner, A.

    2017-12-01

    Mechanical waves are natural harbingers of information. From medical ultrasound to the normal modes of the Sun, wave motion is often our best window into the character of some underlying continuum. For over a century, geophysicists have been using this window to peer deep into the Earth, developing techniques that have gone on to underlie much of the world's energy economy. As computers and numerical techniques have become more powerful over the last several decades, seismologists have begun to scale back classical simplifying approximations of wave propagation physics. As a result, we are now approaching the ideal of `full-waveform inversion': maximizing the aperture of our window by taking the full complexity of wave motion into account. Salvus is a modern high-performance software suite which aims to bring recent developments in geophysical waveform inversion to new and exciting domains. In this short presentation we will look at the connections between these applications, with examples from non-destructive testing, medical imaging, seismic exploration, and (extra-) planetary seismology.

  10. A Sensitivity Analysis of Tsunami Inversions on the Number of Stations

    NASA Astrophysics Data System (ADS)

    An, Chao; Liu, Philip L.-F.; Meng, Lingsen

    2018-05-01

    Current finite-fault inversions of tsunami recordings generally adopt as many tsunami stations as possible to better constrain earthquake source parameters. In this study, inversions are evaluated by the waveform residual that measures the difference between model predictions and recordings, and the dependence of the quality of inversions on the number of tsunami stations is derived. Results for the 2011 Tohoku event show that, if the tsunami stations are optimally located, the waveform residual decreases significantly with the number of stations when the number is 1-4 and remains almost constant when the number is larger than 4, indicating that 2-4 stations are able to recover the main characteristics of the earthquake source. The optimal location of tsunami stations is explained in the text. Similar analysis is applied to the Manila Trench in the South China Sea using artificially generated earthquakes and virtual tsunami stations. Results confirm that 2-4 stations are necessary and sufficient to constrain the earthquake source parameters, and the optimal sites of stations are recommended in the text. The conclusion is useful for the design of new tsunami warning systems. Current strategies of tsunameter network design mainly focus on the early detection of tsunami waves from potential sources to coastal regions. We therefore recommend that, in addition to the current strategies, the waveform residual could also be taken into consideration so as to minimize the error of tsunami wave prediction for warning purposes.

  11. Acoustic Full Waveform Inversion to Characterize Near-surface Chemical Explosions

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rodgers, A. J.

    2015-12-01

    Recent high-quality, atmospheric overpressure data from chemical high-explosive experiments provide a unique opportunity to characterize near-surface explosions, specifically estimating yield and source time function. Typically, yield is estimated from measured signal features, such as peak pressure, impulse, duration and/or arrival time of acoustic signals. However, the application of full waveform inversion to acoustic signals for yield estimation has not been fully explored. In this study, we apply a full waveform inversion method to local overpressure data to extract accurate pressure-time histories of acoustic sources during chemical explosions. A robust and accurate inversion technique for acoustic sources is investigated using numerical Green's functions that take into account atmospheric and topographic propagation effects. The inverted pressure-time history represents the pressure fluctuation at the source region associated with the explosion and thus provides valuable information about acoustic source mechanisms and characteristics in greater detail. We compare acoustic source properties (i.e., peak overpressure, duration, and non-isotropic shape) of a series of explosions having different emplacement conditions and investigate the relationship of the acoustic sources to the yields of explosions. The time histories of acoustic sources may refine our knowledge of sound-generation mechanisms of shallow explosions, and thereby allow for accurate yield estimation based on acoustic measurements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
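
    The core of such a source-time-function inversion can be illustrated with a toy linear problem: the recorded overpressure is the source history convolved with a propagation Green's function, so the history can be recovered by damped least squares on the convolution matrix. The Green's function, damping value, and function names below are placeholders, not the scheme used by the authors.

      import numpy as np
      from scipy.linalg import toeplitz

      def stf_inversion(green, data, damping=1e-2):
          """Recover a source-time function s from data d = G * s (convolution)
          by damped least squares on the Toeplitz convolution matrix."""
          n = len(data)
          col = np.r_[green, np.zeros(n - len(green))]
          G = toeplitz(col, np.zeros(n))           # lower-triangular convolution matrix
          A = G.T @ G + damping * np.eye(n)
          return np.linalg.solve(A, G.T @ data)

      # toy example: boxcar-like overpressure history convolved with a decaying response
      dt = 0.01
      green = np.exp(-np.arange(0, 0.5, dt) / 0.05)    # placeholder propagation response
      true_stf = np.zeros(400); true_stf[50:80] = 1.0
      data = np.convolve(green, true_stf)[:400] + 0.01 * np.random.randn(400)
      est = stf_inversion(green, data)
      print(np.argmax(est), est.max())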

  12. The Variability and Interpretation of Earthquake Source Mechanisms in The Geysers Geothermal Field From a Bayesian Standpoint Based on the Choice of a Noise Model

    NASA Astrophysics Data System (ADS)

    Mustać, Marija; Tkalčić, Hrvoje; Burky, Alexander L.

    2018-01-01

    Moment tensor (MT) inversion studies of events in The Geysers geothermal field have mostly focused on microseismicity and found a large number of earthquakes with significant non-double-couple (non-DC) seismic radiation. Here we concentrate on the largest events in the area in recent years using a hierarchical Bayesian MT inversion. Initially, we show that the non-DC components of the MT can be reliably retrieved using regional waveform data from a small number of stations. Subsequently, we present results for a number of events and show that accounting for noise correlations can lead to retrieval of a lower isotropic (ISO) component and significantly different focal mechanisms. We compute the Bayesian evidence to compare solutions obtained with different assumptions of the noise covariance matrix. Although a diagonal covariance matrix produces a better waveform fit, inversions that incorporate noise correlations via an empirically estimated noise covariance matrix account for interdependencies among data errors and are preferred from a Bayesian point of view. This implies that improper treatment of data noise in waveform inversions can result in fitting the noise and misinterpreting the non-DC components. Finally, one of the analyzed events is characterized as predominantly DC, while the others still have significant non-DC components, probably as a result of crack opening, which is a reasonable hypothesis for The Geysers geothermal field geological setting.
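
    The contrast between a diagonal and an empirically estimated full noise covariance can be demonstrated with a few lines of Python. This is a generic Gaussian-likelihood sketch on synthetic correlated noise, not the hierarchical Bayesian machinery of the study; all variable names and the noise model are illustrative.

      import numpy as np

      def gaussian_log_likelihood(residual, cov):
          """Gaussian log-likelihood of a waveform residual under covariance cov."""
          sign, logdet = np.linalg.slogdet(cov)
          return -0.5 * (residual @ np.linalg.solve(cov, residual)
                         + logdet + len(residual) * np.log(2.0 * np.pi))

      # empirical covariance estimated from pre-event noise windows (rows = windows)
      rng = np.random.default_rng(0)
      noise_windows = np.cumsum(rng.standard_normal((200, 50)), axis=1) * 0.1  # correlated noise
      cov_full = np.cov(noise_windows, rowvar=False) + 1e-6 * np.eye(50)
      cov_diag = np.diag(np.diag(cov_full))

      residual = noise_windows[0]   # a residual that is really just correlated noise
      print(gaussian_log_likelihood(residual, cov_diag),
            gaussian_log_likelihood(residual, cov_full))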

  13. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts, and vegetation, and cannot capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from three-dimensional hypocenter distributions by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine differences between the faults reconstructed by deterministic assignment in K-means and by probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While the Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientations of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain the fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.
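
    A minimal sketch of the clustering idea, using scikit-learn's K-means on synthetic 3-D hypocentres and PCA to test how planar each cluster is (the smallest-variance principal component gives the plane normal). The synthetic fault geometry, noise levels, and thresholds are illustrative only and do not reproduce the authors' modified EM variant.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)

      def synthetic_fault(n, u, v, origin):
          """Scatter n hypocentres around a plane spanned by in-plane vectors u, v."""
          s = rng.uniform(-5, 5, (n, 2))
          return origin + s[:, :1] * u + s[:, 1:] * v + 0.2 * rng.standard_normal((n, 3))

      hypos = np.vstack([
          synthetic_fault(300, np.array([1, 0, 0]), np.array([0, 0.5, -1]), np.zeros(3)),
          synthetic_fault(300, np.array([0, 1, 0]), np.array([0.7, 0, -1]), np.array([8, 0, 0])),
      ])

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hypos)
      for k in range(2):
          pca = PCA(n_components=3).fit(hypos[labels == k])
          planarity = 1.0 - pca.explained_variance_ratio_[2]   # ~1 for planar clusters
          normal = pca.components_[2]                           # plane normal
          print(k, round(planarity, 3), np.round(normal, 2))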

  14. Computational solution of spike overlapping using data-based subtraction algorithms to resolve synchronous sympathetic nerve discharge

    PubMed Central

    Su, Chun-Kuei; Chiang, Chia-Hsun; Lee, Chia-Ming; Fan, Yu-Pei; Ho, Chiu-Ming; Shyu, Liang-Yu

    2013-01-01

    Sympathetic nerves conveying central commands to regulate visceral functions often display activities in synchronous bursts. To understand how individual fibers fire synchronously, we establish “oligofiber recording techniques” to record “several” nerve fiber activities simultaneously, using in vitro splanchnic sympathetic nerve–thoracic spinal cord preparations of neonatal rats as experimental models. While distinct spike potentials were easily recorded from collagenase-dissociated sympathetic fibers, a problem arising from synchronous nerve discharges is a higher incidence of complex waveforms resulting from spike overlapping. Because commercial software does not provide an explicit solution for spike overlapping, a series of custom-made LabVIEW programs incorporating MATLAB scripts was therefore written for spike sorting. Spikes were represented as data points after waveform feature extraction and automatically grouped by k-means clustering, followed by principal component analysis (PCA) to verify their waveform homogeneity. For dissimilar waveforms with excessive Hotelling's T2 distances from the cluster centroids, a unique data-based subtraction algorithm (SA) was used to determine whether they were complex waveforms resulting from superimposing a spike pattern close to the cluster centroid with other signals that could be observed in the original recordings. In comparison with commercial software, higher accuracy was achieved by analyses using our algorithms on synthetic data that contained synchronous spiking and complex waveforms. Moreover, both T2-selected and SA-retrieved spikes were combined as unit activities. Quantitative analyses were performed to evaluate whether unit activities truly originated from single fibers. We conclude that applications of our programs can help to resolve synchronous sympathetic nerve discharges (SND). PMID:24198782
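
    A compact Python sketch of the sorting pipeline, with the steps slightly reordered for brevity (PCA features first, then k-means), flagging waveforms with large Hotelling's T2 distances as overlap candidates; the subtraction step itself is not shown. The synthetic unit shapes, cluster count, and T2 cutoff are illustrative, not the LabVIEW/MATLAB implementation.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA

      def hotelling_t2(features, centroid, cov):
          """Hotelling's T2 distance of each feature vector from a cluster centroid."""
          d = features - centroid
          return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

      def sort_spikes(spikes, n_units=2, t2_cut=9.0):
          """Cluster spike waveforms; flag members far from their centroid
          (candidate overlapping waveforms) for a later subtraction step."""
          feats = PCA(n_components=3).fit_transform(spikes)
          labels = KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(feats)
          overlaps = np.zeros(len(spikes), dtype=bool)
          for k in range(n_units):
              member = feats[labels == k]
              t2 = hotelling_t2(member, member.mean(axis=0), np.cov(member, rowvar=False))
              overlaps[np.flatnonzero(labels == k)[t2 > t2_cut]] = True
          return labels, overlaps

      # synthetic spikes: two unit shapes plus a few overlapping (summed) events
      t = np.linspace(0, 1, 40)
      u1 = np.exp(-((t - 0.3) / 0.05) ** 2); u2 = -np.exp(-((t - 0.5) / 0.08) ** 2)
      spikes = np.vstack([u1 + 0.05 * np.random.randn(100, 40),
                          u2 + 0.05 * np.random.randn(100, 40),
                          (u1 + u2) + 0.05 * np.random.randn(10, 40)])
      labels, overlaps = sort_spikes(spikes)
      print(overlaps.sum(), "candidate overlapping waveforms flagged")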

  15. Validation of the inverse pulse wave transit time series as surrogate of systolic blood pressure in MVAR modeling.

    PubMed

    Giassi, Pedro; Okida, Sergio; Oliveira, Maurício G; Moraes, Raimes

    2013-11-01

    Short-term cardiovascular regulation mediated by the sympathetic and parasympathetic branches of the autonomic nervous system has been investigated by multivariate autoregressive (MVAR) modeling, providing insightful analysis. MVAR models employ, as inputs, heart rate (HR), systolic blood pressure (SBP) and respiratory waveforms. ECG (from which HR series is obtained) and respiratory flow waveform (RFW) can be easily sampled from the patients. Nevertheless, the available methods for acquisition of beat-to-beat SBP measurements during exams hamper the wider use of MVAR models in clinical research. Recent studies show an inverse correlation between pulse wave transit time (PWTT) series and SBP fluctuations. PWTT is the time interval between the ECG R-wave peak and photoplethysmography waveform (PPG) base point within the same cardiac cycle. This study investigates the feasibility of using inverse PWTT (IPWTT) series as an alternative input to SBP for MVAR modeling of the cardiovascular regulation. For that, HR, RFW, and IPWTT series acquired from volunteers during postural changes and autonomic blockade were used as input of MVAR models. Obtained results show that IPWTT series can be used as input of MVAR models, replacing SBP measurements in order to overcome practical difficulties related to the continuous sampling of the SBP during clinical exams.
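
    Fitting an MVAR model to such beat-to-beat series is straightforward with standard tools; the sketch below uses statsmodels' VAR on synthetic HR, IPWTT, and respiration series. The coupling coefficients and signal construction are invented for illustration and are not the authors' protocol or data.

      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(0)
      n = 600

      # synthetic beat-to-beat series standing in for respiration, inverse PWTT and HR
      resp = np.sin(2 * np.pi * 0.25 * np.arange(n) * 0.8) + 0.05 * rng.standard_normal(n)
      ipwtt = 0.5 * resp + 0.1 * rng.standard_normal(n)
      hr = np.empty(n); hr[0] = 0.0
      for i in range(1, n):                          # HR driven by IPWTT (baroreflex-like toy)
          hr[i] = 0.8 * hr[i - 1] - 0.4 * ipwtt[i - 1] + 0.1 * rng.standard_normal()

      data = np.column_stack([hr, ipwtt, resp])
      results = VAR(data).fit(maxlags=8, ic='aic')   # model order chosen by AIC
      print(results.k_ar)
      print(results.coefs[0])                        # lag-1 coefficient matrix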

  16. Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card

    NASA Astrophysics Data System (ADS)

    Jiang, Jinpeng; Zhu, Peimin

    2018-05-01

    Full waveform inversion (FWI) is a challenging procedure due to the high computational cost related to the modeling, especially for the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement a GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of relatively small global memory on the GPU, the boundary saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion improves the convergence of the misfit function. A multiscale inversion strategy is performed in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve >15 times speedup in forward modeling, and about 12 times speedup in gradient calculation, compared with the eight-core CPU implementations optimized by OpenMP. The test results from the GPU implementations are verified to have sufficient accuracy by comparison with the results obtained from the CPU implementations.
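
    The GPU forward modelling is beyond a snippet, but the outer L-BFGS driver pattern can be sketched with scipy on a toy linear misfit. The "multiscale" aspect is only mimicked here by restarting with tighter tolerances; in real FWI it means re-filtering the data to progressively higher frequencies. Operator, model, and options are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      m_true = np.sin(np.linspace(0, 2 * np.pi, 50))       # "true" model
      G = rng.standard_normal((200, 50))                   # stand-in for the forward operator
      d_obs = G @ m_true + 0.01 * rng.standard_normal(200)

      def misfit_and_gradient(m):
          """Least-squares data misfit and its gradient (adjoint of the toy operator)."""
          r = G @ m - d_obs
          return 0.5 * r @ r, G.T @ r

      m = np.zeros(50)
      for scale in range(3):                               # crude multiscale-style restart loop
          res = minimize(misfit_and_gradient, m, jac=True, method='L-BFGS-B',
                         options={'maxiter': 30, 'ftol': 10.0 ** (-(6 + scale))})
          m = res.x
      print(res.fun, np.linalg.norm(m - m_true) / np.linalg.norm(m_true))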

  17. Resolution of VTI anisotropy with elastic full-waveform inversion: theory and basic numerical examples

    NASA Astrophysics Data System (ADS)

    Podgornova, O.; Leaney, S.; Liang, L.

    2018-07-01

    Extracting medium properties from seismic data faces some limitations due to the finite frequency content of the data and the restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little or no impact on the data. If these properties are used as the inversion parameters, then the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and that the non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and the frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret and enhances the results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify the spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.

  18. The Ellipticity Filter-A Proposed Solution to the Mixed Event Problem in Nuclear Seismic Discrimination

    DTIC Science & Technology

    1974-09-07

    ellipticity filter. The source waveforms are recreated by an inverse transform of those complex amplitudes associated with the same azimuth ... terms of the three complex data points and the ellipticity. Having solved the equations for all frequency bins, the inverse transform of ... transform of those complex amplitudes associated with Source 1, yielding the signal a(t). Similarly, take the inverse transform of all

  19. Computing the Sensitivity Kernels for 2.5-D Seismic Waveform Inversion in Heterogeneous, Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-10-01

    2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are available for both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.

  20. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment projecting the BP model onto the intersection of the total variation norm and box constraints demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and invert the complex salt velocity layer by layer.
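
    The primal-dual update pattern can be illustrated on a much smaller problem. The sketch below is the standard Chambolle-Pock PDHG for a penalized 1-D total-variation problem (min 0.5||Ax - b||^2 + lam*||Dx||_1), not the adaptive, TV-ball-projected, box-constrained variant of the paper; step sizes satisfy sigma*tau*||D||^2 <= 1 and all names are illustrative.

      import numpy as np

      def tv_pdhg(A, b, lam=1.0, n_iter=300, tau=0.5, sigma=0.5):
          """Chambolle-Pock PDHG for min_x 0.5*||A x - b||^2 + lam*||D x||_1,
          with D the 1-D forward-difference operator."""
          n = A.shape[1]
          D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # forward-difference operator
          lhs = tau * A.T @ A + np.eye(n)                    # for the data-term proximal step
          x = np.zeros(n); x_bar = x.copy(); y = np.zeros(n - 1)
          for _ in range(n_iter):
              y = np.clip(y + sigma * D @ x_bar, -lam, lam)  # prox of the dual of lam*||.||_1
              x_new = np.linalg.solve(lhs, x - tau * D.T @ y + tau * A.T @ b)
              x_bar = 2 * x_new - x                          # over-relaxation, theta = 1
              x = x_new
          return x

      # toy example: recover a blocky (salt-like) profile from noisy indirect data
      rng = np.random.default_rng(0)
      x_true = np.zeros(80); x_true[20:45] = 1.0; x_true[60:70] = -0.5
      A = rng.standard_normal((120, 80)) / np.sqrt(120)
      b = A @ x_true + 0.02 * rng.standard_normal(120)
      x_rec = tv_pdhg(A, b, lam=0.05)
      print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))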

  1. Estimating Extracellular Spike Waveforms from CA1 Pyramidal Cells with Multichannel Electrodes

    PubMed Central

    Molden, Sturla; Moldestad, Olve; Storm, Johan F.

    2013-01-01

    Extracellular (EC) recordings of action potentials from the intact brain are embedded in background voltage fluctuations known as the “local field potential” (LFP). In order to use EC spike recordings for studying biophysical properties of neurons, the spike waveforms must be separated from the LFP. Linear low-pass and high-pass filters are usually insufficient to separate spike waveforms from the LFP, because they have overlapping frequency bands. Broad-band recordings of LFP and spikes were obtained with a 16-channel laminar electrode array (silicone probe). We developed an algorithm whereby local LFP signals from spike-containing channels were modeled using locally weighted polynomial regression analysis of adjoining channels without spikes. The modeled LFP signal was subtracted from the recording to estimate the embedded spike waveforms. We tested the method both on defined spike waveforms added to LFP recordings and on in vivo-recorded extracellular spikes from hippocampal CA1 pyramidal cells in anaesthetized mice. We show that the algorithm can correctly extract the spike waveforms embedded in the LFP. In contrast, traditional high-pass filters failed to recover correct spike shapes, albeit producing smaller standard errors. We found that high-pass RC or 2-pole Butterworth filters with cut-off frequencies below 12.5 Hz are required to retrieve waveforms comparable to our method. The method was also compared to spike-triggered averages of the broad-band signal, and yielded waveforms with smaller standard errors and less distortion before and after the spike. PMID:24391714

  2. Investigating source processes of isotropic events

    NASA Astrophysics Data System (ADS)

    Chiang, Andrea

    This dissertation demonstrates the utility of the complete waveform regional moment tensor inversion for nuclear event discrimination. I explore the source processes and associated uncertainties for explosions and earthquakes under the effects of limited station coverage, compound seismic sources, assumptions in velocity models and the corresponding Green's functions, and the effects of shallow source depth and free-surface conditions. The motivation to develop better techniques to obtain reliable source mechanisms and assess uncertainties is not limited to nuclear monitoring; such techniques also provide quantitative information about the characteristics of seismic hazards, local and regional tectonics and the in-situ stress field of the region. This dissertation begins with the analysis of three sparsely recorded events: the 14 September 1988 US-Soviet Joint Verification Experiment (JVE) nuclear test at the Semipalatinsk test site in Eastern Kazakhstan, and two nuclear explosions at the Chinese Lop Nor test site. We utilize a regional distance seismic waveform method fitting long-period, complete, three-component waveforms jointly with first-motion observations from regional stations and teleseismic arrays. The combination of long period waveforms and first motion observations provides unique discrimination of these sparsely recorded events in the context of the Hudson et al. (1989) source-type diagram. We examine the effects of the free surface on the moment tensor via synthetic testing, and apply the moment tensor based discrimination method to well-recorded chemical explosions. These shallow chemical explosions represent a rather severe source-station geometry in terms of vanishing traction issues. We show that the combined waveform and first motion method enables the unique discrimination of these events, even though the data include unmodeled single force components resulting from the collapse and blowout of the quarry face immediately following the initial explosion. In contrast, recovering the announced explosive yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique. The estimation of seismic source parameters is dependent upon having a well-calibrated velocity model to compute the Green's functions for the inverse problem. Ideally, seismic velocity models are calibrated through broadband waveform modeling; however, in regions of low seismicity, velocity models derived from body or surface wave tomography may be employed. Whether a velocity model is 1D or 3D, or based on broadband seismic waveform modeling or the various tomographic techniques, the uncertainty in the velocity model can be the greatest source of error in moment tensor inversion. These errors have not been fully investigated for the nuclear discrimination problem. To study the effects of unmodeled structures on the moment tensor inversion, we set up a synthetic experiment where we produce synthetic seismograms for a 3D model (Moschetti et al., 2010) and invert these data using Green's functions computed with a 1D velocity model (Song et al., 1996) to evaluate the recoverability of input solutions, paying particular attention to biases in the isotropic component. The synthetic experiment results indicate that the 1D model assumption is valid for moment tensor inversions at periods as short as 10 seconds for the 1D western U.S. model (Song et al., 1996).
The correct earthquake mechanisms and source depth are recovered with statistically insignificant isotropic components as determined by the F-test. Shallow explosions are biased by the theoretical ISO-CLVD tradeoff but the tectonic release component remains low, and the tradeoff can be eliminated with constraints from P wave first motion. Path-calibration to the 1D model can reduce non-double-couple components in earthquakes, non-isotropic components in explosions and composite sources and improve the fit to the data. When we apply the 3D model to real data, at long periods (20-50 seconds), we see good agreement in the solutions between the 1D and 3D models and slight improvement in waveform fits when using the 3D velocity model Green's functions. (Abstract shortened by ProQuest.).
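
    The linear core shared by such moment tensor inversions can be sketched compactly: the data are modelled as d = G m for the six independent tensor elements, solved by least squares, and the result is decomposed into isotropic and deviatoric parts. The Green's-function matrix below is purely synthetic; the dissertation's actual workflow (real Green's functions, first motions, the NSS bounds) is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)

      # synthetic kernels: data = G @ m for the 6 independent moment tensor elements
      G = rng.standard_normal((500, 6))
      m_true = np.array([1.0, 1.0, 1.0, 0.1, -0.2, 0.05])   # mostly isotropic (explosion-like)
      d_obs = G @ m_true + 0.05 * rng.standard_normal(500)

      # least-squares moment tensor solution
      m_est, *_ = np.linalg.lstsq(G, d_obs, rcond=None)

      # isotropic / deviatoric decomposition of the estimated tensor
      M = np.array([[m_est[0], m_est[3], m_est[4]],
                    [m_est[3], m_est[1], m_est[5]],
                    [m_est[4], m_est[5], m_est[2]]])
      iso = np.trace(M) / 3.0
      M_dev = M - iso * np.eye(3)
      print("isotropic part:", round(iso, 3))
      print("deviatoric eigenvalues:", np.round(np.linalg.eigvalsh(M_dev), 3))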

  3. Convergence acceleration in scattering series and seismic waveform inversion using nonlinear Shanks transformation

    NASA Astrophysics Data System (ADS)

    Eftekhar, Roya; Hu, Hao; Zheng, Yingcai

    2018-06-01

    Iterative solution processes are fundamental in seismic inversion, such as in full-waveform inversion and some inverse scattering methods. However, the convergence could be slow or even divergent depending on the initial model used in the iteration. We propose to apply the Shanks transformation (ST for short) to accelerate the convergence of the iterative solution. ST is a local nonlinear transformation, which transforms a series locally into another series with an improved convergence property. ST works by separating the series into a smooth background trend, called the secular term, and an oscillatory transient term. ST then accelerates the convergence of the secular term. Since the transformation is local, we do not need to know all the terms in the original series, which is very important in the numerical implementation. The ST performance was tested numerically for both the forward Born series and the inverse scattering series (ISS). The ST has been shown to accelerate the convergence in several examples, including three examples of forward modeling using the Born series and two examples of velocity inversion based on a particular type of the ISS. We observe that ST is effective in accelerating the convergence and that it can also achieve convergence even for a weakly divergent scattering series. As such, it provides a useful technique to invert for a large-contrast medium perturbation in seismic inversion.
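
    The Shanks transformation itself is only a few lines. The sketch below applies it to the partial sums of a slowly convergent alternating series as a stand-in for a scattering series; the example series is illustrative, not one of the Born/ISS cases in the paper.

      import numpy as np

      def shanks(seq):
          """One application of the Shanks transformation to a sequence of partial sums:
          S_n = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n)."""
          a = np.asarray(seq, dtype=float)
          num = a[2:] * a[:-2] - a[1:-1] ** 2
          den = a[2:] + a[:-2] - 2.0 * a[1:-1]
          return num / den

      # partial sums of the slowly convergent series ln(2) = 1 - 1/2 + 1/3 - ...
      n = np.arange(1, 12)
      partial = np.cumsum((-1.0) ** (n + 1) / n)
      once = shanks(partial)
      twice = shanks(once)          # repeated application accelerates further
      print(partial[-1], once[-1], twice[-1], np.log(2.0))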

  4. Crustal Structure Beneath Taiwan Using Frequency-band Inversion of Receiver Function Waveforms

    NASA Astrophysics Data System (ADS)

    Tomfohrde, D. A.; Nowack, R. L.

    Receiver function analysis is used to determine local crustal structure beneath Taiwan. We have performed preliminary data processing and polarization analysis for the selection of stations and events and to increase overall data quality. Receiver function analysis is then applied to data from the Taiwan Seismic Network to obtain radial and transverse receiver functions. Due to the limited azimuthal coverage, only the radial receiver functions are analyzed in terms of horizontally layered crustal structure for each station. In order to improve convergence of the receiver function inversion, frequency-band inversion (FBI) is implemented, in which an iterative inversion procedure with sequentially higher low-pass corner frequencies is used to stabilize the waveform inversion. Frequency-band inversion is applied to receiver functions at six stations of the Taiwan Seismic Network. Initial 20-layer crustal models are inverted for, using prior tomographic results as the initial models. The resulting 20-layer models are then simplified to 4- to 5-layer models and input into an alternating depth and velocity frequency-band inversion. For the six stations investigated, the resulting simplified models provide an average estimate of 38 km for the Moho depth surrounding the Central Range of Taiwan. Also, the individual station estimates compare well with the recent tomographic model of Rau and Wu (1995) and the refraction results of Ma and Song (1997).

  5. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We have optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) by using OpenMP so that the code fits the hybrid architecture of the K computer. We could use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, and its performance was 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as an initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use the time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms improve as the iterations proceed. We are now preparing to use much shorter periods in our synthetic waveform computations and to obtain seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  6. Three-Dimensional Anisotropic Acoustic and Elastic Full-Waveform Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Warner, M.; Morgan, J. V.

    2013-12-01

    Three-dimensional full-waveform inversion is a high-resolution, high-fidelity, quantitative, seismic imaging technique that has advanced rapidly within the oil and gas industry. The method involves the iterative improvement of a starting model using a series of local linearized updates to solve the full non-linear inversion problem. During the inversion, forward modeling employs the full two-way three-dimensional heterogeneous anisotropic acoustic or elastic wave equation to predict the observed raw field data, wiggle-for-wiggle, trace-by-trace. The method is computationally demanding; it is highly parallelized, and runs on large multi-core multi-node clusters. Here, we demonstrate what can be achieved by applying this newly practical technique to several high-density 3D seismic datasets that were acquired to image four contrasting sedimentary targets: a gas cloud above an oil reservoir, a radially faulted dome, buried fluvial channels, and collapse structures overlying an evaporite sequence. We show that the resulting anisotropic p-wave velocity models match in situ measurements in deep boreholes, reproduce detailed structure observed independently on high-resolution seismic reflection sections, accurately predict the raw seismic data, simplify and sharpen reverse-time-migrated reflection images of deeper horizons, and flatten Kirchhoff-migrated common-image gathers. We also show that full-elastic 3D full-waveform inversion of pure pressure data can generate a reasonable shear-wave velocity model for one of these datasets. For two of the four datasets, the inclusion of significant transversely isotropic anisotropy with a vertical axis of symmetry was necessary in order to fit the kinematics of the field data properly. For the faulted dome, the full-waveform-inversion p-wave velocity model recovers the detailed structure of every fault that can be seen on coincident seismic reflection data. Some of the individual faults represent high-velocity zones, some represent low-velocity zones, some have more-complex internal structure, and some are visible merely as offsets between two regions with contrasting velocity. Although this has not yet been demonstrated quantitatively for this dataset, it seems likely that at least some of this fine structure in the recovered velocity model is related to the detailed lithology, strain history and fluid properties within the individual faults. We have here applied this technique to seismic data that were acquired by the extractive industries; however, this inversion scheme is immediately scalable and applicable to a much wider range of problems given sufficient quality and density of observed data. Potential targets range from shallow magma chambers beneath active volcanoes, through whole-crustal sections across plate boundaries, to regional and whole-Earth models.

  7. Potency backprojection

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large/mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of projected images and to mitigate the spurious imaging of depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform energy was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth, as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the waveforms at high and low frequencies with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors in the conventional formulations. For the BP method, the observed waveform is normalized with the maximum amplitude of the P-phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function with the squared sum of the GF. The normalized waveforms or the cross-correlation functions are then stacked for all the stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations using synthetic waveforms and the real data of the Mw 8.3 2015 Illapel, Chile, earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of earthquakes.
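
    The stacking step that underlies both variants is easy to prototype. The sketch below implements plain BP stacking in a constant-velocity toy medium: each station waveform is shifted by the theoretical travel time from a candidate grid point and summed; the potency-rate normalization and the GF cross-correlation of the HBP variant are not included, and all geometry is invented.

      import numpy as np

      def backproject(waveforms, dt, stations, grid, velocity):
          """Stack station waveforms at each grid point after shifting by the
          theoretical travel time (constant-velocity toy medium)."""
          n_sta, n_t = waveforms.shape
          image = np.zeros(len(grid))
          for g, xg in enumerate(grid):
              stack = np.zeros(n_t)
              for s, xs in enumerate(stations):
                  shift = int(round(np.linalg.norm(xg - xs) / velocity / dt))
                  stack += np.roll(waveforms[s], -shift)       # align predicted onset at t = 0
              image[g] = np.max(np.abs(stack))                  # beam-power proxy
          return image

      # toy setup: one true source, a ring of stations, a 1-D grid of candidate points
      rng = np.random.default_rng(0)
      velocity, dt, n_t = 6.0, 0.05, 400
      angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
      stations = np.array([[60 * np.cos(a), 60 * np.sin(a)] for a in angles])
      src = np.array([5.0, 0.0])
      t = np.arange(n_t) * dt
      waveforms = np.zeros((12, n_t))
      for s, xs in enumerate(stations):
          arrival = np.linalg.norm(src - xs) / velocity
          waveforms[s] = np.exp(-((t - arrival) / 0.2) ** 2) + 0.05 * rng.standard_normal(n_t)

      grid = np.array([[x, 0.0] for x in np.arange(-20, 21, 1.0)])
      image = backproject(waveforms, dt, stations, grid, velocity)
      print(grid[np.argmax(image)])                             # should land near the true source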

  8. Viscoacoustic anisotropic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Qu, Yingming; Li, Zhenchun; Huang, Jianping; Li, Jinli

    2017-01-01

    A viscoacoustic vertical transverse isotropic (VTI) quasi-differential wave equation, which accounts for both the viscosity and the anisotropy of the medium, is proposed for wavefield simulation in this study. The finite difference method is used to solve the equations, for which the attenuation terms are solved in the wavenumber domain and all remaining terms in the time-space domain. To stabilize the adjoint wavefield, robust regularization operators are applied to the wave equation to eliminate the high-frequency component of the numerical noise produced during the backward propagation of the viscoacoustic wavefield. Based on these strategies, we derive the corresponding gradient formula and implement a viscoacoustic VTI full waveform inversion (FWI). Numerical tests verify that our proposed viscoacoustic VTI FWI can produce accurate and stable inversion results for viscoacoustic VTI data sets. In addition, we test our method's sensitivity to velocity, Q, and the anisotropic parameters. Our results show that the sensitivity to velocity is much higher than that to Q and the anisotropic parameters. As such, our proposed method can produce acceptable inversion results as long as the Q and anisotropic parameters are within predefined thresholds.

  9. Comparison of Retracking Algorithms Using Airborne Radar and Laser Altimeter Measurements of the Greenland Ice Sheet

    NASA Technical Reports Server (NTRS)

    Ferraro, Ellen J.; Swift, Calvin T.

    1995-01-01

    This paper compares four continental ice sheet radar altimeter retracking algorithms using airborne radar and laser altimeter data taken over the Greenland ice sheet in 1991. The refurbished Advanced Application Flight Experiment (AAFE) airborne radar altimeter has a large range window and stores the entire return waveform during flight. Once the return waveforms are retracked, or post-processed to obtain the most accurate altitude measurement possible, they are compared with the high-precision Airborne Oceanographic Lidar (AOL) altimeter measurements. The AAFE waveforms show evidence of varying degrees of both surface and volume scattering from different regions of the Greenland ice sheet. The AOL laser altimeter, however, obtains a return only from the surface of the ice sheet. Retracking altimeter waveforms with a surface scattering model results in a good correlation with the laser measurements in the wet and dry-snow zones, but in the percolation region of the ice sheet, the deviation between the two data sets is large due to the effects of subsurface and volume scattering. The Martin et al. model results in a lower bias than the surface scattering model, but still shows an increase in the noise level in the percolation zone. Using an Offset Center of Gravity algorithm to retrack altimeter waveforms results in measurements that are only slightly affected by subsurface and volume scattering and, despite a higher bias, this algorithm works well in all regions of the ice sheet. A cubic spline provides retracked altitudes that agree with AOL measurements over all regions of Greenland. This method is not sensitive to changes in the scattering mechanisms of the ice sheet and it has the lowest noise level and bias of all the retracking methods presented.
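
    The Offset Center of Gravity retracker mentioned above is commonly defined by a handful of power-weighted sums; the sketch below follows the textbook definitions (amplitude, width, centre of gravity, and a retracking gate at COG minus half the width) on a synthetic waveform, and may differ in detail from the implementation compared in the paper.

      import numpy as np

      def ocog_retrack(power):
          """Offset Center of Gravity retracker: returns (retracking gate, amplitude, width),
          with the gate estimated as COG - width/2 (leading-edge position)."""
          p2 = power.astype(float) ** 2
          gates = np.arange(len(power))
          amplitude = np.sqrt(np.sum(p2 ** 2) / np.sum(p2))   # OCOG amplitude
          width = np.sum(p2) ** 2 / np.sum(p2 ** 2)           # OCOG width (in gates)
          cog = np.sum(gates * p2) / np.sum(p2)               # centre of gravity
          return cog - width / 2.0, amplitude, width

      # synthetic return: noise floor, abrupt leading edge at gate 40, decaying trailing edge
      gates = np.arange(128)
      wf = np.where(gates < 40, 0.02, np.exp(-(gates - 40) / 60.0))
      wf = np.clip(wf + 0.02 * np.random.randn(128), 0, None)
      print(ocog_retrack(wf)[0])    # expected near gate 40 (the leading edge)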

  10. Source process of a long-period event at Kilauea volcano, Hawaii

    USGS Publications Warehouse

    Kumagai, H.; Chouet, B.A.; Dawson, P.B.

    2005-01-01

    We analyse a long-period (LP) event observed by a dense seismic network temporarily operated at Kilauea volcano, Hawaii, in 1996. We systematically perform spectral analyses, waveform inversions and forward modeling of the LP event to quantify its source process. Spectral analyses identify two dominant spectral frequencies at 0.6 and 1.3 Hz with associated Q values in the range 10-20. Results from waveform inversions assuming six moment-tensor and three single-force components point to the resonance of a horizontal crack located at a depth of approximately 150 m near the northeastern rim of the Halemaumau pit crater. Waveform simulations based on a fluid-filled crack model suggest that the observed frequencies and Q values can be explained by a crack filled with a hydrothermal fluid in the form of either bubbly water or steam. The shallow hydrothermal crack located directly above the magma conduit may have been heated by volcanic gases leaking from the conduit. The enhanced flux of heat raised the overall pressure of the hydrothermal fluid in the crack and induced a rapid discharge of fluid from the crack, which triggered the acoustic vibrations of the resonator generating the LP waveform. The present study provides further support to the idea that LP events originate in the resonance of a crack. ?? 2005 RAS.

  11. Joint inversion of marine MT and CSEM data over Gemini prospect, Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Constable, S.; Orange, A. S.; Key, K.

    2013-12-01

    In 2003 we tested a prototype marine controlled-source electromagnetic (CSEM) transmitter over the Gemini salt body in the Gulf of Mexico, collecting one line of data over 15 seafloor receiver instruments using the Cox waveform with a 0.25 Hz fundamental, yielding 3 usable frequencies. Transmission current was 95 amps on a 150 m antenna. We had previously collected 16 sites of marine magnetotelluric (MT) data along this line during the development of broadband marine MT as a tool for mapping salt geometry. Recently we commissioned a finite element code capable of joint CSEM and MT 2D inversion incorporating bathymetry and anisotropy, and this heritage data set provided an opportunity to explore such inversions with real data. We reprocessed the CSEM data to obtain objective error estimates and inverted single frequency CSEM, multi-frequency CSEM, MT, and joint MT and CSEM data sets for a variety of target misfits, using the Occam regularized inversion algorithm. As expected, MT-only inversions produce a smoothed image of the salt and a resistive basement at 9 km depth. The CSEM data image a conductive cap over the salt body and have little sensitivity to the salt or structure at depths beyond about 1500 m below seafloor. However, the joint inversion yields more than the sum of the parts - the outline of the salt body is much sharper and there is much more structural detail even at depths beyond the resolution of the CSEM data. As usual, model complexity greatly depends on target misfit, and even with well-estimated errors the choice of misfit becomes a somewhat subjective decision. Our conclusion is a familiar one; more data are always good.

  12. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably to, and even better than, the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
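
    The simplicity the abstract highlights is visible in the iteration itself. The sketch below is the generic linearized Bregman method for the ℓ1 basis-pursuit problem (min ||x||_1 s.t. Ax = b) on a toy compressive-sensing example; applying it to encoded-supershot FWI model updates, as in the paper, is not shown, and the step and threshold values are illustrative.

      import numpy as np

      def soft_threshold(v, mu):
          return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

      def linearized_bregman(A, b, mu, delta, n_iter=10000):
          """Linearized Bregman iteration for the l1 basis-pursuit problem:
          v <- v + A^T (b - A x);  x <- delta * soft_threshold(v, mu)."""
          v = np.zeros(A.shape[1])
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              v += A.T @ (b - A @ x)
              x = delta * soft_threshold(v, mu)
          return x

      # toy compressive-sensing test: recover a sparse vector from underdetermined data
      rng = np.random.default_rng(0)
      n, m, k = 200, 80, 8
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      b = A @ x_true

      delta = 1.0 / np.linalg.norm(A, 2) ** 2     # step kept below 1/||A||^2 for stability
      x_rec = linearized_bregman(A, b, mu=10.0, delta=delta)
      print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))   # small = good recovery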

  13. Unity power factor converter

    NASA Technical Reports Server (NTRS)

    Wester, Gene W. (Inventor)

    1980-01-01

    A unity power factor converter capable of effecting either inversion (dc-to-dc) or rectification (ac-to-dc), and capable of providing bilateral power control from a DC source (or load) through an AC transmission line to a DC load (or source) for power flow in either direction, is comprised of comparators for comparing the AC current i with an AC signal i.sub.ref (or its phase inversion) derived from the AC ports to generate control signals to operate a switch control circuit for high speed switching to shape the AC current waveform to a sine waveform, and synchronize it in phase and frequency with the AC voltage at the AC ports, by selectively switching the connections to a series inductor as required to increase or decrease the current i.

  14. NEW APPLICATIONS IN THE INVERSION OF ACOUSTIC FULL WAVEFORM LOGS - RELATING MODE EXCITATION TO LITHOLOGY.

    USGS Publications Warehouse

    Paillet, Frederick L.; Cheng, C.H.; Meredith, J.A.

    1987-01-01

    Existing techniques for the quantitative interpretation of waveform data have been based on one of two fundamental approaches: (1) simultaneous identification of compressional and shear velocities; and (2) least-squares minimization of the difference between experimental waveforms and synthetic seismograms. Techniques based on the first approach do not always work, and those based on the second seem too numerically cumbersome for routine application during data processing. An alternative approach is tested here, in which synthetic waveforms are used to predict relative mode excitation in the composite waveform. Synthetic waveforms are generated for a series of lithologies ranging from hard, crystalline rocks (Vp equals 6. 0 km/sec. and Poisson's ratio equals 0. 20) to soft, argillaceous sediments (Vp equals 1. 8 km/sec. and Poisson's ratio equals 0. 40). The series of waveforms illustrates a continuous change within this range of rock properties. Mode energy within characteristic velocity windows is computed for each of the modes in the set of synthetic waveforms. The results indicate that there is a consistent variation in mode excitation in lithology space that can be used to construct a unique relationship between relative mode excitation and lithology.

  15. On estimating the phase of a periodic waveform in additive Gaussian noise, part 3

    NASA Technical Reports Server (NTRS)

    Rauch, L. L.

    1991-01-01

    Motivated by advances in signal processing technology that support more complex algorithms, researchers have taken a new look at the problem of estimating the phase and other parameters of a nearly periodic waveform in additive Gaussian noise, based on observation during a given time interval. Parts 1 and 2 are very briefly reviewed. In part 3, the actual performances of some of the highly nonlinear estimation algorithms of parts 1 and 2 are evaluated by numerical simulation using Monte Carlo techniques.

  16. Evaluating coastal sea surface heights based on a novel sub-waveform approach using sparse representation and conditional random fields

    NASA Astrophysics Data System (ADS)

    Uebbing, Bernd; Roscher, Ribana; Kusche, Jürgen

    2016-04-01

    Satellite radar altimeters allow global monitoring of mean sea level changes over the last two decades. However, coastal regions are less well observed due to influences on the returned signal energy by land located inside the altimeter footprint. The altimeter emits a radar pulse, which is reflected at the nadir-surface and measures the two-way travel time, as well as the returned energy as a function of time, resulting in a return waveform. Over the open ocean the waveform shape corresponds to a theoretical model which can be used to infer information on range corrections, significant wave height or wind speed. However, in coastal areas the shape of the waveform is significantly influenced by return signals from land, located in the altimeter footprint, leading to peaks which tend to bias the estimated parameters. Recently, several approaches dealing with this problem have been published, including utilizing only parts of the waveform (sub-waveforms), estimating the parameters in two steps or estimating additional peak parameters. We present a new approach in estimating sub-waveforms using conditional random fields (CRF) based on spatio-temporal waveform information. The CRF piece-wise approximates the measured waveforms based on a pre-derived dictionary of theoretical waveforms for various combinations of the geophysical parameters; neighboring range gates are likely to be assigned to the same underlying sub-waveform model. Depending on the choice of hyperparameters in the CRF estimation, the classification into sub-waveforms can either be more fine or coarse resulting in multiple sub-waveform hypotheses. After the sub-waveforms have been detected, existing retracking algorithms can be applied to derive water heights or other desired geophysical parameters from particular sub-waveforms. To identify the optimal heights from the multiple hypotheses, instead of utilizing a known reference height, we apply a Dijkstra-algorithm to find the "shortest path" of all possible heights. We apply our approach to Jason-2 data in different coastal areas, such as the Bangladesh coast or in the North Sea and compare our sea surface heights to various existing retrackers. Using the sub-waveform approach, we are able to derive meaningful water heights up to a few kilometers off the coast, where conventional retrackers, such as the standard ocean retracker, no longer provide useful data.

  17. a method of gravity and seismic sequential inversion and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Liu, G.; Meng, X.

    2011-12-01

    In this abstract, we introduce a sequential gravity and seismic inversion method to invert for density and velocity together. For the gravity inversion we use an iterative method based on a correlation imaging algorithm; for the seismic inversion we use full waveform inversion. The link between density and velocity is an empirical formula, the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. The gravity inversion is iterative: we first compute the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, multiply this value by a small density increment, and use the result as the initial density model. We then compute a forward result from this initial model, calculate the correlation imaging of the misfit between the observed data and the forward data, multiply it by a small density increment, add it to the current model, and repeat the procedure until a final density model is obtained. The seismic inversion method is based on the linearized acoustic wave equation written in the frequency domain; given an initial velocity model, a good velocity result can be obtained. In the sequential inversion of gravity and seismic data, a link formula is needed to convert between density and velocity; in our method we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of using traditional general-purpose GPU computing while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing, we use the GPU to accelerate both the gravity and the seismic inversion. Taking the gravity inversion as an example, its kernels are the gravity forward simulation and the correlation imaging; after parallelization on the GPU, in the 3D case, the original five CPU loops of the inversion module are reduced to three, and the original five CPU loops of the forward module are reduced to two. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
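
    The density-velocity link referred to above, Gardner's relation, is usually written rho = a * Vp^b. The sketch below uses the common textbook coefficients (a = 0.31, b = 0.25 for Vp in m/s and rho in g/cm^3), which are not necessarily the values the authors adopted.

      def gardner_density(vp_mps, a=0.31, b=0.25):
          """Gardner's relation: density (g/cm^3) from P-wave velocity (m/s).
          Coefficients are the usual textbook defaults, not the authors' values."""
          return a * vp_mps ** b

      def gardner_velocity(rho_gcc, a=0.31, b=0.25):
          """Inverse of Gardner's relation: P-wave velocity (m/s) from density (g/cm^3)."""
          return (rho_gcc / a) ** (1.0 / b)

      # round-trip check for a typical sedimentary velocity
      vp = 3000.0                        # m/s
      rho = gardner_density(vp)          # about 2.3 g/cm^3
      print(round(rho, 2), round(gardner_velocity(rho), 1))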

  18. 3D magnetotelluric inversion system with static shift correction and theoretical assessment in oil and gas exploration

    NASA Astrophysics Data System (ADS)

    Dong, H.; Kun, Z.; Zhang, L.

    2015-12-01

    This magnetotelluric (MT) system comprises static shift correction and 3D inversion. The correction method is based on 3D forward modeling studies and field tests. The static shift can be detected by quantitative analysis of the MT apparent parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed within the inversion. The method is an automatic, zero-cost computer processing technique that avoids additional field work and indoor processing, with good results shown in Figure 1a-e. Figure 1a shows a normal model (I) without any local heterogeneity. Figure 1b shows a static-shifted model (II) with two local heterogeneous bodies (10 and 1000 ohm.m). Figure 1c is the inversion result (A) for the synthetic data generated from model I. Figure 1d is the inversion result (B) for the static-shifted data generated from model II. Figure 1e is the inversion result (C) for the static-shifted data from model II, but with static shift correction. The results show that the correction method is useful. The 3D inversion algorithm is improved based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a frequency-based parallel structure, improved the computational efficiency, reduced the computer memory requirements, added topographic and marine factors, and added geological and geophysical constraints. As a result, the 3D inversion can even run on a PAD with high efficiency and accuracy. An application example of theoretical assessment in oil and gas exploration is shown in Figure 1f-i. The synthetic geophysical model consists of five layers (from top downwards): shale, limestone, gas, oil, and groundwater with limestone, overlying a basement rock. Figure 1f-g show the 3D model and the central profile. Figure 1h shows the central section of the 3D inversion; the results reproduce the synthetic model closely. Figure 1i shows that the seismic waveform reflects the interfaces of every layer overall, but the relative positions of the interfaces in two-way travel time vary, and the interface between limestone and oil at the sides of the section is not imaged. Thus 3-D MT can compensate for deficiencies in the seismic results, such as spurious reflection events and multiples.

  19. Seismic waveform inversion for core-mantle boundary topography

    NASA Astrophysics Data System (ADS)

    Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico

    2014-07-01

    The topography of the core-mantle boundary (CMB) is directly linked to the dynamics of both the mantle and the outer core, although it is poorly constrained and understood. Recent studies have produced topography models with mutual agreement up to degree 2. A broad-band waveform inversion strategy is introduced and applied here, with relatively low computational cost and based on a first-order Born approximation. Its performance is validated using synthetic waveforms calculated in theoretical earth models that include different topography patterns with varying lateral wavelengths, from 600 to 2500 km, and magnitudes (˜10 km peak-to-peak). The source-receiver geometry focuses mainly on the Pdiff, PKP, PcP and ScS phases. The results show that PKP branches, PcP and ScS generally perform well and in a similar fashion, while Pdiff yields unsatisfactory results. We investigate also how 3-D mantle correction influences the output models, and find that despite the disturbance introduced, the models recovered do not appear to be biased, provided that the 3-D model is correct. Using cross-correlated traveltimes, we derive new topography models from both P and S waves. The static corrections used to remove the mantle effect are likely to affect the inversion, compromising the agreement between models derived from P and S data. By modelling traveltime residuals starting from sensitivity kernels, we show how the simultaneous use of volumetric and boundary kernels can reduce the bias coming from mantle structures. The joint inversion approach should be the only reliable method to invert for CMB topography using absolute cross-correlation traveltimes.

  20. Models of brachial to finger pulse wave distortion and pressure decrement.

    PubMed

    Gizdulich, P; Prentza, A; Wesseling, K H

    1997-03-01

    To model the pulse wave distortion and pressure decrement occurring between brachial and finger arteries. Distortion reversion and decrement correction were also our aims. Brachial artery pressure was recorded intra-arterially and finger pressure was recorded non-invasively by the Finapres technique in 53 adult human subjects. Mean pressure was subtracted from each pressure waveform and Fourier analysis applied to the pulsations. A distortion model was estimated for each subject and averaged over the group. The average inverse model was applied to the full finger pressure waveform. The pressure decrement was modelled by multiple regression on finger systolic and diastolic levels. Waveform distortion could be described by a general, frequency dependent model having a resonance at 7.3 Hz. The general inverse model has an anti-resonance at this frequency. It converts finger to brachial pulsations thereby reducing average waveform distortion from 9.7 (s.d. 3.2) mmHg per sample for the finger pulse to 3.7 (1.7) mmHg for the converted pulse. Systolic and diastolic level differences between finger and brachial arterial pressures changed from -4 (15) and -8 (11) to +8 (14) and +8 (12) mmHg, respectively, after inverse modelling, with pulse pressures correct on average. The pressure decrement model reduced both the mean and the standard deviation of systolic and diastolic level differences to 0 (13) and 0 (8) mmHg. Diastolic differences were thus reduced most. Brachial to finger pulse wave distortion due to wave reflection in arteries is almost identical in all subjects and can be modelled by a single resonance. The pressure decrement due to flow in arteries is greatest for high pulse pressures superimposed on low means.
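    The anti-resonance inverse model described above can be illustrated with a simple frequency-domain filter. The 7.3 Hz resonance is taken from the abstract; the second-order transfer function and the damping factor Q are illustrative assumptions, not the paper's fitted average model.

```python
# Hedged sketch: convert finger pulsations to brachial pulsations by dividing
# out a hypothetical resonant (brachial -> finger) transfer function.
import numpy as np

def finger_to_brachial(finger_pulse, fs, f0=7.3, Q=2.0):
    n = len(finger_pulse)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    # assumed second-order resonance peaking at f0 (brachial -> finger)
    H = 1.0 / (1.0 - (f / f0) ** 2 + 1j * f / (Q * f0))
    mean_level = finger_pulse.mean()
    pulsation = finger_pulse - mean_level            # model acts on pulsations only
    spec = np.fft.rfft(pulsation)
    brachial = np.fft.irfft(spec / H, n=n)           # inverse model = anti-resonance
    return brachial + mean_level

# usage: 4 s of a synthetic 1.2 Hz pulse sampled at 100 Hz
fs = 100.0
t = np.arange(0, 4, 1 / fs)
finger = 90 + 20 * np.sin(2 * np.pi * 1.2 * t)
brachial_est = finger_to_brachial(finger, fs)
```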

  1. Kinematic Source Rupture Process of the 2008 Iwate-Miyagi Nairiku Earthquake, a MW6.9 thrust earthquake in northeast Japan, using Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2008-12-01

    The 2008 Iwate-Miyagi Nairiku earthquake (MJMA 7.2) on June 14, 2008, is a thrust-type inland crustal earthquake that occurred in northeastern Honshu, Japan. In order to examine the strong motion generation process of this event, the source rupture process is estimated by kinematic waveform inversion using strong motion data. Strong motion data from the K-NET and KiK-net stations and Aratozawa Dam are used. These stations are located 3-94 km from the epicenter. The original acceleration time histories are integrated into velocity and band-pass filtered between 0.05 and 1 Hz. To obtain the detailed source rupture process, an appropriate velocity structure model for the Green's functions should be used. We estimated a one-dimensional velocity structure model for each strong motion station by waveform modeling of aftershock records. The elastic wave velocity, density, and Q-values for four sedimentary layers are assumed following previous studies. The thickness of each sedimentary layer depends on the station and is estimated to fit the observed aftershock waveforms by optimization using a genetic algorithm. A uniform layered structure model is assumed for the crust and upper mantle below the seismic bedrock. We succeeded in obtaining a reasonable velocity structure model for each station that gives a good fit to the main S-wave part of the aftershock observations. The source rupture process of the mainshock is estimated by linear kinematic waveform inversion using multiple time windows (Hartzell and Heaton, 1983). A fault plane model is assumed following the moment tensor solution by F-net, NIED. The strike and dip angles are 209° and 51°, respectively. The rupture starting point is fixed at the hypocenter located by the JMA. The obtained source model shows a large slip area in the shallow portion of the fault plane approximately 6 km southwest of the hypocenter. The rupture of this asperity finishes within about 9 s. The large slip area corresponds to the area of surface breaks reported by the field survey group (e.g., AIST/GSJ, 2008), which supports the existence of large slip close to the ground surface. However, most of the surface offsets found by the field survey are less than 0.5 m, whereas the slip on the shallow asperity in the source inversion result is 3-4 m. North of the hypocenter, the estimated slip is small. The slip direction is almost pure dip-slip over the entire fault (the northwest side moves up relative to the southeast side). The total seismic moment is 2.6 × 10^19 Nm (MW 6.9). Acknowledgments: Strong motion data of K-NET and KiK-net operated by the National Research Institute for Earth Science and Disaster Prevention are used. Strong motion data of Aratozawa Dam obtained by the Miyagi prefecture government are also used in this study.

  2. Full Seismic Waveform Tomography of the Japan region using Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Steptoe, Hamish; Fichtner, Andreas; Rickers, Florian; Trampert, Jeannot

    2013-04-01

    We present a full-waveform tomographic model of the Japan region based on spectral-element wave propagation, adjoint techniques and seismic data from dense station networks. This model is intended to further our understanding of both the complex regional tectonics and the finite rupture processes of large earthquakes. The shallow Earth structure of the Japan region has been the subject of considerable tomographic investigation. The islands of Japan exist in an area of significant plate complexity: subduction related to the Pacific and Philippine Sea plates is responsible for the majority of seismicity and volcanism of Japan, whilst smaller micro-plates in the region, including the Okhotsk, and Okinawa and Amur, part of the larger North America and Eurasia plates respectively, contribute significant local intricacy. In response to the need to monitor and understand the motion of these plates and their associated faults, numerous seismograph networks have been established, including the 768 station high-sensitivity Hi-net network, 84 station broadband F-net and the strong-motion seismograph networks K-net and KiK-net in Japan. We also include the 55 station BATS network of Taiwan. We use this exceptional coverage to construct a high-resolution model of the Japan region from the full-waveform inversion of over 15,000 individual component seismograms from 53 events that occurred between 1997 and 2012. We model these data using spectral-element simulations of seismic wave propagation at a regional scale over an area from 120°-150°E and 20°-50°N to a depth of around 500 km. We quantify differences between observed and synthetic waveforms using time-frequency misfits allowing us to separate both phase and amplitude measurements whilst exploiting the complete waveform at periods of 15-60 seconds. Fréchet kernels for these misfits are calculated via the adjoint method and subsequently used in an iterative non-linear conjugate-gradient optimization. Finally, we employ custom smoothing algorithms to remove the singularities of the Fréchet kernels and artifacts introduced by the heterogeneous coverage in oceanic regions of the model.

  3. A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry

    NASA Astrophysics Data System (ADS)

    Wang, Chisheng; Li, Qingquan; Liu, Yanxiong; Wu, Guofeng; Liu, Peng; Ding, Xiaoli

    2015-03-01

    Due to the low-cost and lightweight units, single-wavelength LiDAR bathymetric systems are an ideal option for shallow-water (<12 m) bathymetry. However, one disadvantage of such systems is the lack of near-infrared and Raman channels, which results in difficulties in extracting the water surface. Therefore, the choice of a suitable waveform processing method is extremely important to guarantee the accuracy of the bathymetric retrieval. In this paper, we test six algorithms for single-wavelength bathymetric waveform processing, i.e. peak detection (PD), the average square difference function (ASDF), Gaussian decomposition (GD), quadrilateral fitting (QF), Richardson-Lucy deconvolution (RLD), and Wiener filter deconvolution (WD). To date, most of these algorithms have previously only been applied in topographic LiDAR waveforms captured over land. A simulated dataset and an Optech Aquarius dataset were used to assess the algorithms, with the focus being on their capability of extracting the depth and the bottom response. The influences of a number of water and equipment parameters were also investigated by the use of a Monte Carlo method. The results showed that the RLD method had a superior performance in terms of a high detection rate and low errors in the retrieved depth and magnitude. The attenuation coefficient, noise level, water depth, and bottom reflectance had significant influences on the measurement error of the retrieved depth, while the effects of scan angle and water surface roughness were not so obvious.
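    Of the six algorithms compared above, Richardson-Lucy deconvolution (RLD) is sketched below in one dimension. The Gaussian system response `psf`, the noise floor and the iteration count are illustrative assumptions; real bathymetric processing would calibrate the emitted system waveform.

```python
# Hedged 1-D Richardson-Lucy deconvolution of a LiDAR return waveform.
import numpy as np

def richardson_lucy_1d(waveform, psf, n_iter=30, eps=1e-12):
    """Iteratively deconvolve a recorded waveform by the system response."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(waveform, waveform.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = waveform / (blurred + eps)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# usage: surface + bottom returns blurred by a Gaussian system response
t = np.arange(400, dtype=float)
truth = np.zeros_like(t); truth[100] = 1.0; truth[180] = 0.4     # surface, bottom
psf = np.exp(-0.5 * (np.arange(-30, 31) / 8.0) ** 2)
recorded = np.convolve(truth, psf / psf.sum(), mode="same") + 1e-3
sharpened = richardson_lucy_1d(recorded, psf)                    # peaks re-sharpened
```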

  4. Characterization of moderate ash-and-gas explosions at Santiaguito volcano, Guatemala, from infrasound waveform inversion and thermal infrared measurements

    NASA Astrophysics Data System (ADS)

    Angelis, S. De; Lamb, O. D.; Lamur, A.; Hornby, A. J.; von Aulock, F. W.; Chigna, G.; Lavallée, Y.; Rietbrock, A.

    2016-06-01

    The rapid discharge of gas and rock fragments during volcanic eruptions generates acoustic infrasound. Here we present results from the inversion of infrasound signals associated with small and moderate gas-and-ash explosions at Santiaguito volcano, Guatemala, to retrieve the time history of mass eruption rate at the vent. Acoustic waveform inversion is complemented by analyses of thermal infrared imagery to constrain the volume and rise dynamics of the eruption plume. Finally, we combine results from the two methods in order to assess the bulk density of the erupted mixture, constrain the timing of the transition from a momentum-driven jet to a buoyant plume, and to evaluate the relative volume fractions of ash and gas during the initial thrust phase. Our results demonstrate that eruptive plumes associated with small-to-moderate size explosions at Santiaguito only carry minor fractions of ash, suggesting that these events may not involve extensive magma fragmentation in the conduit.

  5. Characterization of moderate ash-and-gas explosions at Santiaguito volcano, Guatemala, from infrasound waveform inversion and thermal infrared measurements.

    PubMed

    Angelis, S De; Lamb, O D; Lamur, A; Hornby, A J; von Aulock, F W; Chigna, G; Lavallée, Y; Rietbrock, A

    2016-06-28

    The rapid discharge of gas and rock fragments during volcanic eruptions generates acoustic infrasound. Here we present results from the inversion of infrasound signals associated with small and moderate gas-and-ash explosions at Santiaguito volcano, Guatemala, to retrieve the time history of mass eruption rate at the vent. Acoustic waveform inversion is complemented by analyses of thermal infrared imagery to constrain the volume and rise dynamics of the eruption plume. Finally, we combine results from the two methods in order to assess the bulk density of the erupted mixture, constrain the timing of the transition from a momentum-driven jet to a buoyant plume, and to evaluate the relative volume fractions of ash and gas during the initial thrust phase. Our results demonstrate that eruptive plumes associated with small-to-moderate size explosions at Santiaguito only carry minor fractions of ash, suggesting that these events may not involve extensive magma fragmentation in the conduit.

  6. Hemodynamic Assessment of Compliance of Pre-Stressed Pulmonary Valve-Vasculature in Patient Specific Geometry Using an Inverse Algorithm

    NASA Astrophysics Data System (ADS)

    Hebbar, Ullhas; Paul, Anup; Banerjee, Rupak

    2016-11-01

    Image based modeling is finding increasing relevance in assisting diagnosis of Pulmonary Valve-Vasculature Dysfunction (PVD) in congenital heart disease patients. This research presents compliant artery-blood interaction in a patient-specific Pulmonary Artery (PA) model. This is an improvement over our previous numerical studies, which assumed rigid-walled arteries. The impedance of the arteries and the energy transfer from the Right Ventricle (RV) to the PA are governed by compliance, which in turn is influenced by the level of pre-stress in the arteries. In order to evaluate the pre-stress, an inverse algorithm was developed using an in-house script written in MATLAB and Python, and implemented using the Finite Element Method (FEM). This analysis used a patient-specific material model developed by our group, in conjunction with measured pressure (invasive) and velocity (non-invasive) values. The analysis was performed on an FEM solver, and preliminary results indicated that the Main PA (MPA) exhibited higher compliance as well as increased hysteresis over the cardiac cycle when compared with the Left PA (LPA). The computed compliance values for the MPA and LPA were 14% and 34% lower than the corresponding measured values. Further, the computed pressure drop and flow waveforms were in close agreement with the measured values. In conclusion, compliant artery-blood interaction models of patient-specific geometries can play an important role in hemodynamics based diagnosis of PVD.

  7. A Software Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.

    2007-01-01

    Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.

  8. Source rupture process of the 2016 Kaikoura, New Zealand earthquake estimated from the kinematic waveform inversion of strong-motion data

    NASA Astrophysics Data System (ADS)

    Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo

    2018-03-01

    On 2016 November 13, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damage and had a great impact on the local environment and society. Referring to the tectonic environment and mapped active faults, field investigation and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, and the focal mechanism is one of the most complicated among historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests that the rupture initiates in the epicentral area near the Humps fault and then propagates northeastward along several faults, until the offshore Needles fault. The Mw 7.8 event is a mixture of right-lateral strike-slip and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offset distribution of the ruptured region from the slips of the upper subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.

  9. Characterization of a viscoelastic heterogeneous object with an effective model by nonlinear full waveform inversion

    NASA Astrophysics Data System (ADS)

    Mesgouez, A.

    2018-05-01

    The determination of equivalent viscoelastic properties of heterogeneous objects remains challenging in various scientific fields such as (geo)mechanics, geophysics or biomechanics. The present investigation addresses the issue of the identification of effective constitutive properties of a binary object by using a nonlinear and full waveform inversion scheme. The inversion process, without any regularization technique or a priori information, aims at minimizing directly the discrepancy between the full waveform responses of a bi-material viscoelastic cylindrical object and its corresponding effective homogeneous object. It involves the retrieval of five constitutive equivalent parameters. Numerical simulations are performed in a laboratory-scale two-dimensional configuration: a transient acoustic plane wave impacts the object and the diffracted fluid pressure, solid stress or velocity component fields are determined using a semi-analytical approach. Results show that the retrieval of the density and of the real parts of both the compressional and the shear wave velocities have been carried out successfully regarding the number and location of sensors, the type of sensors, the size of the searching space, the frequency range of the incident plane pressure wave, and the change in the geometric or mechanical constitution of the bi-material object. The retrieval of the imaginary parts of the wave velocities can reveal in some cases the limitations of the proposed approach.

  10. A Modified Normalization Technique for Frequency-Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Hwang, J.; Jeong, G.; Min, D. J.; KIM, S.; Heo, J. Y.

    2016-12-01

    Full waveform inversion (FWI) is a technique to estimate subsurface material properties by minimizing a misfit function built from residuals between field and modeled data. To achieve computational efficiency, FWI has been performed in the frequency domain by carrying out modeling in the frequency domain, while the observed time-series data are Fourier-transformed. One of the main drawbacks of seismic FWI is that it easily gets stuck in local minima because of the lack of low-frequency data. To compensate for this limitation, damped wavefields are used, as in Laplace-domain waveform inversion. Using damped wavefields in FWI generates low-frequency components and helps recover long-wavelength structures. With these newly generated low-frequency components, we propose a modified frequency-normalization technique, which boosts the contribution of low-frequency components to the model parameter update. In this study, we introduce the modified frequency-normalization technique, which effectively amplifies the low-frequency components of damped wavefields. Our method is demonstrated on synthetic data for the SEG/EAGE salt model. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (No. 20168510030830) and by the Dual Use Technology Program, granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea.
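    The general idea of damped (Laplace-Fourier) wavefields and per-frequency normalization can be sketched as follows. The damping constant and the specific normalization weights are illustrative assumptions; the paper's modified normalization scheme is not reproduced here.

```python
# Hedged sketch: damping time-domain data injects usable low-frequency energy,
# and dividing each frequency residual by the observed amplitude keeps those
# low frequencies from being swamped by higher ones.
import numpy as np

def damped_spectrum(trace, dt, damping=2.0):
    """Fourier transform of exp(-damping*t)-weighted data."""
    t = np.arange(len(trace)) * dt
    spec = np.fft.rfft(trace * np.exp(-damping * t))
    return spec, np.fft.rfftfreq(len(trace), dt)

def normalized_residual(obs, syn, dt, damping=2.0):
    """Per-frequency amplitude-normalized residual of damped wavefields."""
    D_obs, f = damped_spectrum(obs, dt, damping)
    D_syn, _ = damped_spectrum(syn, dt, damping)
    return (D_syn - D_obs) / (np.abs(D_obs) + 1e-12), f

# usage on two toy traces
dt = 0.004
obs = np.sin(2 * np.pi * 8 * np.arange(1000) * dt)
syn = np.sin(2 * np.pi * 8 * (np.arange(1000) * dt - 0.02))
res, freqs = normalized_residual(obs, syn, dt)
```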

  11. Stability and uncertainty of finite-fault slip inversions: Application to the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.

    2007-01-01

    The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.

  12. Low frequency full waveform seismic inversion within a tree based Bayesian framework

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Kaplan, Sam; Washbourne, John; Albertin, Uwe

    2018-01-01

    Limited illumination, insufficient offset, noisy data and poor starting models can pose challenges for seismic full waveform inversion. We present an application of a tree based Bayesian inversion scheme which attempts to mitigate these problems by accounting for data uncertainty while using a mildly informative prior about subsurface structure. We sample the resulting posterior model distribution of compressional velocity using a trans-dimensional (trans-D) or Reversible Jump Markov chain Monte Carlo method in the wavelet transform domain of velocity. This allows us to attain rapid convergence to a stationary distribution of posterior models while requiring a limited number of wavelet coefficients to define a sampled model. Two synthetic, low frequency, noisy data examples are provided. The first example is a simple reflection + transmission inverse problem, and the second uses a scaled version of the Marmousi velocity model, dominated by reflections. Both examples are initially started from a semi-infinite half-space with incorrect background velocity. We find that the trans-D tree based approach together with parallel tempering for navigating rugged likelihood (i.e. misfit) topography provides a promising, easily generalized method for solving large-scale geophysical inverse problems which are difficult to optimize, but where the true model contains a hierarchy of features at multiple scales.
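    The wavelet-domain parameterization that makes the trans-dimensional sampling compact can be illustrated with a small sketch: a velocity profile is defined by a variable-size set of active wavelet coefficients. PyWavelets is assumed as the transform library, and the "birth" move below is a toy illustration, not the paper's full reversible-jump scheme.

```python
# Hedged sketch of a sparse wavelet-coefficient model parameterization.
import numpy as np
import pywt

N, WAVELET, LEVELS = 256, "haar", 6

def coeffs_to_velocity(active):
    """Map a dict {(subband, index): value} of active wavelet coefficients
    to a 1-D velocity profile via the inverse wavelet transform."""
    template = pywt.wavedec(np.zeros(N), WAVELET, level=LEVELS)
    for (band, idx), val in active.items():
        template[band][idx] = val
    return pywt.waverec(template, WAVELET)[:N]

# toy "birth" move: the model dimension grows by one coefficient
rng = np.random.default_rng(1)
lengths = [len(c) for c in pywt.wavedec(np.zeros(N), WAVELET, level=LEVELS)]
active = {(0, 0): 3000.0}                         # one coarse-scale term
band = int(rng.integers(0, len(lengths)))
idx = int(rng.integers(0, lengths[band]))
active[(band, idx)] = 200.0 * rng.standard_normal()
velocity_model = coeffs_to_velocity(active)       # 256-sample profile
```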

  13. A Monte Carlo approach applied to ultrasonic non-destructive testing

    NASA Astrophysics Data System (ADS)

    Mosca, I.; Bilgili, F.; Meier, T.; Sigloch, K.

    2012-04-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and architectural structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to combine non-destructive testing with a theoretical data analysis and hence to contribute to conservation strategies of archaeological and architectural structures. We analyze ultrasonic waveforms measured at the surface of a variety of samples, and define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmark the propagation of ultrasonic surface waves in typical materials tested with a non-destructive technique (e.g., marble, unweathered and weathered concrete and natural stone).

  14. A Monte Carlo approach applied to ultrasonic non-destructive testing

    NASA Astrophysics Data System (ADS)

    Mosca, I.; Bilgili, F.; Meier, T. M.; Sigloch, K.

    2011-12-01

    Non-destructive testing based on ultrasound allows us to detect, characterize and size discrete flaws in geotechnical and engineering structures and materials. This information is needed to determine whether such flaws can be tolerated in future service. In typical ultrasonic experiments, only the first-arriving P-wave is interpreted, and the remainder of the recorded waveform is neglected. Our work aims at understanding surface waves, which are strong signals in the later wave train, with the ultimate goal of full waveform tomography. At present, even the structural estimation of layered media is still challenging because material properties of the samples can vary widely, and good initial models for inversion do not often exist. The aim of the present study is to analyze ultrasonic waveforms measured at the surface of Plexiglas and rock samples, and to define the behaviour of surface waves in structures of increasing complexity. The tremendous potential of ultrasonic surface waves becomes an advantage only if numerical forward modelling tools are available to describe the waveforms accurately. We compute synthetic full seismograms as well as group and phase velocities for the data. We invert them for the elastic properties of the sample via a global search of the parameter space, using the Neighbourhood Algorithm. Such a Monte Carlo approach allows us to perform a complete uncertainty and resolution analysis, but the computational cost is high and increases quickly with the number of model parameters. Therefore it is practical only for defining the seismic properties of media with a limited number of degrees of freedom, such as layered structures. We have applied this approach to both synthetic layered structures and real samples. The former contributed to benchmark the propagation of ultrasonic surface waves in typical materials tested with a non-destructive technique (e.g., marble, unweathered and weathered concrete and natural stone).

  15. Global seismic attenuation imaging using full-waveform inversion: a comparative assessment of different choices of misfit functionals

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Haydar; Romanowicz, Barbara

    2018-02-01

    We present the results of synthetic tests that aim at evaluating the relative performance of three different definitions of misfit functionals in the context of 3-D imaging of shear wave attenuation in the earth's upper mantle at the global scale, using long-period full-waveform data. The synthetic tests are conducted with simple hypothetical upper-mantle models that contain Qμ anomalies centred at different depths and locations, with or without additional seismic velocity anomalies. To build synthetic waveform data sets, we performed simulations of 50 events in the hypothetical (target) models, using the spectral element method, filtered in the period range 60-400 s. The selected events are chosen among 273 events used in the development of radially anisotropic model SEMUCB-WM1 and recorded at 495 stations worldwide. The synthetic Z-component waveforms correspond to paths and time intervals (fundamental mode and overtone Rayleigh waves) that exist in the real waveform data set. The inversions for shear attenuation structure are carried out using a Gauss-Newton optimization scheme in which the gradient and Hessian are computed using normal mode perturbation theory. The three different misfit functionals considered are based on time domain waveform (WF) and waveform envelope (E-WF) differences, as well as spectral amplitude ratios (SA), between observed and predicted waveforms. We evaluate the performance of the three misfit functional definitions in the presence of seismic noise and unresolved S-wave velocity heterogeneity and discuss the relative importance of physical dispersion effects due to 3-D Qμ structure. We observed that the performance of WF is poorer than the other two misfit functionals in recovering attenuation structure, unless anelastic dispersion effects are taken into account in the calculation of partial derivatives. WF also turns out to be more sensitive to seismic noise than E-WF and SA. Overall, SA performs best for attenuation imaging. Our tests show that it is important to account for 3-D elastic effects (focusing) before inverting for Qμ. Additionally, we show that including high signal-to-noise ratio overtone wave packets is necessary to resolve Qμ structure at depths greater than 250 km.
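    The three misfit definitions compared above can be written down compactly; the sketch below gives plain numerical versions of the waveform (WF), envelope (E-WF) and spectral-amplitude-ratio (SA) misfits for a single trace pair. The exact windowing, weighting and tapering used in the study are not reproduced.

```python
# Hedged single-trace versions of the WF, E-WF and SA misfit functionals.
import numpy as np
from scipy.signal import hilbert

def misfit_wf(obs, syn):
    """Time-domain waveform difference."""
    return 0.5 * np.sum((syn - obs) ** 2)

def misfit_envelope(obs, syn):
    """Waveform-envelope difference (Hilbert-transform envelopes)."""
    e_obs, e_syn = np.abs(hilbert(obs)), np.abs(hilbert(syn))
    return 0.5 * np.sum((e_syn - e_obs) ** 2)

def misfit_spectral_ratio(obs, syn, eps=1e-12):
    """Logarithmic spectral amplitude ratio."""
    A_obs = np.abs(np.fft.rfft(obs)) + eps
    A_syn = np.abs(np.fft.rfft(syn)) + eps
    return 0.5 * np.sum(np.log(A_syn / A_obs) ** 2)
```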

  16. Automatic microseismic event picking via unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang

    2018-01-01

    Effective and efficient arrival picking plays an important role in microseismic and earthquake data processing and imaging. Widely used short-term-average/long-term-average ratio (STA/LTA) based arrival picking algorithms suffer from sensitivity to moderate-to-strong random ambient noise. To make state-of-the-art arrival picking approaches effective, microseismic data first need to be pre-processed, for example by removing a sufficient amount of noise, and then analysed by arrival pickers. To overcome the noise issue in arrival picking for weak microseismic or earthquake events, I leverage machine learning techniques to help recognize seismic waveforms in microseismic or earthquake data. Because supervised machine learning algorithms depend on large volumes of well-designed training data, I utilize an unsupervised machine learning algorithm to cluster the time samples into two groups, that is, waveform points and non-waveform points. The fuzzy clustering algorithm has been demonstrated to be effective for this purpose. A group of synthetic, real microseismic and earthquake data sets with different levels of complexity shows that the proposed method is much more robust than the state-of-the-art STA/LTA method in picking microseismic events, even in the case of moderately strong background noise.
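    For reference, the STA/LTA baseline that the study compares against is sketched below with assumed window lengths; this is not the paper's fuzzy-clustering picker.

```python
# Minimal STA/LTA reference picker on squared amplitudes.
import numpy as np

def sta_lta(trace, nsta=20, nlta=200):
    """Classic short-term-average / long-term-average ratio."""
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(energy))
    for i in range(nlta, len(energy)):              # windows end at sample i
        sta = (csum[i + 1] - csum[i + 1 - nsta]) / nsta
        lta = (csum[i + 1] - csum[i + 1 - nlta]) / nlta
        ratio[i] = sta / (lta + 1e-12)
    return ratio

def pick_first_arrival(trace, threshold=3.0, **kw):
    """Index of the first sample whose STA/LTA exceeds the threshold."""
    above = np.flatnonzero(sta_lta(trace, **kw) > threshold)
    return int(above[0]) if above.size else None

# usage: noisy trace with an emergent arrival at sample 2000
rng = np.random.default_rng(0)
trace = rng.standard_normal(4000)
trace[2000:] += 4 * np.sin(0.3 * np.arange(2000)) * np.exp(-np.arange(2000) / 600)
pick = pick_first_arrival(trace)
```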

  17. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

    This study seeks a numerical algorithm that optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. The best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approximately 0.4% and frequency errors are reduced by a factor of approximately 20 at 303 K, to approximately 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.

  18. Algorithm theoretical basis for GEDI level-4A footprint above ground biomass density.

    NASA Astrophysics Data System (ADS)

    Kellner, J. R.; Armston, J.; Blair, J. B.; Duncanson, L.; Hancock, S.; Hofton, M. A.; Luthcke, S. B.; Marselis, S.; Tang, H.; Dubayah, R.

    2017-12-01

    The Global Ecosystem Dynamics Investigation is a NASA Earth-Venture-2 mission that will place a multi-beam waveform lidar instrument on the International Space Station. GEDI data will provide globally representative measurements of vertical height profiles (waveforms) and estimates of above ground carbon stocks throughout the planet's temperate and tropical regions. Here we describe the current algorithm theoretical basis for the L4A footprint above ground biomass data product. The L4A data product is above ground biomass density (AGBD, Mg · ha-1) at the scale of individual GEDI footprints (25 m diameter). Footprint AGBD is derived from statistical models that relate waveform height metrics to field-estimated above ground biomass. The field estimates are from long-term permanent plot inventories in which all free-standing woody plants greater than a diameter size threshold have been identified and mapped. We simulated GEDI waveforms from discrete-return airborne lidar data using the GEDI waveform simulator. We associated height metrics from simulated waveforms with field-estimated AGBD at 61 sites in temperate and tropical regions of North and South America, Europe, Africa, Asia and Australia. We evaluated the ability of empirical and physically-based regression and machine learning models to predict AGBD at the footprint level. Our analysis benchmarks the performance of these models in terms of site and region-specific accuracy and transferability using a globally comprehensive calibration and validation dataset.

  19. Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data

    NASA Astrophysics Data System (ADS)

    Singh, Vishwajit

    2016-04-01

    This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Various approaches are employed to build an algorithm that picks interval velocities for 1000-5000 continuous normal moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for these approaches. The key ingredients used in the velocity analysis stage are a semblance grid and a starting model of interval velocity. Basin-Hopping optimization is employed to drive the misfit function toward a minimum. A SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity estimation. Synthetic data case studies assess the performance of the velocity picker, generating models that fit the semblance peaks well. A similar linear relationship between average depth and reflection time for the synthetic and estimated models suggests using the picked interval velocities as the starting model for full waveform inversion, to recover a more accurate velocity structure of the subsurface. The challenges can be categorized as (1) building an accurate starting model for recovering a more accurate velocity structure of the subsurface, and (2) reducing the computational cost of the algorithm by pre-calculating the semblance grid to make automatic picking more feasible.
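    The role of Basin-Hopping in such a workflow can be illustrated with a small sketch that fits interval velocities whose implied RMS velocities match semblance-picked values. The layer times, picked velocities and Dix-style forward relation below are made-up illustrations, not the paper's SLOW workflow.

```python
# Hedged sketch: Basin-Hopping (scipy) over a toy interval-velocity misfit.
import numpy as np
from scipy.optimize import basinhopping

t0 = np.array([0.4, 0.9, 1.5, 2.2])                     # zero-offset two-way times (s), assumed
v_rms_picked = np.array([1800., 2100., 2350., 2600.])   # hypothetical semblance picks (m/s)

def vrms_from_interval(v_int):
    """Dix-style forward relation: RMS velocity from interval velocities."""
    dt = np.diff(np.concatenate(([0.0], t0)))
    return np.sqrt(np.cumsum(v_int ** 2 * dt) / t0)

def misfit(v_int):
    return np.sum((vrms_from_interval(v_int) - v_rms_picked) ** 2)

x0 = np.full(len(t0), 2000.0)                           # starting interval-velocity model
result = basinhopping(misfit, x0, niter=200,
                      minimizer_kwargs={"method": "L-BFGS-B",
                                        "bounds": [(1400., 6000.)] * len(t0)})
v_int_est = result.x                                    # fitted interval velocities
```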

  20. Development and Validation of a Machine Learning Algorithm and Hybrid System to Predict the Need for Life-Saving Interventions in Trauma Patients

    DTIC Science & Technology

    2014-01-01

    were stored at a rate of 1 Hz. In addition, ECG waveform data from a single lead and pleth waveform data from a thumb-mounted pulse oximeter to the...blood oxygenation (SpO2). Combinations of these vital signs were also used to derive other measurements including shock index (SI = HR/SBP) and pulse ...combining all vital signs, trends, and pulse characteristics recorded by the monitor, and applying a multivariate sensor fusion algorithm that generates

  1. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy. Surrogates built in this paper, as well as others, are available from GWSurrogate, a publicly available python package.
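    The first offline step, greedy selection of a reduced basis, can be sketched as below. The damped-sinusoid "waveform family" is a stand-in for the effective-one-body waveforms of the paper, and the greedy loop shown is a generic reduced-basis construction, not the authors' code.

```python
# Hedged sketch of greedy reduced-basis selection over a toy waveform family.
import numpy as np

def waveform(p, t):
    """Toy one-parameter waveform family (placeholder for EOB waveforms)."""
    return np.exp(-0.1 * p * t) * np.sin(p * t)

def greedy_basis(params, t, tol=1e-6):
    training = np.array([waveform(p, t) for p in params])
    training /= np.linalg.norm(training, axis=1, keepdims=True)
    basis, chosen = [], []
    errors = np.ones(len(params))
    while errors.max() > tol:
        k = int(np.argmax(errors))                      # worst-approximated waveform
        residual = training[k] - sum(b * (b @ training[k]) for b in basis)
        basis.append(residual / np.linalg.norm(residual))
        chosen.append(params[k])
        B = np.array(basis)
        errors = np.linalg.norm(training - (training @ B.T) @ B, axis=1) ** 2
    return np.array(basis), chosen

t = np.linspace(0, 20, 2000)
basis, greedy_points = greedy_basis(np.linspace(1.0, 3.0, 200), t)
```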

  2. MODELING WAVE FORM EFFECTS IN ESPS: THE ALGORITHM IN ESPM AND ESPVI

    EPA Science Inventory

    The paper details the ways in which waveform effects in electrostatic precipitators (ESPs) are modeled. The effects of waveforms on particle charging, space charge corona suppression, and sparking are examined. The paper shows how the models extend these results to the case of inte...

  3. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: first, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach the global minimum. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: first, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering velocity regions where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
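    A plain single-level particle swarm optimizer is sketched below for orientation; the multi-level scheme with frozen-Gaussian-approximation forward solvers of varying fidelity is not reproduced, and the toy misfit stands in for the FWI objective.

```python
# Minimal single-level PSO sketch minimizing a generic misfit function.
import numpy as np

def pso(misfit, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([misfit(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# usage on a non-convex toy misfit (Rastrigin), stand-in for an FWI objective
bounds = np.array([[-5.0, 5.0]] * 2)
best_x, best_f = pso(lambda m: np.sum(m**2) + 10*np.sum(1 - np.cos(2*np.pi*m)), bounds)
```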

  4. Source mechanism analysis of central Aceh earthquake July 2, 2013 Mw 6.2 using moment tensor inversion with BMKG waveform data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasetyo, Retno Agung, E-mail: prasetyo.agung@bmkg.go.id; Heryandoko, Nova; Afnimar

    The source mechanism of the earthquake of July 2, 2013 was investigated using moment tensor inversion, and the result was also compared with field observations. Waveform data from five stations of BMKG's seismic network were used to estimate the mechanism of the earthquake, namely KCSI, MLSI, LASI, TPTI and SNSI. Mainshock data were taken over 200 seconds and filtered using a Butterworth band-pass filter from 0.03 to 0.05 Hz. The moment tensor inversion method is applied based on the point source assumption. Furthermore, the Green's functions were calculated using the extended reflectivity method as modified by Kohketsu. The inversion result showed strike-slip faulting, with a nodal plane strike/dip/rake of 124/80.6/152.8 and a minimum variance value of 0.3285 at a depth of 6 km (centroid). It is categorized as a shallow earthquake. Field observation indicated that the buildings were oriented to the east, which can be related to the southwest dip direction with a slip (rake) of 152 degrees. In conclusion, the pressure (P) and tension (T) axes indicate that the dominant compression comes from the south, caused by the push of the Indo-Australian plate.

  5. Frequency-domain ultrasound waveform tomography breast attenuation imaging

    NASA Astrophysics Data System (ADS)

    Sandhu, Gursharan Yash Singh; Li, Cuiping; Roy, Olivier; West, Erik; Montgomery, Katelyn; Boone, Michael; Duric, Neb

    2016-04-01

    Ultrasound waveform tomography techniques have shown promising results for the visualization and characterization of breast disease. By using frequency-domain waveform tomography techniques and a gradient descent algorithm, we have previously reconstructed the sound speed distributions of breasts of varying densities with different types of breast disease including benign and malignant lesions. By allowing the sound speed to have an imaginary component, we can model the intrinsic attenuation of a medium. We can similarly recover the imaginary component of the velocity and thus the attenuation. In this paper, we will briefly review ultrasound waveform tomography techniques, discuss attenuation and its relations to the imaginary component of the sound speed, and provide both numerical and ex vivo examples of waveform tomography attenuation reconstructions.
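    The link between the imaginary part of the sound speed and attenuation can be written out explicitly; the small sketch below uses one common sign convention (time dependence exp(-iωt), plane wave exp(ikx) with complex k = ω/c), and conventions vary between implementations.

```python
# Hedged conversion from a complex sound speed to an attenuation coefficient.
import numpy as np

def attenuation_coefficient(c_complex, freq_hz):
    """Attenuation alpha (Np/m) implied by a complex sound speed at frequency f."""
    omega = 2.0 * np.pi * freq_hz
    k = omega / c_complex            # complex wavenumber
    return np.imag(k)                # amplitude decays as exp(-alpha * x)

# example: a 1500 m/s medium with a small imaginary velocity component at 1 MHz
alpha = attenuation_coefficient(1500.0 - 15.0j, 1.0e6)   # Np/m
alpha_db_per_cm = alpha * 8.686 * 0.01                   # convert Np/m -> dB/cm
```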

  6. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion easily fall into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in real seismic wavefields, which makes the inversion harder. As a result, the accuracy of the final inversion result relies strongly on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck for FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover a low-wavenumber model with a demodulation operator (envelope operator), even though such low-frequency data do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. At the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. At the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and the CPU/GPU heterogeneous parallel computation improves the computational performance.
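    Why the demodulation (envelope) operator helps can be seen in a short numerical experiment: the envelope of a band-limited record carries energy well below the recorded band, which is exactly what envelope inversion uses to build a low-wavenumber starting model. The toy trace below is illustrative only; the study's 3D elastic misfit and gradients are not reproduced.

```python
# Hedged demonstration: the envelope recovers sub-band low-frequency energy.
import numpy as np
from scipy.signal import hilbert

dt = 0.002
t = np.arange(0, 4, dt)
carrier = np.sin(2 * np.pi * 12.0 * t)            # band-limited "seismic" data
modulation = np.exp(-((t - 2.0) / 0.6) ** 2)      # slowly varying structural imprint
trace = modulation * carrier

envelope = np.abs(hilbert(trace))                 # demodulation (envelope) operator
f = np.fft.rfftfreq(len(t), dt)
raw_low = np.abs(np.fft.rfft(trace))[f < 3.0].sum()      # almost no energy below 3 Hz
env_low = np.abs(np.fft.rfft(envelope))[f < 3.0].sum()   # substantial energy below 3 Hz
```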

  7. Resolvability of regional density structure and the road to direct density inversion - a principal-component approach to resolution analysis

    NASA Astrophysics Data System (ADS)

    Płonka, Agnieszka; Fichtner, Andreas

    2017-04-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess if 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes, and, while this can produce significant biases in velocity and Q estimates, the seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. We apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel which maximizes the sensitivity to density, potentially allowing density to be resolved as independently as possible. We find that surface (mostly Rayleigh) waves have significant sensitivity to density, and that the trade-off with velocity is negligible. We also show the preliminary results of the inversion.
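    A principal-component analysis of sensitivity kernels can be sketched as an SVD of the stacked, flattened kernels; the random kernels below are placeholders for the adjoint-derived Vp, Vs and density kernels of the study.

```python
# Hedged sketch of PCA over stacked sensitivity kernels via SVD.
import numpy as np

def kernel_pca(kernels):
    """kernels: array of shape (n_kernels, n_model_points).
    Returns principal kernels (rows of Vt) and their variance fractions."""
    K = kernels - kernels.mean(axis=1, keepdims=True)
    K /= np.linalg.norm(K, axis=1, keepdims=True)       # unit-norm rows
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt, explained

# toy example: 3 parameter classes x 50 events, 10^4 model points (random stand-ins)
rng = np.random.default_rng(3)
kernels = rng.standard_normal((150, 10_000))
principal_kernels, variance_fraction = kernel_pca(kernels)
```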

  8. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2016-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or an isotropic distribution of both mono- and dipolar uncorrelated noise sources, which are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source locations, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method, which allows us to compute first and second derivatives of misfit functionals with respect to the source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the capability of different misfit functionals to image wave speed anomalies and the source distribution, as well as possible source-structure trade-offs, especially the extent to which unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.

  9. Nature and Role of Subducting Sediments on the Megathrust and Forearc Evolution in the 2004 Great Sumatra Earthquake Rupture Zone: Results from Full Waveform Inversion of Long Offset Seismic Data

    NASA Astrophysics Data System (ADS)

    Singh, S. C.; Qin, Y.

    2015-12-01

    On active accretionary margins, the nature of the incoming sediments defines the locking mechanism on the megathrust and the development and evolution of the accretionary wedge. Drilling is the most direct method to characterise the nature of these sediments, but drilling is very expensive and provides information at only a few locations. In north Sumatra, an IODP drilling campaign is programmed to take place in July-August 2016. We have performed seismic full waveform inversion of 12 km long-offset seismic reflection data acquired by WesternGeco in 2006 over a 35 km zone near the subduction front in the 2004 earthquake rupture zone, which provides detailed quantitative information on the characteristics of the incoming sediments. We first downward continue the surface streamer data to the seafloor, which removes the effect of the deep water (~5 km) and brings out the refracted arrivals as first arrivals. We carry out travel time tomography and then perform full waveform inversion of the seismic refraction data, followed by full waveform inversion of the reflection data, providing a detailed (10-20 m) velocity structure. The sediments in this area are 3-5 km thick, and the P-wave velocity increases from 1.6 km/s near the seafloor to more than 4.5 km/s above the oceanic crust. The high velocity of the sediments above the basement suggests that they are highly compacted, strengthening the coupling near the subduction front, which might have been responsible for the 2004 earthquake rupture propagating up to the subduction front and enhancing the tsunami. We also find several thin velocity layers within the sediments, which might be due to high pore-pressure fluids or free gas. These layers might be responsible for the formation of a pseudo-decollement within the forearc sediments that acts as a conveyor belt between the highly compacted subducting lower sediments and the accreted sediments above. The presence of well-intact sediments on the accretionary prism supports this interpretation. Our results provide first-hand information about the sediment properties, which will be ground-truthed by drilling.

  10. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Chen, Ting; Tan, Sirui

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply-dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada at Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.

  11. Broadband Ground Motion Synthesis of the 1999 Turkey Earthquakes Based On: 3-D Velocity Inversion, Finite Difference Calculations and Empirical Green's Functions

    NASA Astrophysics Data System (ADS)

    Gok, R.; Kalafat, D.; Hutchings, L.

    2003-12-01

    We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D, Larsen 1998), and records available for use as empirical Green's functions. Ultimately, our goal is to model the 1999 earthquakes from DC to 25 Hz and to study fault rupture mechanics and kinematic rupture models. We performed a simultaneous inversion for hypocenter locations and three-dimensional P- and S-wave velocity structure of the Marmara region using SIMULPS14, based on 2,500 events with more than eight P readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara than in the western Marmara region due to the denser ray coverage. We used the obtained velocity structure as input to the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long-period waveforms (f < 0.5 Hz). We also obtained Mo, fc and individual station kappa values for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small to moderate size earthquakes (M < 4.0) to obtain empirical Green's functions (EGFs) for the higher frequency range of the ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source scaling relation (energy-moment) of these aftershocks. We have generated several scenarios constrained by a priori knowledge of the Izmit and Duzce rupture parameters to validate our prediction capability.
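
    The simultaneous fit of seismic moment, corner frequency and station kappa mentioned above can be illustrated with a least-squares fit of a Brune omega-square spectrum, multiplied by an exp(-pi*kappa*f) attenuation term, to an observed displacement amplitude spectrum. The sketch below is a minimal single-station illustration with synthetic data; the function and variable names are ours, not the study's code, and converting the fitted plateau to Mo requires the usual radiation-pattern and propagation factors.

        import numpy as np
        from scipy.optimize import curve_fit

        def brune_spectrum(f, omega0, fc, kappa):
            """Far-field displacement amplitude spectrum: Brune omega-square source
            with a site/path attenuation term exp(-pi*kappa*f)."""
            return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

        # synthetic "observed" spectrum, for illustration only
        f = np.logspace(-1, 1.4, 200)                            # ~0.1-25 Hz
        clean = brune_spectrum(f, omega0=2e-4, fc=1.5, kappa=0.04)
        obs = clean * np.exp(0.1 * np.random.randn(f.size))      # multiplicative noise

        # fit in log amplitude so low and high frequencies are weighted evenly
        log_model = lambda f, o0, fc, k: np.log(brune_spectrum(f, o0, fc, k))
        (omega0, fc, kappa), _ = curve_fit(log_model, f, np.log(obs), p0=[1e-4, 1.0, 0.03])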

  12. Rapid earthquake detection through GPU-Based template matching

    NASA Astrophysics Data System (ADS)

    Mu, Dawei; Lee, En-Jui; Chen, Po

    2017-12-01

    The template-matching algorithm (TMA) has been widely adopted for improving the reliability of earthquake detection. The TMA is based on calculating the normalized cross-correlation coefficient (NCC) between a collection of selected template waveforms and the continuous waveform recordings of seismic instruments. In realistic applications, the computational cost of the TMA is much higher than that of traditional techniques. In this study, we provide an analysis of the TMA and show how the GPU architecture provides an almost ideal environment for accelerating the TMA and NCC-based pattern recognition algorithms in general. So far, our best-performing GPU code has achieved a speedup factor of more than 800 with respect to a common sequential CPU code. We demonstrate the performance of our GPU code using seismic waveform recordings from the ML 6.6 Meinong earthquake sequence in Taiwan.
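
    As a point of reference for the computation being accelerated, the NCC at the heart of the TMA can be written in a few lines; a GPU implementation parallelizes the loop over candidate lags (for example, one thread or thread block per lag). The sketch below is a plain NumPy illustration of the definition, not the authors' GPU code.

        import numpy as np

        def ncc(template, trace):
            """Normalized cross-correlation of a short template against a longer
            continuous trace; returns one coefficient per candidate lag."""
            n = template.size
            t = (template - template.mean()) / template.std()
            out = np.empty(trace.size - n + 1)
            for k in range(out.size):          # a GPU kernel would assign one thread per k
                w = trace[k:k + n]
                s = w.std()
                out[k] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s) / n
            return out

        # detections are typically declared where the NCC exceeds a threshold,
        # e.g. several median absolute deviations above the median of the series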

  13. Interpretation of synthetic seismic time-lapse monitoring data for the Korea CCS project based on acoustic-elastic coupled inversion

    NASA Astrophysics Data System (ADS)

    Oh, J.; Min, D.; Kim, W.; Huh, C.; Kang, S.

    2012-12-01

    CCS (carbon capture and storage) is one of the promising methods to reduce CO2 emissions. To evaluate the success of a CCS project, various geophysical monitoring techniques have been applied. Among them, time-lapse seismic monitoring is one of the effective methods to investigate the migration of the CO2 plume. To monitor the injected CO2 plume accurately, the seismic monitoring data should be interpreted not only with imaging techniques but also with full waveform inversion, because subsurface material properties can be estimated through the inversion. However, previous work on interpreting seismic monitoring data has been based mainly on imaging techniques. In this study, we perform frequency-domain full waveform inversion for synthetic data obtained by acoustic-elastic coupled modeling of a geological model based on the Ulleung Basin, which is one of the CO2 storage prospects in Korea. We assume that the injection layer is located in fault-related anticlines in the Dolgorae Deformed Belt and, for a more realistic situation, we contaminate the synthetic monitoring data with random noise and outliers. We perform the time-lapse full waveform inversion for two scenarios. In the first, the injected CO2 plume migrates within the injection layer and is stably trapped. In the second, the injected CO2 plume leaks through a weak part of the cap rock. Using the inverted P- and S-wave velocities and Poisson's ratio, we were able to detect the migration of the injected CO2 plume. Acknowledgment: This work was financially supported by the Brain Korea 21 project of Energy Systems Engineering, the "Development of Technology for CO2 Marine Geological Storage" program funded by the Ministry of Land, Transport and Maritime Affairs (MLTM) of Korea, and the Korea CCS R&D Center (KCRC) grant funded by the Korea government (Ministry of Education, Science and Technology) (No. 2012-0008926).

  14. Advancements and challenges in crosshole GPR full-waveform inversion for hydrological applications

    NASA Astrophysics Data System (ADS)

    Klotzsche, A.; Van Der Kruk, J.; Vereecken, H.

    2016-12-01

    Crosshole ground penetrating radar (GPR) full-waveform inversion (FWI) has demonstrated over the last decade a high potential to detect, map, and resolve decimeter-scale structures within aquifers. GPR FWI uses Maxwell's equations to find a model that fits the entire measured waveform. One big advantage is that a single method yields two soil properties: dielectric permittivity and electrical conductivity. Both parameters are sensitive to different soil properties such as soil water content, porosity, and clay content; hence, an improved characterization of the critical zone is possible. Applications of the FWI to aquifers in Germany, Switzerland, Denmark, and the USA yielded, at all sites, higher-resolution images than standard ray-based methods and provided new insights into the aquifers' structures. Furthermore, small-scale, high-contrast layers caused by changes in porosity were characterized, which enhanced our understanding of the electromagnetic wave propagation related to these features. However, to obtain reliable and accurate inversion results from experimental data, and hence porosity estimates, many detailed steps in acquiring, pre-processing and inverting the data need to be followed carefully. Here, we provide an overview of recent developments and advancements of 2D crosshole GPR FWI that provide improved inversion results for permittivity and electrical conductivity. In addition, we provide guidelines and point out important challenges and pitfalls that can occur during the inversion of experimental data. We illustrate the necessary steps required to achieve reliable FWI results, which are indicated by, e.g., a good fit between measured and modelled traces and the absence of a remaining gradient for the final models. Important requirements for a successful application are an accurate time-zero correction, good starting models for the FWI, and a well-estimated source wavelet.

  15. Novel scheme for rapid parallel parameter estimation of gravitational waves from compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Pankow, C.; Brady, P.; Ochsner, E.; O'Shaughnessy, R.

    2015-07-01

    We introduce a highly parallelizable architecture for estimating the parameters of compact binary coalescences using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses), with coefficients that depend on the observer-dependent extrinsic parameters (e.g., distance, sky position). The data are then prefiltered against those modes at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We parallelize the intrinsic-space calculation efficiently by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. Also, we operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of ≲5%, and even smaller in many cases. With a bounded runtime independent of the waveform model's starting frequency, a nearly unchanged strategy could estimate neutron star (NS)-NS parameters in the 2018 Advanced LIGO era. Our algorithm is usable with any noise curve and any existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
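
    The core numerical step, marginalizing the likelihood over extrinsic parameters by Monte Carlo after the mode-by-mode filtering has been done once, can be sketched as follows. The likelihood function, priors and parameter ranges here are toy placeholders for illustration, not the pipeline's actual expressions.

        import numpy as np

        rng = np.random.default_rng(0)

        def log_likelihood_extrinsic(distance, cos_incl, phase, precomputed):
            """Toy stand-in: in the real scheme this is cheap because the per-mode
            inner products stored in `precomputed` were filtered once up front."""
            amp = precomputed["snr0"] * (1.0 + cos_incl ** 2) / (2.0 * distance)
            return amp * np.cos(phase) - 0.5 * amp ** 2

        def marginalized_likelihood(precomputed, n_samples=100_000):
            """Monte Carlo estimate of the likelihood integrated over extrinsic
            parameters drawn from simple illustrative priors."""
            d = rng.uniform(0.1, 2.0, n_samples)            # distance (arbitrary units)
            ci = rng.uniform(-1.0, 1.0, n_samples)          # cos(inclination)
            ph = rng.uniform(0.0, 2 * np.pi, n_samples)     # coalescence phase
            lnl = log_likelihood_extrinsic(d, ci, ph, precomputed)
            m = lnl.max()                                   # shift for numerical stability
            w = np.exp(lnl - m)
            value = np.exp(m) * w.mean()
            error = np.exp(m) * w.std() / np.sqrt(n_samples)
            return value, error                             # evidence-like integral and MC error

        like, err = marginalized_likelihood({"snr0": 8.0})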

  16. On the application of neural networks to the classification of phase modulated waveforms

    NASA Astrophysics Data System (ADS)

    Buchenroth, Anthony; Yim, Joong Gon; Nowak, Michael; Chakravarthy, Vasu

    2017-04-01

    Accurate classification of phase-modulated radar waveforms is a well-known problem in spectrum sensing. Identification of such waveforms aids situational awareness, enabling radar and communications spectrum sharing. While various feature extraction and engineering approaches have sought to address this problem, choosing a machine learning algorithm that best utilizes these features becomes foremost. In this effort, a standard shallow learning approach and a deep learning approach are compared. Experiments provide insights into classifier architecture, training procedure, and performance.

  17. Using the Auditory Hazard Assessment Algorithm for Humans (AHAAH) With Hearing Protection Software, Release MIL-STD-1474E

    DTIC Science & Technology

    2013-12-01

    points in the waveform. This is useful if the digitization rate is unnecessarily high and the waveform content remains unchanged at lower sampling...there is a precursor acoustic event not included in the waveform, like another impulse or high background noise. MIL-STD-1474E defines an exposure as...Breaking strain of annular ligament filaments Ramp 6 unitless ratio Ratio of resistance to stiffness of annular ligament at high loads So 1.00E+09

  18. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method successfully. This strategy decomposes the seismic data into partially overlapping frequency intervals through a concatenated treatment of the wavelet, largely avoiding redundant frequency information while adapting to the wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of the update parameters of the nonlinear conjugate gradient (CG) method and of the step-length formulas on the multiscale FWI through several numerical tests. The investigation of up to eight versions of the nonlinear CG method, with and without Gaussian white noise, makes clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are the more efficient among the eight, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields inaccurate results, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, namely the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that Interp is more efficient for noise-free data, while Direct is more efficient for data with Gaussian white noise. In contrast, Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust or at least partly insensitive to Gaussian white noise and to the complexity of the model. When the initial velocity model deviates far from the real model or the data are contaminated by noise, the objective function values of Direct and Interp oscillate at the beginning of the inversion, whereas that of Search decreases consistently.
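
    For reference, the CG update parameters compared in the study differ only in how the scalar beta is computed from the current and previous gradients and the previous search direction. A minimal sketch of the four named variants, using their standard textbook formulas (the variable names and the schematic update comment are ours):

        import numpy as np

        def cg_beta(g_new, g_old, d_old, variant="PRP"):
            """Nonlinear conjugate-gradient update parameter beta.
            g_new, g_old: current and previous gradients; d_old: previous search direction."""
            y = g_new - g_old
            if variant == "HS":      # Hestenes-Stiefel
                return (g_new @ y) / (d_old @ y)
            if variant == "PRP":     # Polak-Ribiere-Polyak
                return (g_new @ y) / (g_old @ g_old)
            if variant == "DY":      # Dai-Yuan
                return (g_new @ g_new) / (d_old @ y)
            if variant == "CD":      # Fletcher's conjugate descent
                return (g_new @ g_new) / -(d_old @ g_old)
            raise ValueError(variant)

        # schematic FWI model update: d_new = -g_new + beta * d_old, followed by a
        # step-length estimate (Direct / Search / Interp in the paper) and
        # m_new = m_old + alpha * d_new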

  19. The Crust and Upper Mantle Structure of the Iranian Plateau from Joint Waveform Tomography Imaging of Body and Surface Waves

    NASA Astrophysics Data System (ADS)

    Roecker, S. W.; Priestley, K. F.; Tatar, M.

    2014-12-01

    The Iranian Plateau forms a broad zone of deformation between the colliding Arabian and Eurasian plates. The convergence is accommodated in the Zagros Mountains of SW Iran, the Alborz Mountains of northern Iran, and the Kopeh Dagh Mountains of NE Iran. These deforming belts are separated by relatively aseismic depressions such as the Lut Block. It has been suggested that the Arabia-Eurasia collision is similar to the Indo-Eurasia collision but at an earlier stage of development and may therefore provide clues to our understanding of the earlier stages of the continent-continent collision process. We present results of the analysis of seismic data collected along two NE-SW trending transects across the Iranian Plateau. The first profile extends from near Bushere on the Persian Gulf coast to near the Iran-Turkmenistan border north of Mashad, and consists of seismic recordings along the SW portion of the line in 2000-2001 and along the NE portion of the line in 2003 and 2006-2008. The second profile extends from near the Iran-Iraq border near the Dezfel embayment to the south Caspian Sea coast north of Tehran. We apply the combined 2.5D finite element waveform tomography algorithm of Baker and Roecker [2014] to jointly invert teleseismic body and surface waves to determine the elastic wavespeed structures of these areas. The joint inversion of these different wave types affords advantages similar to those of combined surface wave dispersion/receiver function inversions, compensating for intrinsic weaknesses in horizontal and vertical resolution. We compare the results with those recovered from a finite difference approach to document the effects of various assumptions, such as the inclusion of topography, on the recovered models. We also apply several different inverse methods, ranging from simple gradient techniques to more sophisticated pseudo-Hessian and L-BFGS approaches, and find that the latter are generally more robust. Modeling of receiver functions and surface wave dispersion prior to the analysis is shown to be an efficacious way to generate starting models for this analysis.

  20. Evaluation of an experimental LiDAR for surveying a shallow, braided, sand-bedded river

    USGS Publications Warehouse

    Kinzel, P.J.; Wright, C.W.; Nelson, J.M.; Burman, A.R.

    2007-01-01

    Reaches of a shallow (<1.0m), braided, sand-bedded river were surveyed in 2002 and 2005 with the National Aeronautics and Space Administration's Experimental Advanced Airborne Research LiDAR (EAARL) and concurrently with conventional survey-grade, real-time kinematic, global positioning system technology. The laser pulses transmitted by the EAARL instrument and the return backscatter waveforms from exposed sand and submerged sand targets in the river were completely digitized and stored for postflight processing. The vertical mapping accuracy of the EAARL was evaluated by comparing the ellipsoidal heights computed from ranging measurements made using an EAARL terrestrial algorithm to nearby (<0.5m apart) ground-truth ellipsoidal heights. After correcting for apparent systematic bias in the surveys, the root mean square error of these heights with the terrestrial algorithm in the 2002 survey was 0.11m for the 26 measurements taken on exposed sand and 0.18m for the 59 measurements taken on submerged sand. In the 2005 survey, the root mean square error was 0.18m for 92 measurements taken on exposed sand and 0.24m for 434 measurements on submerged sand. In submerged areas the waveforms were complicated by reflections from the surface, water column entrained turbidity, and potentially the riverbed. When applied to these waveforms, especially in depths greater than 0.4m, the terrestrial algorithm calculated the range above the riverbed. A bathymetric algorithm has been developed to approximate the position of the riverbed in these convolved waveforms and preliminary results are encouraging. © 2007 ASCE.

  1. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    NASA Astrophysics Data System (ADS)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2017-12-01

    The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential to understand to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed and, by decreasing the acquisition ranges, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
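
    For context, the spectral Cole-Cole model underlying the parametrization expresses complex resistivity as rho(omega) = rho0 * [1 - m0 * (1 - 1 / (1 + (i*omega*tau)^C))]. The sketch below shows this forward model and a single-chain Metropolis sampler over its four parameters; it is an illustrative toy working directly on the frequency response with synthetic data, whereas the actual TDIP inversion fits time-domain decays through a forward model that includes the current waveform.

        import numpy as np

        def cole_cole(omega, rho0, m0, tau, c):
            """Spectral Cole-Cole complex resistivity."""
            return rho0 * (1.0 - m0 * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

        def log_likelihood(p, omega, d_obs, sigma):
            rho0, m0, tau, c = p
            misfit = np.abs(cole_cole(omega, rho0, m0, tau, c) - d_obs)
            return -0.5 * np.sum((misfit / sigma) ** 2)

        def metropolis_step(p, logl, omega, d_obs, sigma, scales, rng):
            """One Metropolis step with a symmetric Gaussian proposal; the prior is
            flat, restricted to positive parameters with m0 < 1 and c <= 1."""
            prop = p + scales * rng.standard_normal(4)
            if np.any(prop <= 0) or prop[1] >= 1.0 or prop[3] > 1.0:
                return p, logl
            logl_prop = log_likelihood(prop, omega, d_obs, sigma)
            if np.log(rng.random()) < logl_prop - logl:
                return prop, logl_prop
            return p, logl

        rng = np.random.default_rng(1)
        omega = 2 * np.pi * np.logspace(-3, 3, 50)
        d_obs = cole_cole(omega, 100.0, 0.1, 0.5, 0.4)          # noise-free synthetic data
        p = np.array([80.0, 0.05, 1.0, 0.5])
        logl = log_likelihood(p, omega, d_obs, sigma=1.0)
        scales = np.array([2.0, 0.01, 0.05, 0.02])
        chain = []
        for _ in range(5000):
            p, logl = metropolis_step(p, logl, omega, d_obs, 1.0, scales, rng)
            chain.append(p)                                     # samples for correlation analysis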

  2. Seismic waveform sensitivity to global boundary topography

    NASA Astrophysics Data System (ADS)

    Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico

    2012-09-01

    We investigate the implications of lateral variations in the topography of global seismic discontinuities, in the framework of high-resolution forward modelling and seismic imaging. We run 3-D wave-propagation simulations accurate at periods of 10 s and longer, with Earth models including core-mantle boundary topography anomalies of ~1000 km spatial wavelength and up to 10 km height. We obtain very different waveform signatures for PcP (reflected) and Pdiff (diffracted) phases, supporting the theoretical expectation that the latter are sensitive primarily to large-scale structure, whereas the former are sensitive only to small-scale structure, where large and small are relative to the frequency. PcP at 10 s seems to be well suited to map such small-scale perturbations, whereas Pdiff at the same frequency carries faint signatures that do not allow any tomographic reconstruction; only at higher frequencies does its signature become stronger. We present a new algorithm to compute sensitivity kernels relating seismic traveltimes (measured by cross-correlation of observed and theoretical seismograms) to the topography of seismic discontinuities at any depth in the Earth using full 3-D wave propagation. Calculation of accurate finite-frequency sensitivity kernels is notoriously expensive, but we reduce computational costs drastically by limiting ourselves to spherically symmetric reference models and exploiting the axial symmetry of the resulting propagating wavefield, which collapses to a 2-D numerical domain. We compute and analyse a suite of kernels for upper and lower mantle discontinuities that can be used for finite-frequency waveform inversion. The PcP and Pdiff sensitivity footprints are in good agreement with the results obtained by cross-correlating perturbed and unperturbed seismograms, validating our approach against full 3-D modelling for inverting for such structures.

  3. Quantification of thickness loss in a liquid-loaded plate using ultrasonic guided wave tomography

    NASA Astrophysics Data System (ADS)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2017-12-01

    Ultrasonic guided wave tomography (GWT) provides an attractive solution to map thickness changes from remote locations. It is based on velocity-to-thickness mapping, employing the dispersive characteristics of selected guided modes. This study extends the application of GWT to a liquid-loaded plate, a more challenging case than a free plate because energy of the guided waves leaks into the liquid. In order to ensure the accuracy of the thickness reconstruction, advanced forward models are developed that account for attenuation effects using complex velocities. The reconstruction of the thickness map is based on the frequency-domain full waveform inversion (FWI) method, and its accuracy is discussed for different frequencies and defect dimensions. Validation experiments are carried out on a water-loaded plate with an irregularly shaped defect using S0 guided waves, showing excellent performance of the reconstruction algorithm.

  4. Sorting signed permutations by inversions in O(nlogn) time.

    PubMed

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
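
    To make the elementary operation concrete: a signed inversion (reversal) of the segment between positions i and j reverses the order of the elements and flips all of their signs. The sketch below shows the operation together with a simple breakpoint count often used when reasoning about such algorithms; it is illustrative bookkeeping only, not the O(nlogn) data structure of the paper.

        def reverse(perm, i, j):
            """Apply a signed inversion (reversal) to positions i..j, inclusive."""
            segment = [-x for x in reversed(perm[i:j + 1])]
            return perm[:i] + segment + perm[j + 1:]

        def breakpoints(perm):
            """Count adjacencies that are not consecutive in the signed identity,
            framing the permutation with 0 at the front and n+1 at the end."""
            ext = [0] + list(perm) + [len(perm) + 1]
            return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)

        p = [-2, -1, 3]
        print(breakpoints(p))          # 2
        p = reverse(p, 0, 1)           # -> [1, 2, 3]: sorted by a single inversion
        print(breakpoints(p))          # 0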

  5. Automated rapid finite fault inversion for megathrust earthquakes: Application to the Maule (2010), Iquique (2014) and Illapel (2015) great earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2016-04-01

    Rapid estimation of the spatial and temporal rupture characteristics of large megathrust earthquakes by finite fault inversion is important for disaster mitigation. For example, estimates of the spatio-temporal evolution of rupture can be used to evaluate population exposure to tsunami waves and ground shaking soon after the event, by providing more accurate predictions than are possible with point source approximations. In addition, rapid inversion results can reveal seismic source complexity to guide additional, more detailed subsequent studies. This work develops a method to rapidly estimate the slip distribution of megathrust events while reducing subjective parameter choices by automation. The method is simple yet robust, and we show that it provides excellent preliminary rupture models in as little as 30 minutes for three great earthquakes in the South American subduction zone. This timing may change slightly for other regions depending on seismic station coverage, but the method can be applied to any subduction zone. The inversion is based on W-phase data, since these are rapidly and widely available and of low amplitude, which avoids clipping at close stations for large events. In addition, prior knowledge of the slab geometry (e.g., SLAB 1.0) is applied, and rapid W-phase point source information (time delay and centroid location) is used to constrain the fault geometry and extent. Since the linearization by the multiple time window (MTW) parametrization requires regularization, objective smoothing is achieved by the discrepancy principle in two fully automated steps: first, the residuals are estimated assuming unknown noise levels, and second, a subsequent solution is sought that fits the data to the noise level. The MTW scheme is applied with positivity constraints and a solution is obtained with an efficient non-negative least squares solver. Systematic application of the algorithm to the Maule (2010), Iquique (2014) and Illapel (2015) events illustrates that rapid finite fault inversion with teleseismic data is feasible and provides meaningful results. The results for the three events show excellent data fits and are consistent with other solutions, with most of the slip occurring close to the trench for the Maule and Illapel events and some deeper slip for the Iquique event. Importantly, the Illapel source model predicts tsunami waveforms in close agreement with observed waveforms. Finally, we develop a new Bayesian approach to approximate uncertainties as part of the rapid inversion scheme with positivity constraints. Uncertainties are estimated by approximating the posterior distribution as a multivariate log-normal distribution. While solving for the posterior adds some computational cost, we illustrate that uncertainty estimation is important for meaningful interpretation of finite fault models.
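
    The two automated regularization steps can be illustrated with a damped non-negative least-squares solve in which the smoothing weight is increased for as long as the data misfit stays at or below the estimated noise level (discrepancy principle). The matrices and noise level below are random placeholders; the actual problem uses W-phase Green's functions and smoothing over the MTW slip parametrization.

        import numpy as np
        from scipy.optimize import nnls

        def damped_nnls(G, d, L, lam):
            """Non-negative least squares on the augmented system [G; lam*L] m ~ [d; 0]."""
            A = np.vstack([G, lam * L])
            b = np.concatenate([d, np.zeros(L.shape[0])])
            m, _ = nnls(A, b)
            return m

        def discrepancy_nnls(G, d, L, noise_level, lams=np.logspace(-4, 2, 25)):
            """Return the most heavily smoothed model that still fits the data
            to the estimated noise level."""
            best = None
            for lam in lams:                       # lams in increasing order
                m = damped_nnls(G, d, L, lam)
                rms = np.sqrt(np.mean((G @ m - d) ** 2))
                if rms <= noise_level:
                    best = (lam, m)
            return best

        rng = np.random.default_rng(0)
        G = rng.standard_normal((60, 20))                       # placeholder kernel
        m_true = np.clip(rng.standard_normal(20), 0.0, None)    # non-negative "slip"
        d = G @ m_true + 0.05 * rng.standard_normal(60)
        L = np.eye(20) - np.eye(20, k=1)                        # first-difference smoothing
        lam, m_est = discrepancy_nnls(G, d, L, noise_level=0.05)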

  6. Automated system for analyzing the activity of individual neurons

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.

    1993-01-01

    This paper presents a signal processing system that: (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain; and (2) demonstrates the feasibility of generating the command signals of prostheses using the activity of relevant neurons in disabled subjects. The system operates online, in a fully automated manner and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates on-line. The system is implemented with a fast IBM PC compatible processor board that allows on-line operation.

  7. Decimetric-resolution stochastic inversion of shallow marine seismic reflection data; dedicated strategy and application to a geohazard case study

    NASA Astrophysics Data System (ADS)

    Provenzano, Giuseppe; Vardy, Mark E.; Henstock, Timothy J.

    2018-06-01

    Characterisation of the top 10-50 m of the subseabed is key for landslide hazard assessment, offshore structure engineering design and underground gas-storage monitoring. In this paper, we present a methodology for the stochastic inversion of ultra-high-frequency (UHF, 0.2-4.0 kHz) pre-stack seismic reflection waveforms, designed to obtain a decimetric-resolution remote elastic characterisation of the shallow sediments with minimal pre-processing and little a priori information. We use a genetic algorithm in which the space of possible solutions is sampled by explicitly decoupling the short and long wavelengths of the P-wave velocity model. This approach, combined with an objective function robust to cycle skipping, outperforms a conventional model parametrisation when the ground truth is offset from the centre of the search domain. The robust P-wave velocity model is used to precondition the width of the search range of the multi-parameter elastic inversion, thereby improving the efficiency in high-dimensional parametrisations. Multiple independent runs provide a set of independent results from which the reproducibility of the solution can be estimated. On a real dataset acquired in Finneidfjord, Norway, we also demonstrate the sensitivity of UHF seismic inversion to shallow subseabed anomalies that play a role in submarine slope stability. Thus, the methodology has the potential to become an important practical tool for marine ground model building in spatially heterogeneous areas, reducing the reliance on expensive and time-consuming coring campaigns for geohazard mitigation in marine areas.

  8. Slip history and dynamic implications of the 1999 Chi-Chi, Taiwan, earthquake

    USGS Publications Warehouse

    Ji, C.; Helmberger, D.V.; Wald, D.J.; Ma, K.-F.

    2003-01-01

    We investigate the rupture process of the 1999 Chi-Chi, Taiwan, earthquake using extensive near-source observations, including three-component velocity waveforms at 36 strong motion stations and 119 GPS measurements. A three-plane fault geometry derived from our previous inversion using only static data [Ji et al., 2001] is applied. The slip amplitude, rake angle, rupture initiation time, and risetime function are inverted simultaneously with a recently developed finite fault inverse method that combines a wavelet transform approach with a simulated annealing algorithm [Ji et al., 2002b]. The inversion results are validated by the forward prediction of an independent data set, the teleseismic P and SH ground velocities, with notable agreement. The results show that the total seismic moment release of this earthquake is 2.7 × 10^20 N m and that most of the slip occurred in a triangular-shaped asperity involving two fault segments, which is consistent with our previous static inversion. The rupture front propagates with an average rupture velocity of ~2.0 km s-1, and the average slip duration (risetime) is 7.2 s. Several interesting observations related to the temporal evolution of the Chi-Chi earthquake are also investigated, including (1) the strong effect of the sinuous fault plane of the Chelungpu fault on spatial and temporal variations in slip history, (2) the intersection of fault 1 and fault 2 not being a strong impediment to the rupture propagation, and (3) the observation that the peak slip velocity near the surface is, in general, higher than on the deeper portion of the fault plane, as predicted by dynamic modeling.

  9. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new, numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the composition of customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).

  10. Joint Inversion of 1-Hz GPS Data and Strong Motion Records for the Rupture Process of the 2008 Iwate-Miyagi Nairiku Earthquake: Objectively Determining Relative Weighting

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Kato, T.; Wang, Y.

    2015-12-01

    The spatiotemporal fault slip history of the 2008 Iwate-Miyagi Nairiku earthquake, Japan, is obtained by the joint inversion of 1-Hz GPS waveforms and near-field strong motion records. The 1-Hz GPS data from GEONET are processed by GAMIT/GLOBK and then low-pass filtered at 0.05 Hz. The ground surface strong motion records from K-NET and KiK-net stations are band-pass filtered in the range 0.05-0.3 Hz and integrated once to obtain velocity. The joint inversion exploits a broader frequency band of near-field ground motion, which provides excellent constraints on both the detailed slip history and the slip distribution. A fully Bayesian inversion method is performed to simultaneously and objectively determine the rupture model, the unknown relative weighting of the multiple data sets and the unknown smoothing hyperparameters. The preferred rupture model is stable for different choices of velocity structure model and station distribution, with a maximum slip of ~8.0 m and a seismic moment of 2.9 × 10^19 Nm (Mw 6.9). Compared with the single inversion of strong motion records, the cumulative slip distribution of the joint inversion is sparser, with two slip asperities. One common slip asperity extends from the hypocenter southeastward to the surface breakage; the other asperity, which is unique to the joint inversion and is contributed by the 1-Hz GPS waveforms, appears in the deep part of the fault where very few aftershocks occur. The difference between the moment rate functions of the joint and single inversions clearly indicates that abundant high-frequency waves, but few low-frequency waves, are radiated in the first three seconds.
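
    The pre-processing described above (band-pass filtering the strong-motion accelerations to 0.05-0.3 Hz and integrating once to velocity, and low-pass filtering the 1-Hz GPS displacements at 0.05 Hz) can be sketched with standard SciPy filters; the sampling rates, filter orders and placeholder signals below are illustrative assumptions, not the study's exact processing parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass(x, fs, fmin, fmax, order=4):
            """Zero-phase Butterworth band-pass filter."""
            b, a = butter(order, [fmin, fmax], btype="band", fs=fs)
            return filtfilt(b, a, x)

        def lowpass(x, fs, fmax, order=4):
            """Zero-phase Butterworth low-pass filter."""
            b, a = butter(order, fmax, btype="low", fs=fs)
            return filtfilt(b, a, x)

        fs_sm, fs_gps = 100.0, 1.0                 # assumed sampling rates (Hz)
        acc = np.random.randn(60_000)              # placeholder strong-motion acceleration
        gps = np.random.randn(600)                 # placeholder 1-Hz GPS displacement

        acc_f = bandpass(acc, fs_sm, 0.05, 0.3)    # 0.05-0.3 Hz band
        vel = np.cumsum(acc_f) / fs_sm             # integrate once: acceleration -> velocity
        gps_f = lowpass(gps, fs_gps, 0.05)         # 0.05 Hz low-pass, as in the study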

  11. Resolving the detailed spatiotemporal slip evolution of deep tremor in western Japan

    NASA Astrophysics Data System (ADS)

    Ohta, K.; Ide, S.

    2017-12-01

    A quantitative evaluation of the slip evolution of tremor is essential to understand the generation mechanism of slow earthquakes. Recent studies have revealed that most tremor signals can be expressed as the superposition of low frequency earthquakes (LFEs). However, it is still challenging to explain the entire waveforms of tremor, because a conventional slip inversion analysis is not available for tremor due to insufficient knowledge of source locations and Green's functions. Here we investigate the detailed spatiotemporal behavior of deep tremor in western Japan through the development and application of a new slip inversion method. We introduce synthetic template waveforms, which are typical tremor waveforms obtained by stacking LFE seismograms at arranged points along the plate interface. Using these synthetic template waveforms as substitutes for Green's functions, we invert the continuous tremor waveforms using an iterative deconvolution approach with Bayesian constraints. We apply this method to two tremor burst episodes in western and central Shikoku, Japan. The estimated slip distribution from a 12-day tremor burst episode in western Shikoku is heterogeneous, with several patchy areas of slip along the plate interface where rapid moment releases with durations of <100 s regularly occur. We attribute these heterogeneous spatiotemporal slip patterns to heterogeneous material properties along the plate interface. For central Shikoku, where we focus on a tremor burst episode that occurred coincidentally with a very low frequency earthquake (VLF), we observe that the source size of the VLF is much larger than that estimated from tremor activity in western Shikoku. These differences in the size of the slip region may dictate the visibility of VLF signals in observed seismograms, which has implications for the mechanics of slow earthquakes and subduction zone processes.

  12. Robust automated classification of first-motion polarities for focal mechanism determination with machine learning

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Meier, M. A.; Hauksson, E.

    2017-12-01

    Accurate first-motion polarities are essential for determining earthquake focal mechanisms, but are difficult to measure automatically because of picking errors and signal-to-noise issues. Here we develop an algorithm for reliable automated classification of first-motion polarities using machine learning algorithms. A classifier is designed to identify whether the first-motion polarity is up, down, or undefined by examining the waveform data directly. We first improve the accuracy of automatic P-wave onset picks by maximizing a weighted signal/noise ratio for a suite of candidate picks around the automatic pick. We then use the waveform amplitudes before and after the optimized pick as features for the classification. We demonstrate the method's potential by training and testing the classifier on tens of thousands of hand-picked first-motion polarities from the Southern California Seismic Network. The classifier assigned the same polarity as chosen by an analyst in more than 94% of the records. We show that the method is generalizable to a variety of learning algorithms, including neural networks and random forest classifiers. The method is suitable for automated processing of large seismic waveform datasets, and can potentially be used in real-time applications, e.g. for improving the source characterizations of earthquake early warning algorithms.
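
    A minimal sketch of the classification step, using noise-normalized waveform amplitudes around an optimized P pick as features and a random forest (one of the learning algorithms the study reports as suitable); the window lengths, the synthetic traces and the labels below are illustrative placeholders rather than the authors' exact configuration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def polarity_features(trace, pick, n_before=5, n_after=10):
            """Amplitudes just before/after the pick, normalized by the pre-pick noise."""
            noise = np.std(trace[max(0, pick - 200):pick]) + 1e-12
            return trace[pick - n_before:pick + n_after] / noise

        # placeholder training set: traces with analyst labels (+1 up, -1 down, 0 undefined)
        rng = np.random.default_rng(0)
        n, npts, pick = 2000, 400, 200
        traces = 0.1 * rng.standard_normal((n, npts))
        labels = rng.choice([-1, 0, 1], size=n)
        for tr, lab in zip(traces, labels):
            tr[pick:pick + 10] += lab * np.linspace(0, 1, 10)   # synthetic onset

        X = np.array([polarity_features(tr, pick) for tr in traces])
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
        print(clf.score(X, labels))       # training accuracy on the toy data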

  13. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong trade-off among parameters and of the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in terms of model selection and optimization. If the objectives involved are conflicting, models can be ordered only partially; in this case, a Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong trade-off between parameters, the uncertainties in the observations, the geophysical complexities, and even the shortcomings of the inversion technique itself. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observed inter-station surface wave dispersion and the receiver functions.

  14. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half-space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used as a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlations of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched for directly. By implementing both auto- and cross-correlations of the kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of the pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  15. Numerical method for computing Maass cusp forms on triply punctured two-sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, K. T.; Kamari, H. M.; Zainuddin, H.

    2014-03-05

    A quantum mechanical system on a punctured surface modeled on hyperbolic space has always been an important subject of research in mathematics and physics. The corresponding quantum system is governed by the Schrödinger equation, whose solutions are the Maass waveforms. Spectral studies of these Maass waveforms are known to contain both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called Maass Cusp Forms (MCF), and their discrete eigenvalues are not known analytically. We introduce a numerical method based on the Hejhal and Then algorithm, using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point-locater algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.

  16. W17_geowave “3D full waveform geophysical models”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Roy, Corinna

    2018-02-12

    Performance of the MCMC inversion according to the number of cores used for the computation: A) 64 cores, B) 480 cores, C) 816 cores; the true model is represented by the black line. Vsv is the wave speed of S waves polarized in the vertical plane, and ξ is an anisotropy parameter. The Earth is highly anisotropic: the wave speed of seismic waves depends on the polarization of the wave. Seismic inversion of the elastic structure is usually limited to isotropic information such as Vsv. Our research looked at the inversion of Earth anisotropy.

  17. Strategies to Enhance the Model Update in Regions of Weak Sensitivities for Use in Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Nuber, André; Manukyan, Edgar; Maurer, Hansruedi

    2014-05-01

    Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need for separating events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method includes equilibrating the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. We decided to normalise the columns of the Jacobian based on their absolute column sum, but defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second method proposed includes adjusting the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are in the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and the memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are quite balanced and span a much smaller range than in the case of a regular inversion grid.
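
    A minimal sketch of the first strategy described above, column equilibration of the Jacobian with an upper threshold on the scaling factors so that very weak sensitivities are boosted but not over-boosted; the matrix sizes, the cap value, and the exact form of the cap are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        def equilibrate_columns(J, max_boost=100.0):
            """Scale each Jacobian column by the inverse of its absolute column sum,
            capping the factors so insignificant sensitivities are not over-boosted.
            Returns the scaled Jacobian and the factors (needed to unscale the update)."""
            colsum = np.abs(J).sum(axis=0)
            scale = 1.0 / np.maximum(colsum, 1e-30)                 # avoid division by zero
            scale = np.minimum(scale, max_boost * scale.min())      # threshold on the boost
            return J * scale, scale

        # usage inside a Gauss-Newton step: solve the normal equations with the scaled
        # Jacobian, then multiply the resulting update by `scale` to return to the
        # physical model parameters
        rng = np.random.default_rng(0)
        J = rng.standard_normal((500, 200)) * np.logspace(0, -4, 200)   # depth-decaying sensitivities
        J_scaled, scale = equilibrate_columns(J)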

  18. Improving the Accuracy of Coastal Sea Surface Heights by Retracking Decontaminated Radar Altimetry Waveforms

    NASA Astrophysics Data System (ADS)

    Huang, Zhengkai; Wang, Haihong; Luo, Zhicai

    2017-04-01

    Due to the complex coastal topography and energetic ocean dynamics, radar return echoes are contaminated as the satellite footprint approaches or leaves the coastline. Specular peaks are often induced in the trailing edges of contaminated waveforms, leading to errors in the determination of the leading edge and the associated track offset in the waveform retracking process. We propose an improved algorithm, based on Tseng's modification method, to decontaminate coastal (0-7 km from the coastline) waveforms, thereby improving both the utilization and the precision of coastal sea surface heights (SSH). Using Envisat/Jason-2 SGDR data, we point out a shortcoming of Tseng's method and propose a new algorithm that revises the strategy for selecting the reference waveform and for determining the weights used to remove outliers. The reference waveform of the decontamination technique is closer to the real waveform in the offshore area, which avoids the over-modification problem of Tseng's method. Sea-level measurements from a tide gauge station and geoid heights from the EGM2008 model were used to validate the retracking strategy. Experimental results show that the decontaminated waveforms are more suitable than the original and Tseng-modified waveforms, with uniform performance in comparisons against both the tide gauge and the geoid. The retrieved altimetry data in the 0-1 km and 1-7 km coastal zones indicate that a threshold retracker applied to decontaminated waveforms has standard deviations of 73.8 cm and 33 cm with respect to the in situ gauge data, corresponding to improvements of 62.1% and 58% in precision over the unretracked altimetry measurements. The retracked SSHs are also better in the two coastal zones (0-1 km and 1-7 km), with standard deviations of 11.9 cm and 22.7 cm with respect to the geoid height. Furthermore, the comparisons show that the precision of the decontamination technique improves on the best result of the PISTACH product by 0.3 cm and 3.3 cm in the coastal sea. This work is supported by the National Natural Science Foundation of China (Grant Nos. 41174020, 41174021, 41131067) and the open fund of the Guangxi Key Laboratory of Spatial Information and Geometrics (Grant No. 15-140-07-26). Index Terms: retracking, Envisat, Jason-2, coastal sea, decontamination.
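
    For context on the retracking step itself, a simple threshold retracker locates the leading edge as the first range gate whose power exceeds a chosen fraction of the peak above the noise floor, interpolating between gates for sub-gate precision; the retracked gate minus the nominal tracking gate then gives the range correction. The sketch below is a generic threshold retracker, not the specific decontamination algorithm proposed in the study.

        import numpy as np

        def threshold_retrack(waveform, threshold=0.5, noise_gates=10):
            """Return the (fractional) gate index of the leading edge: the first sample
            whose power exceeds `threshold` of the peak above the thermal-noise floor."""
            noise = waveform[:noise_gates].mean()
            level = noise + threshold * (waveform.max() - noise)
            above = np.flatnonzero(waveform >= level)
            if above.size == 0:
                return np.nan
            k = above[0]
            if k == 0:
                return 0.0
            # linear interpolation between gates k-1 and k for sub-gate precision
            return (k - 1) + (level - waveform[k - 1]) / (waveform[k] - waveform[k - 1])

        # the retracked range correction is (retracked_gate - nominal_gate) * gate_width,
        # converted to a correction of the sea surface height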

  19. Resolving the Detailed Spatiotemporal Slip Evolution of Deep Tremor in Western Japan

    NASA Astrophysics Data System (ADS)

    Ohta, Kazuaki; Ide, Satoshi

    2017-12-01

    We study the detailed spatiotemporal behavior of deep tremor in western Japan through the development and application of a new slip inversion method. Although many studies now recognize tremor as shear slip along the plate interface manifested in low-frequency earthquake (LFE) swarms, a conventional slip inversion analysis is not available for tremor due to insufficient knowledge of source locations and Green's functions. Here we introduce synthetic template waveforms, which are typical tremor waveforms obtained by stacking LFE seismograms at arranged points along the plate interface. Using these synthetic template waveforms as substitutes for Green's functions, we invert the continuous tremor waveforms using an iterative deconvolution approach with Bayesian constraints. We apply this method to two tremor burst episodes in western and central Shikoku, Japan. The estimated slip distribution from a 12 day tremor burst episode in western Shikoku is heterogeneous, with several patchy areas of slip along the plate interface where rapid moment releases with durations of <100 s regularly occur. We attribute these heterogeneous spatiotemporal slip patterns to heterogeneous material properties along the plate interface. For central Shikoku, where we focus on a tremor burst episode that occurred coincidentally with a very low frequency earthquake (VLF), we observe that the source size of the VLF is much larger than that estimated from tremor activity in western Shikoku. These differences in the size of the slip region may dictate the visibility of VLF signals in observed seismograms, which has implications for the mechanics of slow earthquakes and subduction zone processes.

  20. Resolvability of regional density structure

    NASA Astrophysics Data System (ADS)

    Plonka, A.; Fichtner, A.

    2016-12-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess whether 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes; while this can produce significant biases in velocity and Q estimates, seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. Since the density imprint we observe is not exclusively linked to travel times and amplitudes of specific phases, we consider waveform differences between complete seismograms. We test the method using a known smooth model of the crust and seismograms with clear Love and Rayleigh waves, showing that, as expected, the first principal kernel maximizes sensitivity to SH and SV velocity structure, respectively, and that the leakage between S velocity, P velocity and density parameter spaces is minimal in the chosen setup. Next, we apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel which would maximize the sensitivity to density, potentially allowing for independent density resolution and, as the final goal, for direct density inversion.
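
    The principal component analysis described above can be sketched as a singular value decomposition of a matrix whose rows are flattened sensitivity kernels; the decay of the singular values measures how linearly independent the kernels are, and the leading right-singular vectors play the role of principal kernels. The kernels below are random placeholders on a common grid, not actual adjoint kernels.

        import numpy as np

        rng = np.random.default_rng(0)
        n_events, n_grid = 81, 5000                     # one flattened kernel per event/parameter

        K_vp = rng.standard_normal((n_events, n_grid))
        K_vs = rng.standard_normal((n_events, n_grid))
        K_rho = 0.2 * K_vs + 0.05 * rng.standard_normal((n_events, n_grid))  # partly correlated

        K = np.vstack([K_vp, K_vs, K_rho])              # rows = kernels for all parameter classes
        K -= K.mean(axis=0)                             # center before the PCA
        U, s, Vt = np.linalg.svd(K, full_matrices=False)

        explained = s ** 2 / np.sum(s ** 2)             # fraction of variance per component
        principal_kernels = Vt                          # rows: orthonormal "principal kernels"
        print(explained[:5])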

  1. Seismic waveform inversion using neural networks

    NASA Astrophysics Data System (ADS)

    De Wit, R. W.; Trampert, J.

    2012-12-01

    Full waveform tomography aims to extract all available information on Earth structure and seismic sources from seismograms. The strongly non-linear nature of this inverse problem is often addressed through simplifying assumptions for the physical theory or data selection, thus potentially neglecting valuable information. Furthermore, the assessment of the quality of the inferred model is often lacking. This calls for the development of methods that fully appreciate the non-linear nature of the inverse problem, whilst providing a quantification of the uncertainties in the final model. We propose to invert seismic waveforms in a fully non-linear way by using artificial neural networks. Neural networks can be viewed as powerful and flexible non-linear filters. They are very common in speech, handwriting and pattern recognition. Mixture Density Networks (MDN) allow us to obtain marginal posterior probability density functions (pdfs) of all model parameters, conditioned on the data. An MDN can approximate an arbitrary conditional pdf as a linear combination of Gaussian kernels. Seismograms serve as input, Earth structure parameters are the so-called targets and network training aims to learn the relationship between input and targets. The network is trained on a large synthetic data set, which we construct by drawing many random Earth models from a prior model pdf and solving the forward problem for each of these models, thus generating synthetic seismograms. As a first step, we aim to construct a 1D Earth model. Training sets are constructed using the Mineos package, which computes synthetic seismograms in a spherically symmetric non-rotating Earth by summing normal modes. We train a network on the body waveforms present in these seismograms. Once the network has been trained, it can be presented with new unseen input data, in our case the body waves in real seismograms. We thus obtain the posterior pdf which represents our final state of knowledge given the information in the training set and the real data.
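
    The MDN output layer parametrizes the conditional pdf as a Gaussian mixture whose weights, means and widths are functions of the input seismogram; training minimizes the negative log-likelihood of the target Earth parameters under that mixture. Below is a minimal, framework-agnostic sketch of the mixture pdf and of the training loss, assuming the mixture parameters have already been produced by some network; it is illustrative only, not the authors' implementation.

        import numpy as np

        def mdn_pdf(t, weights, means, sigmas):
            """p(t | x) as a 1-D Gaussian mixture with the network's outputs for input x."""
            norm = 1.0 / (np.sqrt(2 * np.pi) * sigmas)
            kernels = norm * np.exp(-0.5 * ((t - means) / sigmas) ** 2)
            return np.sum(weights * kernels)

        def mdn_nll(targets, weights, means, sigmas):
            """Training loss: mean negative log-likelihood over the training set."""
            return -np.mean([np.log(mdn_pdf(t, w, m, s) + 1e-300)
                             for t, w, m, s in zip(targets, weights, means, sigmas)])

        # toy check: a 3-component mixture for two training examples
        w = np.array([[0.2, 0.5, 0.3], [0.6, 0.3, 0.1]])
        mu = np.array([[3.0, 5.0, 7.0], [2.0, 4.0, 6.0]])
        sg = np.array([[0.5, 0.5, 1.0], [0.3, 0.8, 0.8]])
        print(mdn_nll(np.array([5.1, 2.2]), w, mu, sg))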

  2. Focal mechanism determination for induced seismicity using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Yuyang; Zhang, Haijiang; Li, Junlun; Yin, Chen; Wu, Furong

    2018-06-01

    Induced seismicity is widely detected during hydraulic fracture stimulation. To better understand the fracturing process, a thorough knowledge of the source mechanism is required. In this study, we develop a new method to determine the focal mechanism for induced seismicity. Three misfit functions are used in our method to measure the differences between observed and modeled data from different aspects, including the waveform, P wave polarity and S/P amplitude ratio. We minimize these misfit functions simultaneously using the neighbourhood algorithm. Through synthetic data tests, we show the ability of our method to yield reliable focal mechanism solutions and study the effect of velocity inaccuracy and location error on the solutions. To mitigate the impact of the uncertainties, we develop a joint inversion method to find the optimal source depth and focal mechanism simultaneously. Using the proposed method, we determine the focal mechanisms of 40 stimulation induced seismic events in an oil/gas field in Oman. By investigating the results, we find that the reactivation of pre-existing faults is the main cause of the induced seismicity in the monitored area. Other observations obtained from the focal mechanism solutions are also consistent with earlier studies in the same area.
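
    The three misfit functions described above can be combined into a single objective that a neighbourhood-algorithm search would minimize. The sketch below is a generic illustration with hypothetical weights and misfit definitions, not the paper's exact formulation.

```python
import numpy as np

def joint_misfit(obs_wave, syn_wave, obs_pol, syn_pol, obs_sp, syn_sp,
                 weights=(1.0, 1.0, 1.0)):
    """Combine three misfits (waveform, P polarity, S/P ratio) into one objective.

    The weights are hypothetical; in practice they would be tuned or the three
    terms minimized simultaneously by the neighbourhood algorithm.
    """
    # Normalized L2 waveform misfit
    m_wave = np.sum((obs_wave - syn_wave) ** 2) / np.sum(obs_wave ** 2)
    # Fraction of stations with inconsistent P-wave first-motion polarity (+1/-1)
    m_pol = np.mean(obs_pol != syn_pol)
    # Log S/P amplitude-ratio misfit (ratios must be positive)
    m_sp = np.mean((np.log10(obs_sp) - np.log10(syn_sp)) ** 2)
    w = np.asarray(weights)
    return w[0] * m_wave + w[1] * m_pol + w[2] * m_sp

# Toy usage with three stations.
rng = np.random.default_rng(7)
obs_w, syn_w = rng.standard_normal(300), rng.standard_normal(300)
print(joint_misfit(obs_w, syn_w,
                   obs_pol=np.array([1, -1, 1]), syn_pol=np.array([1, -1, -1]),
                   obs_sp=np.array([0.8, 1.5, 2.0]), syn_sp=np.array([1.0, 1.2, 2.5])))
```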

  3. System-on-chip architecture and validation for real-time transceiver optimization: APC implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Suarez, Hernan; Zhang, Yan R.

    2015-05-01

    New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power, high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression for real-time transceiver optimization is presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through the high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.

  4. Viscoelastic property identification from waveform reconstruction

    NASA Astrophysics Data System (ADS)

    Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.

    2002-05-01

    An inverse method is proposed for the determination of the viscoelastic properties of material plates from the plane-wave transmitted acoustic field. Innovations lie in a two-step inversion scheme based on the well-known maximum-likelihood principle with an analytic signal formulation. In addition, by establishing the analytical formulation of the plate transmission coefficient, we implement an efficient and only slightly noise-sensitive process suited to both very thin plates and strongly dispersive media.

  5. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian inversion scheme for high-dimensional, underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfying. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm^3 cm^-3. This RMSE value reduces to less than 0.02 cm^3 cm^-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature dependence of the coaxial cable properties and the definition of an appropriate statistical model of the residual errors.
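
    A Gaussian Markov random field prior of the kind used here as a regularization operator can be sketched as a smoothness penalty on the 1 cm moisture grid; the coupling strength below is purely illustrative, not the study's actual parameterization.

```python
import numpy as np

def gmrf_logprior(theta, precision_scale=100.0):
    """Unnormalized log-density of a first-order GMRF (smoothness) prior.

    theta: moisture content on a regular grid (e.g. 70 values for a 70 cm probe
    at 1 cm resolution). precision_scale controls how strongly neighbouring
    cells are coupled; its value here is purely illustrative.
    """
    diff = np.diff(theta)                      # differences between neighbouring cells
    return -0.5 * precision_scale * np.sum(diff ** 2)

# A smooth profile is favoured over a rough one with similar values.
smooth = np.linspace(0.10, 0.35, 70)
rough = smooth + 0.05 * np.random.default_rng(1).standard_normal(70)
print(gmrf_logprior(smooth), gmrf_logprior(rough))
```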

  6. Waveform inversion of very long period impulsive signals associated with magmatic injection beneath Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Ohminato, T.; Chouet, B.A.; Dawson, P.; Kedar, S.

    1998-01-01

    We use data from broadband seismometers deployed around the summit of Kilauea Volcano to quantify the mechanism associated with a transient in the flow of magma feeding the east rift eruption of the volcano. The transient is marked by rapid inflation of the Kilauea summit peaking at 22 μrad 4.5 hours after the event onset, followed by slow deflation over a period of 3 days. Superimposed on the summit inflation is a series of sawtooth displacement pulses, each characterized by a sudden drop in amplitude lasting 5-10 s followed by an exponential recovery lasting 1-3 min. The sawtooth waveforms display almost identical shapes, suggesting a process involving the repeated activation of a fixed source. The particle motion associated with each sawtooth is almost linear, and its major swing shows compressional motion at all stations. Analyses of semblance and particle motion are consistent with a point source located 1 km beneath the northeast edge of the Halemaumau pit crater. To estimate the source mechanism, we apply a moment tensor inversion to the waveform data, assuming a point source embedded in a homogeneous half-space with compressional and shear wave velocities representative of the average medium properties at shallow depth under Kilauea. Synthetic waveforms are constructed by a superposition of impulse responses for six moment tensor components and three single force components. The origin times of individual impulses are distributed along the time axis at appropriately small, equal intervals, and their amplitudes are determined by least squares. In this inversion, the source time functions of the six tensor and three force components are determined simultaneously. We confirm the accuracy of the inversion method through a series of numerical tests. The results from the inversion show that the waveform data are well explained by a pulsating transport mechanism operating on a subhorizontal crack linking the summit reservoir to the east rift of Kilauea. The crack acts like a buffer in which a batch of fluid (magma and/or gas) accumulates over a period of 1-3 min before being rapidly injected into a larger reservoir (possibly the east rift) over a timescale of 5-10 s. The seismic moment and volume change associated with a typical batch of fluid are approximately 10^14 N m and 3000 m^3, respectively. Our results also point to the existence of a single force component with an amplitude of 10^9 N, which may be explained as the drag force generated by the flow of viscous magma through a narrow constriction in the flow path. The total volume of magma associated with the 4.5-hour-long activation of the pulsating source is roughly 500,000 m^3, in good agreement with the integrated volume flow rate of magma estimated near the eruptive site.
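
    The least-squares step described above, in which the impulse amplitudes of the six moment tensor and three single force components are determined, amounts to solving a linear system d = Gm. The sketch below shows that step with hypothetical Green's functions standing in for the half-space impulse responses.

```python
import numpy as np

# Hypothetical setup: waveform samples at several stations stacked into one
# data vector d, and a matrix G whose 9 columns are the impulse responses
# (Green's functions) for the 6 moment-tensor and 3 single-force components.
# A real G would come from wavefield modelling in a homogeneous half-space,
# as in the study.
rng = np.random.default_rng(0)
n_data, n_src = 3000, 9
G = rng.standard_normal((n_data, n_src))
m_true = rng.standard_normal(n_src)
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

# Amplitudes of the source-time-function impulses by linear least squares.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.allclose(m_est, m_true, atol=0.01))
```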

  7. 3D frequency-domain finite-difference modeling of acoustic wave propagation

    NASA Astrophysics Data System (ADS)

    Operto, S.; Virieux, J.

    2006-12-01

    We present a 3D frequency-domain finite-difference method for acoustic wave propagation modeling. This method is developed as a tool to perform 3D frequency-domain full-waveform inversion of wide-angle seismic data. For wide-angle data, frequency-domain full-waveform inversion can be applied to only a few discrete frequencies to develop a reliable velocity model. Frequency-domain finite-difference (FD) modeling of wave propagation requires the resolution of a huge sparse system of linear equations. If this system can be solved with a direct method, solutions for multiple sources can be computed efficiently once the underlying matrix has been factorized. The drawback of the direct method is the memory requirement resulting from the fill-in of the matrix during factorization. We assess in this study whether representative problems can be addressed in 3D geometry with such an approach. We start from the velocity-stress formulation of the 3D acoustic wave equation. The spatial derivatives are discretized with a second-order accurate staggered-grid stencil on different coordinate systems such that the axes span as many directions as possible. Once the discrete equations are developed on each coordinate system, the particle velocity fields are eliminated from the first-order hyperbolic system (following the so-called parsimonious staggered-grid method), leading to second-order elliptic wave equations in pressure. The second-order wave equations discretized on each coordinate system are combined linearly to mitigate numerical anisotropy. Second, grid dispersion is minimized by replacing the mass term at the collocation point with its weighted average over all the grid points of the stencil. Use of a second-order accurate staggered-grid stencil allows us to reduce the bandwidth of the matrix to be factorized. The final stencil incorporates 27 points. Absorbing conditions are PML. The system is solved using the parallel direct solver MUMPS developed for distributed-memory computers. The MUMPS solver is based on a multifrontal method for LU factorization. We used the METIS algorithm to perform re-ordering of the matrix coefficients before factorization. Four grid points per minimum wavelength are used for discretization. We applied our algorithm to the 3D SEG/EAGE synthetic onshore OVERTHRUST model of dimensions 20 x 20 x 4.65 km. The velocities range between 2 and 6 km/s. We performed the simulations using 192 processors with 2 Gbytes of RAM per processor. We performed simulations for the 5 Hz, 7 Hz and 10 Hz frequencies in fractions of the OVERTHRUST model. The grid intervals were 100 m, 75 m and 50 m, respectively. The grid dimensions were 207x207x53, 275x218x71 and 409x109x102, respectively, corresponding to 100, 80 and 25 percent of the model. The times for factorization were 20 min, 108 min and 163 min, respectively. The times for resolution were 3.8, 9.3 and 10.3 s per source. The total memory used during factorization was 143, 384 and 449 Gbytes, respectively. One can note the huge memory requirement for factorization and the efficiency of the direct method in computing solutions for a large number of sources. This highlights the respective drawback and merit of the frequency-domain approach with respect to its time-domain counterpart. These results show that 3D acoustic frequency-domain wave propagation modeling can be performed at low frequencies using a direct solver on large clusters of PCs.
This forward modeling algorithm may be used in the future as a tool to image the first kilometers of the crust by frequency-domain full-waveform inversion. For larger problems, we will use the out-of-core memory during factorization that has been implemented by the authors of MUMPS.
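
    The key efficiency argument of the frequency-domain direct-solver approach - factorize the impedance matrix once, then solve cheaply for many sources - can be illustrated with a toy sparse system (a small 2-D Helmholtz-like operator standing in for the 27-point 3D stencil, and SuperLU standing in for MUMPS):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy 2-D Helmholtz-like operator (5-point Laplacian plus a complex mass term
# mimicking attenuation/PML damping, which also keeps the matrix nonsingular).
# The point is the workflow, not the stencil: factorize once, solve per source.
n = 100                                   # grid is n x n
I = sp.identity(n, format="csr")
L1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.kron(I, L1) + sp.kron(L1, I) + (0.05 + 0.02j) * sp.identity(n * n)
A = A.tocsc()

lu = splu(A)                              # expensive: LU factorization (done once)

rhs = np.zeros((n * n, 3))                # three point sources at different nodes
for k, node in enumerate([10 * n + 10, 50 * n + 50, 80 * n + 20]):
    rhs[node, k] = 1.0

# Cheap: one triangular solve per source reusing the same factors.
fields = np.column_stack([lu.solve(rhs[:, k]) for k in range(rhs.shape[1])])
print(fields.shape)                       # one monochromatic wavefield per source
```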

  8. The Algorithm Theoretical Basis Document for the Derivation of Range and Range Distributions from Laser Pulse Waveform Analysis for Surface Elevations, Roughness, Slope, and Vegetation Heights

    NASA Technical Reports Server (NTRS)

    Brenner, Anita C.; Zwally, H. Jay; Bentley, Charles R.; Csatho, Bea M.; Harding, David J.; Hofton, Michelle A.; Minster, Jean-Bernard; Roberts, LeeAnne; Saba, Jack L.; Thomas, Robert H.; hide

    2012-01-01

    The primary purpose of the GLAS instrument is to detect ice elevation changes over time, which are used to derive changes in ice volume. Other objectives include measuring sea ice freeboard, ocean and land surface elevation, surface roughness, and canopy heights over land. This Algorithm Theoretical Basis Document (ATBD) describes the theory and implementation behind the algorithms used to produce the level 1B products for waveform parameters and global elevation, and the level 2 products that are specific to ice sheet, sea ice, land, and ocean elevations, respectively. These output products are defined in detail, along with the associated quality and the constraints and assumptions used to derive them.

  9. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function based on symbol estimation is obtained. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is considered for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal features to improve the positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of the symbol search by nearly one hundred times, achieving a globally optimal solution.
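
    For illustration, a generic continuous particle swarm optimizer is sketched below; the DPD method above searches over discrete symbols and uses an improved PSO variant, and the parameters here are standard textbook choices rather than the paper's.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-5.0, 5.0), seed=0):
    """Generic particle swarm optimizer (continuous version, for illustration)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_cost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_cost)].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + cognitive + social terms
        x = np.clip(x + v, lo, hi)
        costs = np.apply_along_axis(cost, 1, x)
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

# Toy usage: minimize a quadratic bowl centred at (1, 1, 1).
best_x, best_f = pso_minimize(lambda p: np.sum((p - 1.0) ** 2), dim=3)
print(best_x, best_f)
```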

  10. The Virtual Earthquake and Seismology Research Community e-science environment in Europe (VERCE) FP7-INFRA-2011-2 project

    NASA Astrophysics Data System (ADS)

    Vilotte, J.-P.; Atkinson, M.; Michelini, A.; Igel, H.; van Eck, T.

    2012-04-01

    Increasingly dense seismic and geodetic networks are continuously transmitting a growing wealth of data from around the world. The multi-use of these data led the seismological community to pioneer globally distributed open-access data infrastructures, standard services and formats, e.g., the Federation of Digital Seismograph Networks (FDSN) and the European Integrated Data Archives (EIDA). Our ability to acquire observational data outpaces our ability to manage, analyze and model them. Research in seismology is today facing a fundamental paradigm shift. Enabling advanced data-intensive analysis and modeling applications challenges conventional storage, computation and communication models and requires a new holistic approach. It is instrumental to exploit the cornucopia of data and to guarantee optimal operation and design of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of data-intensive seismological applications in data analysis and modeling. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of those applications, integrating the data infrastructures with Grid, Cloud and HPC infrastructures. It will allow prototyping solutions for new use cases as they emerge within the European Plate Observing System (EPOS), the ESFRI initiative of the solid Earth community. Computational seismology and information management are increasingly revolving around massive amounts of data that stem from: (1) the flood of data from the observational systems; (2) the flood of data from large-scale simulations and inversions; (3) the ability to economically store petabytes of data online; (4) the evolving Internet and data-aware computing capabilities. As data-intensive applications rapidly increase in scale and complexity, they require additional service-oriented architectures offering virtualization-based flexibility for complex and re-usable workflows. Scientific information management poses computer science challenges: acquisition, organization, query and visualization tasks scale almost linearly with the data volumes. The commonly used FTP-GREP metaphor allows gigabyte-sized datasets to be scanned today, but will not work for scanning terabyte-sized continuous waveform datasets. New data analysis and modeling methods, exploiting the signal coherence within dense network arrays, are nonlinear. Pair-algorithms on N points scale as N^2. Waveform inversion and stochastic simulations raise computing and data handling challenges. These applications are unfeasible for tera-scale datasets without new parallel algorithms that use near-linear processing, storage and bandwidth, and that can exploit new computing paradigms enabled by the intersection of several technologies (HPC, parallel scalable database crawlers, data-aware HPC). These issues will be discussed based on a number of core pilot data-intensive applications and use cases retained in VERCE. These core applications are related to: (1) data processing and data analysis methods based on correlation techniques; (2) CPU-intensive applications such as large-scale simulation of synthetic waveforms in complex earth systems, and full waveform inversion and tomography. We shall analyze their workflows and data flows, and their requirements for a new service-oriented architecture and a data-aware platform with services and tools.
Finally, we will outline the importance of a new collaborative environment between seismology and computer science, together with the need for the emergence and the recognition of 'research technologists' mastering the evolving data-aware technologies and the data-intensive research goals in seismology.

  11. Seismic source inversion using Green's reciprocity and a 3-D structural model for the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Simutė, S.; Fichtner, A.

    2015-12-01

    We present a feasibility study for seismic source inversions using a 3-D velocity model for the Japanese Islands. The approach involves numerically calculating 3-D Green's tensors, which is made efficient by exploiting Green's reciprocity. The rationale for 3-D seismic source inversion has several aspects. For structurally complex regions, such as the Japan area, it is necessary to account for 3-D Earth heterogeneities to prevent unknown structure from polluting source solutions. In addition, earthquake source characterisation can serve as a means to delineate existing faults. Source parameters obtained for more realistic Earth models can then facilitate improvements in seismic tomography and early warning systems, which are particularly important for seismically active areas, such as Japan. We have created a database of numerically computed 3-D Green's reciprocals for a 40° × 40° × 600 km area around the Japanese Archipelago for >150 broadband stations. For this we used a regional 3-D velocity model recently obtained from full waveform inversion. The model includes attenuation and radial anisotropy and explains seismic waveform data for periods between 10 and 80 s generally well. The aim is to perform source inversions using the database of 3-D Green's tensors. As preliminary steps, we present initial concepts to address issues that are at the basis of our approach. We first investigate the extent to which Green's reciprocity works in a discrete domain. Given the substantial volume of computed Green's tensors, we address storage requirements and file formatting. We discuss the importance of the initial source model, as an intelligent choice can substantially reduce the search volume. Possibilities to perform a Bayesian inversion and ways to move to finite source inversion are also explored.

  12. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Determining earthquake source parameters is an essential problem in seismology. Accurate and timely determination of the earthquake parameters (such as moment, depth, strike, dip and rake of the fault planes) is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, are also essential, as detailed kinematic studies have become routine work for seismologists. However, among these events, some behave very unusually and intrigue seismologists. These earthquakes usually consist of two similar-sized sub-events occurring within a very short time interval, such as the mb 4.5 event of 9 December 2003 in Virginia. Studying these special events, including determining the source parameters of each sub-event, will be helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed up, making the inversion difficult. For common events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth using a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously obtain the parameters of two distinct sub-events. However, the simultaneous inversion of both sub-events makes the computation very time consuming, so we developed a hybrid GPU and CPU version of CAP (HYBRID_CAP) to improve the computational efficiency. Thanks to the advantages of multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the Virginia, USA events of 9 December 2003, we re-invert the source parameters, and detailed analysis of the regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, with no need for human intervention.
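
    The CAP-style grid search over strike, dip, and rake can be sketched as follows; the misfit function here is a toy stand-in for the weighted body- and surface-wave misfit that CAP actually evaluates.

```python
import numpy as np

def grid_search_mechanism(misfit, d_strike=10, d_dip=10, d_rake=10):
    """Brute-force search over strike/dip/rake, the kind of grid search CAP
    performs. `misfit` is user-supplied; the real misfit compares observed and
    synthetic body and surface waves with independent time shifts and weights."""
    best, best_val = None, np.inf
    for strike in range(0, 360, d_strike):
        for dip in range(0, 91, d_dip):
            for rake in range(-180, 180, d_rake):
                val = misfit(strike, dip, rake)
                if val < best_val:
                    best, best_val = (strike, dip, rake), val
    return best, best_val

# Toy misfit with a minimum near strike=65, dip=32, rake=135.
toy = lambda s, d, r: (s - 65) ** 2 + (d - 32) ** 2 + (r - 135) ** 2
print(grid_search_mechanism(toy))
```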

  13. Seismic Evidence for Fluid/Gas Beneath the Mentawai Fore-Arc Basin, Central Sumatra

    NASA Astrophysics Data System (ADS)

    Huot, Gabriel; Singh, Satish C.

    2018-02-01

    Since 2004, there have been three great interplate earthquakes (Mw > 8.0) offshore Sumatra. In addition to rupturing the megathrust, these earthquakes might also have ruptured the backthrusts that bound the Andaman Islands to the Mentawai Islands toward the forearc basins. Here we apply a combination of traveltime tomography and seismic full waveform inversion to an ultralong offset seismic reflection profile from the Mentawai forearc basin, in the region of the 2007 Mw 8.4 Bengkulu earthquake. We perform a waveform inversion of far-offset data followed by a waveform inversion of near-offset data, using the starting model derived from the traveltime tomography. Our results show the presence of a large, low-velocity anomaly above the backthrust. The seismic reflection image indicates that this low-velocity anomaly lies either within highly compacted sediments from the accretionary wedge or within highly deformed sediments from the forearc basin. The porosity estimation, using effective medium theory, suggests that a small amount of gas (from 2 to 13%) or a significant amount of fluid (from 17 to 40%) could generate this low-velocity zone. The presence of fluids and the observation of a bottom-simulating reflector below a push-up ridge might be associated with mud diapirism. The fluids could originate locally from the dewatering of the sediments of the accretionary wedge or forearc basin. The high reflectivity of the backthrust in this region might also indicate a deeper fluid origin, either from underplated sediments on the subduction interface or from the serpentinized mantle wedge.

  14. Tsunami Source Inversion Using Tide Gauge and DART Tsunami Waveforms of the 2017 Mw8.2 Mexico Earthquake

    NASA Astrophysics Data System (ADS)

    Adriano, Bruno; Fujii, Yushiro; Koshimura, Shunichi; Mas, Erick; Ruiz-Angulo, Angel; Estrada, Miguel

    2018-01-01

    On September 8, 2017 (UTC), a normal-fault earthquake occurred 87 km off the southeast coast of Mexico. This earthquake generated a tsunami that was recorded at coastal tide gauge and offshore buoy stations. First, we conducted a numerical tsunami simulation using a single-fault model to understand the tsunami characteristics near the rupture area, focusing on the nearby tide gauge stations. Second, the tsunami source of this event was estimated from inversion of tsunami waveforms recorded at six coastal stations and three buoys located in the deep ocean. Using the aftershock distribution within 1 day following the main shock, the fault plane orientation had a northeast dip direction (strike = 320°, dip = 77°, and rake = -92°). The results of the tsunami waveform inversion revealed that the fault area was 240 km × 90 km in size with most of the largest slip occurring on the middle and deepest segments of the fault. The maximum slip was 6.03 m from a 30 × 30 km^2 segment that was 64.82 km deep at the center of the fault area. The estimated slip distribution showed that the main asperity was at the center of the fault area. The second asperity, with an average slip of 5.5 m, was found on the northwest-most segments. The estimated slip distribution yielded a seismic moment of 2.9 × 10^{21} N m (Mw = 8.24), which was calculated assuming an average rigidity of 7 × 10^{10} N/m^2.
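
    Tsunami waveform inversions of this kind are commonly posed as a linear problem with a non-negativity constraint on slip: precomputed unit-slip waveforms for each subfault are combined to fit the observed records. The sketch below uses hypothetical Green's functions; only the workflow, not the numbers, reflects the study.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical linear tsunami inversion: columns of G are waveforms at the tide
# gauges/DART buoys computed for unit slip on each subfault; d is the observed
# waveform vector; slip must be non-negative.
rng = np.random.default_rng(2)
n_samples, n_subfaults = 2000, 24          # e.g. a fault cut into 30 km x 30 km squares
G = rng.standard_normal((n_samples, n_subfaults))
slip_true = np.zeros(n_subfaults)
slip_true[[10, 11, 16]] = [6.0, 4.0, 5.5]  # a few asperities
d = G @ slip_true + 0.05 * rng.standard_normal(n_samples)

slip_est, residual = nnls(G, d)
# Seismic moment M0 = rigidity * subfault area * total slip, then Mw = (log10 M0 - 9.1) / 1.5.
moment = 7e10 * (30e3 * 30e3) * slip_est.sum()
Mw = (np.log10(moment) - 9.1) / 1.5
print(slip_est.round(2), round(Mw, 2))
```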

  15. Moment tensor inversion with three-dimensional sensor configuration of mining induced seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-06-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). Developing a stable algorithm for moment tensor inversion of comparatively small mining induced earthquakes, resolving both the double-couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3-D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3-D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameter accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. These tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double-couple, compensated linear vector dipole and isotropic contributions, as well as the strike, dip and rake configurations of the double-couple term, were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.

  16. Moment Tensor Inversion with 3D sensor configuration of Mining Induced Seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-03-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). Developing a stable algorithm for moment tensor inversion of comparatively small mining induced earthquakes, resolving both the double couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameter accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. These tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to 8 events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double couple, compensated linear vector dipole and isotropic contributions, as well as the strike, dip and rake configurations of the double couple term, were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.

  17. Waveform Optimization for Target Estimation by Cognitive Radar with Multiple Antennas.

    PubMed

    Yao, Yu; Zhao, Junhui; Wu, Lenan

    2018-05-29

    A new scheme based on Kalman filtering to optimize the waveforms of an adaptive multi-antenna radar system for target impulse response (TIR) estimation is presented. This work aims to improve the performance of TIR estimation by making use of the temporal correlation between successive received signals, and to minimize the mean square error (MSE) of the TIR estimate. The waveform design approach is based upon constant learning of the target features at the receiver. Under the multiple-antenna scenario, a dynamic feedback loop control system is established to monitor in real time the change in the target features extracted from the received signals. The transmitter adapts its transmitted waveform to suit the time-invariant environment. Finally, the simulation results show that, compared with the waveform design method based on the MAP criterion, the proposed waveform design algorithm is able to improve the performance of TIR estimation for extended targets over multiple iterations, and has a relatively lower level of complexity.
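
    A generic linear Kalman filter predict/update step of the kind underlying such a scheme is sketched below, with the state taken to be the sampled TIR and the measurement matrix built from the transmitted waveform; all matrices are placeholders rather than the paper's model.

```python
import numpy as np

def kalman_update(x, P, y, H, R, Q=None, F=None):
    """One predict/update step of a linear Kalman filter.

    Here the state x would be the sampled target impulse response (TIR), H the
    measurement matrix built from the transmitted waveform, y the received
    echo, R the noise covariance. The defaults below are generic placeholders.
    """
    n = x.size
    F = np.eye(n) if F is None else F          # state transition (TIR assumed slowly varying)
    Q = 1e-4 * np.eye(n) if Q is None else Q   # process noise
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: estimate a 4-sample TIR from a 6-sample noisy echo; repeated
# updates with the same echo drive the estimate toward the least-squares TIR.
rng = np.random.default_rng(0)
tir = np.array([1.0, 0.5, -0.3, 0.1])
H = rng.standard_normal((6, 4))
y = H @ tir + 0.01 * rng.standard_normal(6)
x, P = np.zeros(4), np.eye(4)
for _ in range(10):
    x, P = kalman_update(x, P, y, H, 1e-4 * np.eye(6))
print(x.round(2))
```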

  18. Full moment tensor and source location inversion based on full waveform adjoint method

    NASA Astrophysics Data System (ADS)

    Morency, C.

    2012-12-01

    The development of high-performance computing and numerical techniques has enabled global and regional tomography to reach high levels of precision, and seismic adjoint tomography has become a state-of-the-art tomographic technique. The method was successfully used for crustal tomography of Southern California (Tape et al., 2009) and Europe (Zhu et al., 2012). Here, I will focus on the determination of source parameters (full moment tensor and location) based on the same approach (Kim et al., 2011). The method relies on full wave simulations and takes advantage of the misfit between observed and synthetic seismograms. An adjoint wavefield is calculated by back-propagating the difference between observations and synthetics from the receivers to the source. The interaction between this adjoint wavefield and the regular forward wavefield helps define the Frechet derivatives of the source parameters, that is, the sensitivity of the misfit with respect to the source parameters. The source parameters are then recovered by minimizing the misfit with a conjugate gradient algorithm using the Frechet derivatives. First, I will demonstrate the method on synthetic cases before tackling events recorded at The Geysers. The velocity model used at The Geysers is based on the USGS 3D velocity model. Waveform datasets come from the Northern California Earthquake Data Center. Finally, I will discuss strategies to ultimately use this method to characterize smaller events for microseismic and induced seismicity monitoring. References: - Tape, C., Q. Liu, A. Maggi, and J. Tromp, 2009, Adjoint tomography of the Southern California crust: Science, 325, 988-992. - Zhu, H., Bozdag, E., Peter, D., and Tromp, J., 2012, Structure of the European upper mantle revealed by adjoint method: Nature Geoscience, 5, 493-498. - Kim, Y., Q. Liu, and J. Tromp, 2011, Adjoint centroid-moment tensor inversions: Geophys. J. Int., 186, 264-278. Prepared by LLNL under Contract DE-AC52-07NA27344.
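
    The gradient-based recovery of source parameters can be sketched with a linear forward operator standing in for the adjoint machinery: the transpose of the operator applied to the residual plays the role of the Frechet-derivative-based gradient, which a conjugate gradient routine then uses to minimize the misfit. This is an illustration only; in the real method the gradient comes from the interaction of the forward and adjoint wavefields, not from an explicit matrix.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the adjoint machinery: a linear operator J maps 9 source
# parameters (e.g. 6 moment-tensor components and 3 location shifts) to
# waveform samples, so J^T r plays the role of the Frechet-derivative gradient.
rng = np.random.default_rng(3)
n_data, n_par = 500, 9
J = rng.standard_normal((n_data, n_par))
m_true = rng.standard_normal(n_par)
d_obs = J @ m_true

def misfit(m):
    r = J @ m - d_obs
    return 0.5 * r @ r           # least-squares waveform misfit

def gradient(m):
    return J.T @ (J @ m - d_obs)  # "adjoint" gradient for this toy linear model

res = minimize(misfit, np.zeros(n_par), jac=gradient, method="CG")
print(np.allclose(res.x, m_true, atol=1e-3))
```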

  19. 3D elastic full waveform inversion: case study from a land seismic survey

    NASA Astrophysics Data System (ADS)

    Kormann, Jean; Marti, David; Rodriguez, Juan-Esteban; Marzan, Ignacio; Ferrer, Miguel; Gutierrez, Natalia; Farres, Albert; Hanzich, Mauricio; de la Puente, Josep; Carbonell, Ramon

    2016-04-01

    Full Waveform Inversion (FWI) is one of the most advanced processing methods and is now reaching a mature state after years of solving theoretical and technical issues such as the non-uniqueness of the solution and harnessing the huge computational power required by realistic scenarios. BSIT (Barcelona Subsurface Imaging Tools, www.bsc.es/bsit) includes an FWI algorithm that can tackle very complex problems involving large datasets. We present here the application of this system to a 3D dataset acquired to constrain the shallow subsurface. This is where the wavefield is the most complicated, because most of the wavefield conversions take place in the shallow region and also because the medium is much more laterally heterogeneous. With this in mind, at least an isotropic elastic approximation would be suitable as the kernel engine for FWI. The current study explores the possibilities of applying elastic isotropic FWI using only the vertical component of the recorded seismograms. The survey covers an area of 500×500 m2 and consists of a 10 m×20 m receiver grid combined with a 250 kg accelerated weight-drop source on a displaced 20 m×20 m grid. One of the main challenges in this case study is the costly 3D modeling that includes topography and substantial free-surface effects. FWI is applied to a data subset (shooting lines 4 to 12) and is performed for 3 frequencies ranging from 15 to 25 Hz. The starting models are obtained from travel-time tomography, and the whole computation was run on 75 nodes of the MareNostrum supercomputer over 3 days. The resulting models provide a higher resolution of the subsurface structures and show a good correlation with the available borehole measurements. FWI thus allows this 1D (borehole) knowledge to be reliably extended to 3D.

  20. P- and S-wave Receiver Function Imaging with Scattering Kernels

    NASA Astrophysics Data System (ADS)

    Hansen, S. M.; Schmandt, B.

    2017-12-01

    Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).

  1. Velocity models and images using full waveform inversion and reverse time migration for the offshore permafrost in the Canadian shelf of Beaufort Sea, Arctic

    NASA Astrophysics Data System (ADS)

    Kang, S. G.; Hong, J. K.; Jin, Y. K.; Kim, S.; Kim, Y. G.; Dallimore, S.; Riedel, M.; Shin, C.

    2015-12-01

    During Expedition ARA05C (August 26 to September 19, 2014) of the Korean icebreaker RV ARAON, multi-channel seismic (MCS) data were acquired on the outer shelf and slope of the Canadian Beaufort Sea to investigate the distribution and internal geological structures of offshore ice-bonded permafrost and gas hydrates, totaling 998 line-km with 19,962 shots. The MCS data were recorded using a 1500 m long solid-type streamer with 120 channels. Shot and group spacings were 50 m and 12.5 m, respectively. Most MCS survey lines were designed perpendicular and parallel to the strike of the shelf break. Ice-bonded permafrost or ice-bearing sediments are widely distributed under the Beaufort Sea shelf; they formed during periods of lower sea level, when portions of the shelf in less than ~100 m water depth were an emergent coastal plain exposed to very cold surface conditions. The seismic P-wave velocity is an important geophysical parameter for identifying the distribution of ice-bonded permafrost, which has high velocity, in this area. Recently, full waveform inversion (FWI) and reverse time migration (RTM) have become commonly used to delineate detailed seismic velocity information and seismic images of geological structures. FWI is a data-fitting procedure based on wavefield modeling and numerical analysis to extract quantitative geophysical parameters such as P- and S-wave velocities and density from seismic data. RTM, based on the two-way wave equation, is a useful technique for constructing accurate, amplitude-preserving seismic images from field data. In this study, we present a two-dimensional P-wave velocity model (Figure 1) obtained using the FWI algorithm to delineate the top and bottom boundaries of ice-bonded permafrost on the Canadian shelf of the Beaufort Sea. In addition, we construct an amplitude-preserving migrated seismic image using RTM to interpret the geological history involved in the evolution of the permafrost.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldridge, David F.

    A reciprocity theorem is an explicit mathematical relationship between two different wavefields that can exist within the same space-time configuration. Reciprocity theorems provide the theoretical underpinning for modern full waveform inversion solutions, and also suggest practical strategies for speeding up large-scale numerical modeling of geophysical datasets. In the present work, several previously developed electromagnetic reciprocity theorems are generalized to accommodate a broader range of medium, source, and receiver types. Reciprocity relations enabling the interchange of various types of point sources and point receivers within a three-dimensional electromagnetic model are derived. Two numerical modeling algorithms in current use are successfully tested for adherence to reciprocity. Finally, the reciprocity theorem forms the point of departure for a lengthy derivation of electromagnetic Frechet derivatives. These mathematical objects quantify the sensitivity of geophysical electromagnetic data to variations in medium parameters, and thus constitute indispensable tools for solution of the full waveform inverse problem. ACKNOWLEDGEMENTS: Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Significant portions of the work reported herein were conducted under a Cooperative Research and Development Agreement (CRADA) between Sandia National Laboratories (SNL) and CARBO Ceramics Incorporated. The author acknowledges Mr. Chad Cannan and Mr. Terry Palisch of CARBO Ceramics, and Ms. Amy Halloran, manager of SNL's Geophysics and Atmospheric Sciences Department, for their interest in and encouragement of this work. Special thanks are due to Dr. Lewis C. Bartel (recently retired from Sandia National Laboratories and now a geophysical consultant) and Dr. Chester J. Weiss (recently rejoined with Sandia National Laboratories) for many stimulating (and reciprocal!) discussions regarding the topic at hand.
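
    A discrete reciprocity check of the kind mentioned above can be sketched on a toy symmetric system: the Green's function of a symmetric impedance matrix satisfies G(r, s) = G(s, r), so swapping a point source and a point receiver must leave the recorded value unchanged. The matrix below is a random symmetric positive-definite placeholder, not output of an EM modeling code.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)                   # symmetric positive-definite "impedance" matrix

src, rec = 17, 123
e_src, e_rec = np.zeros(n), np.zeros(n)
e_src[src], e_rec[rec] = 1.0, 1.0

field_at_rec = np.linalg.solve(A, e_src)[rec]   # source at 'src', record at 'rec'
field_at_src = np.linalg.solve(A, e_rec)[src]   # roles interchanged
print(np.isclose(field_at_rec, field_at_src))   # True to machine precision
```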

  3. Reappraisal of the 2010 Maule, 2014 Iquique, 2015 Illapel through Inversion of Geodetic Data and Tsunami Waveforms Using the Optimal Time Alignment (OTA) Method

    NASA Astrophysics Data System (ADS)

    Romano, F.; Lorito, S.; Piatanesi, A.; Volpe, M.; Lay, T.; Tolomei, C.; Murphy, S.; Tonini, R.; Escalante, C.; Castro, M. J.; Gonzalez-Vida, J. M.; Macias, J.

    2017-12-01

    The Chile subduction zone is one of the most seismically active regions in the world and has hosted a number of great tsunamigenic earthquakes in the past. In particular, during the last 7 years three M8+ earthquakes occurred near the Chilean coast: the 2010 M8.8 Maule, the 2014 M8.1 Iquique, and the 2015 M8.3 Illapel earthquakes. The rupture processes of these earthquakes have been studied using different kinds of geophysical observations such as seismic, geodetic, and tsunami data; in particular, tsunami waveforms are important for constraining the slip on the offshore portion of the fault. However, it has been shown that forward modelling of tsunami data can be affected by the unavailability of accurate bathymetric models, especially in the vicinity of the tide gauges, and in the far field by water density gradients, ocean floor elasticity, or geopotential gravity changes, which are generally neglected. This can result in a mismatch between observed and predicted tsunami signals, thus affecting the retrieved image of the tsunami source. Recently, a method has been proposed for automatic correction of this mismatch during the nonlinear inversion (optimal time alignment, OTA; Romano et al., GRL, 2016). Here, we present a reappraisal of the joint inversion of tsunami data, using the OTA procedure, and geodetic data for the Maule, Iquique, and Illapel earthquakes. We compare the results with those obtained by tsunami inversion without OTA and with other published inversion results.
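
    The core operation behind optimal time alignment - finding, per station, the time shift that best aligns predicted and observed tsunami waveforms - can be sketched with a simple cross-correlation scan; in the actual OTA procedure the shifts are determined within the nonlinear inversion itself, and the signals below are synthetic placeholders.

```python
import numpy as np

def best_time_shift(obs, pred, dt, max_shift):
    """Return the shift (s) of `pred` that maximizes its correlation with `obs`.

    A positive shift means the prediction must be delayed to match the
    observation (e.g. because the true travel time is longer than modeled).
    """
    max_lag = int(max_shift / dt)
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = [np.sum(obs * np.roll(pred, lag)) for lag in lags]
    return lags[int(np.argmax(corrs))] * dt

# Toy example: the "predicted" waveform arrives 60 s too early.
dt = 15.0                                   # 15 s sampling of a tide-gauge record
t = np.arange(0, 3600, dt)
obs = np.exp(-((t - 1800) / 300) ** 2)      # observed pulse centred at t = 1800 s
pred = np.exp(-((t - 1740) / 300) ** 2)     # predicted pulse centred at t = 1740 s
print(best_time_shift(obs, pred, dt, max_shift=300))   # ~ +60 s
```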

  4. Improving Pulse Rate Measurements during Random Motion Using a Wearable Multichannel Reflectance Photoplethysmograph.

    PubMed

    Warren, Kristen M; Harvey, Joshua R; Chon, Ki H; Mendelson, Yitzhak

    2016-03-07

    Photoplethysmographic (PPG) waveforms are used to acquire pulse rate (PR) measurements from pulsatile arterial blood volume. PPG waveforms are highly susceptible to motion artifacts (MA), limiting the implementation of PR measurements in mobile physiological monitoring devices. Previous studies have shown that multichannel photoplethysmograms can successfully acquire diverse signal information during simple, repetitive motion, leading to differences in motion tolerance across channels. In this paper, we investigate the performance of a custom-built multichannel forehead-mounted photoplethysmographic sensor under a variety of intense motion artifacts. We introduce an advanced multichannel template-matching algorithm that chooses the channel with the least motion artifact to calculate PR for each time instant. We show that for a wide variety of random motion, channels respond differently to motion artifacts, and the multichannel estimate outperforms single-channel estimates in terms of motion tolerance, signal quality, and PR errors. We have acquired 31 data sets consisting of PPG waveforms corrupted by random motion and show that the accuracy of PR measurements achieved was increased by up to 2.7 bpm when the multichannel-switching algorithm was compared to individual channels. The percentage of PR measurements with error ≤ 5 bpm during motion increased by 18.9% when the multichannel switching algorithm was compared to the mean PR from all channels. Moreover, our algorithm enables automatic selection of the best signal fidelity channel at each time point among the multichannel PPG data.
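
    The channel-switching idea can be sketched as template matching: for each time window, compute a normalized correlation between every PPG channel and a pulse template and keep the channel with the highest score. The template and noise levels below are synthetic placeholders, not the study's data or its exact algorithm.

```python
import numpy as np

def select_best_channel(window, template):
    """Pick, for one time window, the PPG channel most similar to the pulse template.

    window: array (n_channels, n_samples); template: array (n_samples,).
    Returns the index of the channel with the highest normalized correlation,
    a simple stand-in for the multichannel switching logic described above.
    """
    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    scores = np.array([ncc(ch, template) for ch in window])
    return int(np.argmax(scores)), scores

# Toy example: channel 2 carries a clean pulse, the others are motion-corrupted.
rng = np.random.default_rng(5)
t = np.linspace(0, 2, 200)
template = np.sin(2 * np.pi * 1.2 * t)                 # ~72 bpm pulse template
channels = np.vstack([template + rng.standard_normal(200) * a for a in (1.5, 1.0, 0.1, 2.0)])
best, scores = select_best_channel(channels, template)
print(best, scores.round(2))                           # best should be channel 2
```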

  5. Seismic source parameters of the induced seismicity at The Geysers geothermal area, California, by a generalized inversion approach

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia

    2017-04-01

    The accurate determination of stress drop and seismic efficiency, and of how source parameters scale with earthquake size, is important for the seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of the waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear, sensitivity-driven inversion scheme that allows accurate estimation of the source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). We find for most of the events a non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fitted, and small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.

  6. Imaging electrical conductivity, permeability, and/or permittivity contrasts using the Born Scattering Inversion (BSI)

    NASA Astrophysics Data System (ADS)

    Darrh, A.; Downs, C. M.; Poppeliers, C.

    2017-12-01

    Born Scattering Inversion (BSI) of electromagnetic (EM) data is a geophysical imaging methodology for mapping weak conductivity, permeability, and/or permittivity contrasts in the subsurface. The high computational cost of full waveform inversion is reduced by adopting the First Born Approximation for scattered EM fields. This linearizes the inverse problem in terms of Born scattering amplitudes for a set of effective EM body sources within a 3D imaging volume. Estimation of scatterer amplitudes is subsequently achieved by solving the normal equations. Our present BSI numerical experiments entail Fourier transforming real-valued synthetic EM data to the frequency-domain, and minimizing the L2 residual between complex-valued observed and predicted data. We are testing the ability of BSI to resolve simple scattering models. For our initial experiments, synthetic data are acquired by three-component (3C) electric field receivers distributed on a plane above a single point electric dipole within a homogeneous and isotropic wholespace. To suppress artifacts, candidate Born scatterer locations are confined to a volume beneath the receiver array. Also, we explore two different numerical linear algebra algorithms for solving the normal equations: Damped Least Squares (DLS), and Non-Negative Least Squares (NNLS). Results from NNLS accurately recover the source location only for a large dense 3C receiver array, but fail when the array is decimated, or is restricted to horizontal component data. Using all receiver stations and all components per station, NNLS results are relatively insensitive to a sub-sampled frequency spectrum, suggesting that coarse frequency-domain sampling may be adequate for good target resolution. Results from DLS are insensitive to diminishing array density, but contain spatially oscillatory structure. DLS-generated images are consistently centered at the known point source location, despite an abundance of surrounding structure.
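
    The two solvers compared above, damped least squares (DLS) and non-negative least squares (NNLS), can be contrasted on a toy version of the Born scattering normal equations; the scattering matrix here is random rather than derived from EM Green's functions, and the damping value is arbitrary.

```python
import numpy as np
from scipy.optimize import nnls

# Toy problem: G maps scatterer amplitudes in the imaging volume to 3C
# electric-field data; the true model is a single positive "point" scatterer.
rng = np.random.default_rng(6)
n_data, n_cells = 300, 120
G = rng.standard_normal((n_data, n_cells))
m_true = np.zeros(n_cells)
m_true[37] = 1.0
d = G @ m_true + 0.02 * rng.standard_normal(n_data)

# DLS: minimize ||G m - d||^2 + eps^2 ||m||^2 via an augmented system.
eps = 0.5
G_aug = np.vstack([G, eps * np.eye(n_cells)])
d_aug = np.concatenate([d, np.zeros(n_cells)])
m_dls, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)

# NNLS: minimize ||G m - d||^2 subject to m >= 0.
m_nnls, _ = nnls(G, d)

print("DLS peak cell:", int(np.argmax(np.abs(m_dls))))
print("NNLS peak cell:", int(np.argmax(m_nnls)))
```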

  7. Doubling the spectrum of time-domain induced polarization by harmonic de-noising, drift correction, spike removal, tapered gating and data uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Olsson, Per-Ivar; Fiandaca, Gianluca; Larsen, Jakob Juul; Dahlin, Torleif; Auken, Esben

    2016-11-01

    The extraction of spectral information in the inversion process of time-domain (TD) induced polarization (IP) data is changing the use of the TDIP method. Data interpretation is evolving from a qualitative description of the subsurface, able only to discriminate the presence of contrasts in chargeability parameters, towards a quantitative analysis of the investigated media, which allows for detailed soil- and rock-type characterization. Two major limitations restrict the extraction of the spectral information of TDIP data in the field: (i) the difficulty of acquiring reliable early-time measurements, in the millisecond range, and (ii) the self-potential background drift in the measured potentials, which distorts the shape of the late-time IP responses, in the second range. Recent developments in TDIP acquisition equipment have given access to full-waveform recordings of the measured potentials and transmitted current, opening the way for a breakthrough in data processing. For measuring at early times, we developed a new method for removing the significant power-line noise contained in the data through a model-based approach, localizing the fundamental frequency of the power-line signal in the full-waveform IP recordings. By this, we cancel both the fundamental signal and its harmonics. Furthermore, an efficient processing scheme for identifying and removing spikes in TDIP data was developed. The noise cancellation and the de-spiking allow the use of earlier and narrower gates, down to a few milliseconds after the current turn-off. In addition, tapered windows are used in the final gating of the IP data, allowing the use of wider and overlapping gates for higher noise suppression with minimal distortion of the signal. For measuring at late times, we have developed an algorithm for removal of the self-potential drift. Usually constant or linear drift-removal algorithms are used, but these algorithms often fail to remove the background potentials present when the electrodes used for potential readings have previously been used for current injection, including for simple contact resistance measurements. We developed a drift-removal scheme that models the polarization effect and efficiently preserves the shape of the IP responses at late times. Uncertainty estimates are essential in the inversion of IP data. Therefore, in the final step of the data processing, we estimate the data standard deviation based on the data variability within the IP gates and the misfit of the background drift removal. Overall, the removal of harmonic noise, spikes and self-potential drift, the tapered windowing and the uncertainty estimation allow for doubling the usable range of TDIP data to almost four decades in time (corresponding to four decades in frequency), which will significantly advance the applicability of the IP method.
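
    Tapered gating of a full-waveform IP decay can be sketched as follows: each gate averages the signal under a cosine-tapered window, which allows wide, overlapping gates with little distortion of the decay shape. The gate layout, taper fraction, and decay model below are illustrative only, not the scheme's actual parameters.

```python
import numpy as np

def tapered_gates(signal, t, gate_edges, taper_frac=0.5):
    """Average a full-waveform IP decay into (possibly overlapping) tapered gates.

    gate_edges: list of (t_start, t_end) in seconds; taper_frac is the fraction
    of each gate occupied by the cosine ramps at its two ends.
    """
    out = []
    for t0, t1 in gate_edges:
        mask = (t >= t0) & (t < t1)
        n = mask.sum()
        w = np.ones(n)
        ramp = max(1, int(taper_frac * n / 2))
        hann = 0.5 - 0.5 * np.cos(np.pi * np.arange(ramp) / ramp)
        w[:ramp], w[-ramp:] = hann, hann[::-1]          # cosine-tapered window
        out.append(np.sum(signal[mask] * w) / np.sum(w))
    return np.array(out)

# Toy decay sampled at 1 ms with log-spaced, slightly overlapping gates.
t = np.arange(0.0, 4.0, 0.001)
decay = 100.0 * t.clip(1e-3) ** -0.4                    # power-law-like decay (mV/V)
edges = [(a, a * 2.5) for a in np.geomspace(0.002, 1.5, 10)]
print(tapered_gates(decay, t, edges).round(1))
```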

  8. Salvus: A flexible open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.

    2016-12-01

    Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 or 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics are supported (i.e. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.

  9. Determination of source process and the tsunami simulation of the 2013 Santa Cruz earthquake

    NASA Astrophysics Data System (ADS)

    Park, S. C.; Lee, J. W.; Park, E.; Kim, S.

    2014-12-01

    In order to understand the characteristics of large tsunamigenic earthquakes, we analyzed the source process of the 2013 Santa Cruz earthquake and simulated the resulting tsunami. We first estimated a fault length of about 200 km from the 3-day aftershock distribution and a source duration of about 110 seconds from the duration of high-frequency energy radiation (Hara, 2007). The moment magnitude was estimated to be 8.0 using the formula of Hara (2007). From the fault length of 200 km and the source duration of 110 seconds, we adopted an initial rupture velocity of 1.8 km/s for the teleseismic waveform inversions. The teleseismic body wave inversion was carried out using the inversion package of Kikuchi and Kanamori (1991). Teleseismic P waveform data from 14 stations were used, and a band-pass filter of 0.005-1 Hz was applied. Our best-fit solution indicates that the earthquake occurred on a northwesterly striking (strike = 305°) and shallowly dipping (dip = 13°) fault plane. The focal depth was determined to be 23 km, indicating a shallow event. A moment magnitude of 7.8 was obtained, somewhat smaller than the value obtained above and that of a previous study (Lay et al., 2013). A large slip area was seen around the hypocenter. Using the slip distribution obtained by the teleseismic waveform inversion, we calculated the surface deformation using the formulas of Okada (1985), assumed to be the initial sea-surface displacement of the tsunami. The tsunami simulation was then carried out using the Cornell Multi-grid Coupled Tsunami Model (COMCOT) code and 1-min grid bathymetry data from the General Bathymetric Chart of the Oceans (GEBCO). According to the tsunami simulation, most of the tsunami energy propagated towards the southwest and northeast, perpendicular to the fault strike. DART buoy data were used to verify our simulation. In the presentation, we will discuss the source process and tsunami simulation in more detail and compare them with the previous study.
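
    For reference, a short sketch of the back-of-envelope numbers quoted above: the initial rupture velocity implied by a 200 km fault rupturing in 110 s, and the standard Hanks-Kanamori moment-magnitude relation. Note that the abstract's Mw estimate actually uses the formula of Hara (2007), which is not reproduced here, and the seismic moment value below is purely illustrative.

        import math

        fault_length_km = 200.0     # from the 3-day aftershock distribution
        duration_s = 110.0          # from the high-frequency energy radiation

        # average rupture velocity implied by the abstract's numbers
        v_rupture = fault_length_km / duration_s          # ~1.8 km/s

        # standard moment-magnitude relation (Hanks & Kanamori, 1979), M0 in N*m
        def moment_magnitude(m0_newton_metre):
            return (2.0 / 3.0) * (math.log10(m0_newton_metre) - 9.1)

        print(round(v_rupture, 2))                        # 1.82
        print(round(moment_magnitude(1.26e21), 1))        # ~8.0 for an illustrative M0 of 1.26e21 N*m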

  10. Salvus: A flexible high-performance and open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high-order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g. viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a Python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
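
    The separation of physical equations from the numerical core via mixins, mentioned above, can be illustrated with a small Python sketch; the class names and the 1-D acoustic example below are hypothetical and are not the Salvus API.

        import numpy as np

        class TimeStepper:
            """Numerical core: leapfrog update, agnostic of the physics."""
            def run(self, u, v, dt, n_steps):
                for _ in range(n_steps):
                    a = self.acceleration(u)      # supplied by a physics mixin
                    v += dt * a
                    u += dt * v
                return u

        class AcousticPhysics:
            """Physics mixin: 1-D acoustic wave equation, c^2 * d2u/dx2."""
            def __init__(self, c, dx):
                self.c2, self.dx2 = c * c, dx * dx
            def acceleration(self, u):
                a = np.zeros_like(u)
                a[1:-1] = self.c2 * (u[2:] - 2 * u[1:-1] + u[:-2]) / self.dx2
                return a

        class AcousticSolver(AcousticPhysics, TimeStepper):
            """Domain scientists combine a physics mixin with the shared core."""
            pass

        solver = AcousticSolver(c=1500.0, dx=10.0)
        u0 = np.exp(-0.01 * (np.arange(200) - 100.0) ** 2)   # initial Gaussian pulse
        u = solver.run(u0, np.zeros(200), dt=1e-3, n_steps=500)

    The point of the pattern is that TimeStepper never needs to know which physics mixin supplies acceleration(), which is the kind of separation the abstract attributes to the design.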

  11. Seismic moment tensor inversion using 3D velocity model and its application to the 2013 Lushan earthquake sequence

    NASA Astrophysics Data System (ADS)

    Zhu, Lupei; Zhou, Xiaofeng

    2016-10-01

    Source inversion of small-magnitude events such as aftershocks or mine collapses requires the use of relatively high-frequency seismic waveforms, which are strongly affected by small-scale heterogeneities in the crust. In this study, we developed a new inversion method called gCAP3D for determining the general moment tensor of a seismic source using Green's functions of 3D models. It inherits the advantageous features of the "Cut-and-Paste" (CAP) method to break a full seismogram into the Pnl and surface-wave segments and to allow time shifts between observed and predicted waveforms. It uses a grid search over five source parameters (relative strengths of the isotropic and compensated-linear-vector-dipole components and the strike, dip, and rake of the double-couple component) that minimize the waveform misfit. The scalar moment is estimated using the ratio of L2 norms of the data and synthetics. Focal depth can also be determined by repeating the inversion at different depths. We applied gCAP3D to the 2013 Ms 7.0 Lushan earthquake and its aftershocks using a 3D crustal-upper mantle velocity model derived from ambient noise tomography in the region. We first relocated the events using the double-difference method. We then used the finite-difference method and the reciprocity principle to calculate Green's functions of the 3D model for 20 permanent broadband seismic stations within 200 km of the source region. We obtained moment tensors of the mainshock and 74 aftershocks ranging from Mw 5.2 to 3.4. The results show that the Lushan earthquake was a reverse-faulting event at a depth of 13-15 km on a plane dipping 40-47° towards N46°W. Most of the aftershocks occurred off the main rupture plane and have focal mechanisms similar to the mainshock's, except in the proximity of the mainshock, where the aftershocks' focal mechanisms display some variations.
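
    A minimal sketch of the grid-search idea with the scalar moment taken from the ratio of L2 norms, using a made-up forward function as a stand-in for synthetics computed from 3-D Green's functions. This is not the gCAP3D code, and the parameterization is illustrative only.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(0)

        def synthetic(strike, dip, rake, t):
            """Hypothetical stand-in for a waveform computed from 3-D Green's functions."""
            return (np.cos(np.radians(dip)) * np.sin(2 * np.pi * t / 10.0 + np.radians(strike))
                    + np.sin(2 * np.pi * t / 7.0 + np.radians(rake)))

        t = np.linspace(0.0, 60.0, 601)
        observed = 2.5e17 * synthetic(305, 40, 90, t) + 1e16 * rng.standard_normal(t.size)

        best = None
        for strike, dip, rake in product(range(0, 360, 5), range(5, 90, 5), range(-90, 95, 5)):
            s = synthetic(strike, dip, rake, t)
            m0 = np.linalg.norm(observed) / np.linalg.norm(s)   # scalar moment from L2-norm ratio
            misfit = np.linalg.norm(observed - m0 * s)
            if best is None or misfit < best[0]:
                best = (misfit, strike, dip, rake, m0)

        # should recover values close to (305, 40, 90) and M0 close to 2.5e17
        print("strike, dip, rake, M0:", best[1:])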

  12. Transdimensional inversion of scattered body waves for 1D S-wave velocity structure - Application to the Tengchong volcanic area, Southwestern China

    NASA Astrophysics Data System (ADS)

    Li, Mengkui; Zhang, Shuangxi; Bodin, Thomas; Lin, Xu; Wu, Tengfei

    2018-06-01

    Inversion of receiver functions is commonly used to recover the S-wave velocity structure beneath seismic stations. Traditional approaches are based on deconvolved waveforms, where the horizontal component of P-wave seismograms is deconvolved by the vertical component. Deconvolution of noisy seismograms is a numerically unstable process that needs to be stabilized by regularization parameters. This biases the noise statistics, making it difficult to estimate uncertainties in observed receiver functions for Bayesian inference. This study proposes a method to directly invert observed radial waveforms and to better account for data noise in a Bayesian formulation. We illustrate its feasibility with two synthetic tests with different types of noise added to the seismograms. Then, a real-site application is performed to obtain the 1-D S-wave velocity structure beneath a seismic station located in the Tengchong volcanic area, Southwestern China. Surface wave dispersion measurements spanning periods from 8 to 65 s are jointly inverted with the P waveforms. The results show a complex S-wave velocity structure, as two low-velocity zones are observed in the crust and uppermost mantle, suggesting the existence of magma chambers or zones of partial melt. The upper magma chamber may be the heat source that causes the thermal activity at the surface.
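
    To illustrate why the conventional deconvolution step needs a regularization parameter (the motivation for inverting the radial waveforms directly), here is a water-level deconvolution sketch on a toy trace. The water level and the Gaussian filter width are assumptions, and this is not the authors' method.

        import numpy as np

        def water_level_deconvolution(radial, vertical, dt, water_level=0.01, gauss_a=2.5):
            """Frequency-domain deconvolution R(w)/Z(w), stabilised by a water level."""
            n = len(radial)
            R = np.fft.rfft(radial, n)
            Z = np.fft.rfft(vertical, n)
            f = np.fft.rfftfreq(n, dt)
            denom = Z * np.conj(Z)
            denom = np.maximum(denom.real, water_level * denom.real.max())   # the regularisation step
            gauss = np.exp(-(2 * np.pi * f) ** 2 / (4.0 * gauss_a ** 2))     # low-pass to suppress noise
            return np.fft.irfft(R * np.conj(Z) / denom * gauss, n)

        # toy test: vertical = delta-like pulse, radial = scaled, delayed copy plus noise
        dt, n = 0.05, 1024
        rng = np.random.default_rng(1)
        vertical = np.zeros(n); vertical[100] = 1.0
        radial = 0.5 * np.roll(vertical, 40) + 0.02 * rng.standard_normal(n)
        rf = water_level_deconvolution(radial, vertical, dt)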

  13. Full-waveform inversion of surface waves in exploration geophysics

    NASA Astrophysics Data System (ADS)

    Borisov, D.; Gao, F.; Williamson, P.; Tromp, J.

    2017-12-01

    Full-waveform inversion (FWI) is a data-fitting approach to estimate high-resolution properties of the Earth from seismic data by minimizing the misfit between observed and calculated seismograms. In land seismics, a source at the ground surface generates high-amplitude surface waves, which generally represent most of the energy recorded by ground sensors. Although surface waves are widely used in global seismology and engineering studies, they are typically treated as noise within the seismic exploration community, since they mask deeper reflections from the intervals of exploration interest. This is mainly due to the fact that surface waves decay exponentially with depth and, for a typical frequency range (≈[5-50] Hz), sample only the very shallow part of the subsurface, but also because they are much more sensitive to S-wave than to P-wave velocities. In this study, we invert surface waves in the hope of using them as additional information for updating the near surface. In a heterogeneous medium, the main challenge of surface wave inversion is associated with their dispersive character, which makes it difficult to define a starting model for conventional FWI that avoids cycle-skipping. The standard approach to dealing with this is to invert the dispersion curves in the Fourier (f-k) domain to generate locally 1-D models, typically for the shear wavespeeds only. However, this requires that the near-surface zone be more or less horizontally invariant over a sufficient distance for the spatial Fourier transform to be applicable. In regions with significant topography, such as foothills, this is not the case, so we work in the time-space domain, but aim to minimize the differences of envelopes in the early stages of the inversion to resolve the cycle-skipping issue. Once the model is good enough, we switch to the classic waveform-difference inversion. We first present a few synthetic examples. We show that classical FWI can be trapped in a local minimum even for a relatively simple scenario, while FWI with envelopes is stable and can converge from an inaccurate starting model. We also perform a resolution analysis using a checkerboard test. We then present a field example. The final shear wavespeed model is compared to the results from the inversion of dispersion curves.
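
    A minimal sketch of the envelope-based misfit used to mitigate cycle-skipping: envelopes via the Hilbert transform, compared with the classical waveform difference on a time-shifted wavelet. The toy wavelet and lags are assumptions.

        import numpy as np
        from scipy.signal import hilbert

        def envelope(trace):
            """Instantaneous amplitude (envelope) of a seismic trace."""
            return np.abs(hilbert(trace))

        def envelope_misfit(obs, syn):
            """Least-squares misfit between envelopes: smoother and less prone to
            cycle-skipping than a direct waveform difference."""
            return 0.5 * np.sum((envelope(syn) - envelope(obs)) ** 2)

        def waveform_misfit(obs, syn):
            return 0.5 * np.sum((syn - obs) ** 2)

        # toy example: at a shift of half a period (0.1 s for a 5 Hz wavelet) the
        # waveform misfit is already near its maximum, while the envelope misfit stays small
        t = np.linspace(0.0, 4.0, 2001)
        wavelet = np.sin(2 * np.pi * 5.0 * t) * np.exp(-((t - 2.0) / 0.3) ** 2)
        obs = wavelet
        for lag in (0.0, 0.05, 0.1):
            syn = np.interp(t - lag, t, wavelet)
            print(lag, waveform_misfit(obs, syn), envelope_misfit(obs, syn))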

  14. Comparison of weighting techniques for acoustic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo

    2017-12-01

    To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points. The application of the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, while retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.
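
    A sketch of the wavefield-damping idea referred to above (often called a Laplace-Fourier transform): the trace is multiplied by exp(-sigma*t) before the spectrum is evaluated, which emphasises early arrivals and synthesizes low-frequency-like information. The damping constants and the toy trace are assumptions, and the causal-filter design of the paper is not reproduced.

        import numpy as np

        def damped_spectrum(trace, dt, sigma, freqs):
            """Laplace-Fourier transform of a trace: damp with exp(-sigma*t), then
            evaluate the Fourier integral at the requested frequencies."""
            t = np.arange(len(trace)) * dt
            damped = trace * np.exp(-sigma * t)          # damping emphasises early arrivals
            # discrete approximation of the Fourier integral at arbitrary frequencies
            return np.array([np.sum(damped * np.exp(-2j * np.pi * f * t)) * dt for f in freqs])

        # toy trace: two arrivals; heavier damping progressively suppresses the later one
        dt = 0.004
        t = np.arange(0, 4.0, dt)
        trace = np.exp(-((t - 0.8) / 0.05) ** 2) + 0.7 * np.exp(-((t - 2.5) / 0.05) ** 2)
        for sigma in (0.0, 1.0, 3.0):
            spec = damped_spectrum(trace, dt, sigma, freqs=[2.0, 5.0])
            print(sigma, np.abs(spec))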

  15. Inversion of ground-motion data from a seismometer array for rotation using a modification of Jaeger's method

    USGS Publications Warehouse

    Chi, Wu-Cheng; Lee, W.H.K.; Aston, J.A.D.; Lin, C.J.; Liu, C.-C.

    2011-01-01

    We develop a new way to invert 2D translational waveforms using Jaeger's (1969) formula to derive rotational ground motions about one axis and estimate the errors in them using techniques from statistical multivariate analysis. This procedure can be used to derive rotational ground motions and strains from arrayed translational data, thus providing an efficient way to calibrate the performance of rotational sensors. This approach does not require a priori information about the noise level of the translational data or the elastic properties of the media. The new procedure also provides estimates of the standard deviations of the derived rotations and strains. In this study, we validated the code using synthetic translational waveforms from a seismic array. The rotations obtained by inverting the synthetics were almost identical to the results derived using the well-tested inversion procedure of Spudich and Fletcher (2009). This 2D procedure can be applied three times to obtain the full, three-component rotations. Additional modifications can be made to the code in the future to study different features of the rotational ground motions and strains induced by the passage of seismic waves.
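
    One common array-based way to obtain rotation about the vertical axis from horizontal translational records, shown below as a hedged sketch, is to fit spatial gradients by least squares and take the antisymmetric part; this is not necessarily the modified Jaeger (1969) formulation used by the authors, and the array geometry and wavefield are synthetic.

        import numpy as np

        def rotation_about_z(x, y, ux, uy):
            """Least-squares estimate of the vertical-axis rotation
            0.5 * (d(uy)/dx - d(ux)/dy) from array displacements at one time sample."""
            # fit u(x, y) ~ u0 + a*x + b*y for each horizontal component
            G = np.column_stack([np.ones_like(x), x, y])
            coef_x, *_ = np.linalg.lstsq(G, ux, rcond=None)   # [u0, dux/dx, dux/dy]
            coef_y, *_ = np.linalg.lstsq(G, uy, rcond=None)   # [v0, duy/dx, duy/dy]
            return 0.5 * (coef_y[1] - coef_x[2])

        # synthetic field: uy grows linearly with x, giving a known rotation of 0.5*d(uy)/dx
        x = np.array([0.0, 100.0, 0.0, 100.0, 50.0])
        y = np.array([0.0, 0.0, 100.0, 100.0, 50.0])
        duy_dx = 1e-6
        ux = np.zeros_like(x)
        uy = duy_dx * x
        print(rotation_about_z(x, y, ux, uy))   # ~0.5e-6 rad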

  16. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motion and can also be induced by oil and gas production processes. In many oil/gas fields and hydrofracking operations, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply the newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of the event.
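
    A minimal random-walk Metropolis sketch of waveform-based Bayesian moment-tensor estimation with a Gaussian likelihood and a dummy linear forward operator; the authors' method additionally accounts for location and velocity-model uncertainties, which are omitted here, and all parameters below are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # dummy linear forward operator: waveforms = G @ m for the 6 moment-tensor components
        n_samples, n_m = 400, 6
        G = rng.standard_normal((n_samples, n_m))
        m_true = np.array([1.0, -0.5, 0.2, 0.8, -0.3, 0.1])
        sigma = 0.2
        data = G @ m_true + sigma * rng.standard_normal(n_samples)

        def log_likelihood(m):
            r = data - G @ m
            return -0.5 * np.sum(r * r) / sigma**2

        # random-walk Metropolis over the moment-tensor components (flat prior)
        m = np.zeros(n_m)
        logl = log_likelihood(m)
        chain = []
        for _ in range(20000):
            proposal = m + 0.02 * rng.standard_normal(n_m)
            logl_prop = log_likelihood(proposal)
            if np.log(rng.random()) < logl_prop - logl:
                m, logl = proposal, logl_prop
            chain.append(m.copy())

        posterior = np.array(chain[5000:])                # discard burn-in
        print(posterior.mean(axis=0), posterior.std(axis=0))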

  17. Advanced analysis of complex seismic waveforms to characterize the subsurface Earth structure

    NASA Astrophysics Data System (ADS)

    Jia, Tianxia

    2011-12-01

    This thesis includes three major parts: (1) body-wave analysis of mantle structure under the Calabria slab, (2) Spatial Average Coherency (SPAC) analysis of microtremor to characterize the subsurface structure in urban areas, and (3) surface wave dispersion inversion for shear-wave velocity structure. Although these three projects apply different techniques and investigate different parts of the Earth, their aims are the same: to better understand and characterize the subsurface Earth structure by analyzing complex seismic waveforms recorded at the Earth's surface. My first project is a body-wave analysis of mantle structure under the Calabria slab. Its aim is to better understand the subduction structure of the Calabria slab by analyzing seismograms generated by natural earthquakes. The rollback and subduction of the Calabrian Arc beneath the southern Tyrrhenian Sea is a case study of slab morphology and slab-mantle interactions at short spatial scale. I analyzed the seismograms traversing the Calabrian slab and the upper mantle wedge under the southern Tyrrhenian Sea through body-wave dispersion, scattering and attenuation, recorded during the PASSCAL CAT/SCAN experiment. Compressional body waves exhibit dispersion correlating with slab paths, that is, high-frequency arrivals are delayed relative to low-frequency arrivals. Body-wave scattering and attenuation are also spatially correlated with slab paths. I used this correlation to estimate the positions of slab boundaries, and further suggested that the observed spatial variation in near-slab attenuation could be ascribed to mantle flow patterns around the slab. My second project is Spatial Average Coherency (SPAC) analysis of microtremors for subsurface structure characterization. Shear-wave velocity (Vs) information in soil and rock is recognized as a critical parameter for site-specific ground motion prediction studies, which are highly necessary for urban areas located in seismically active zones. SPAC analysis of microtremors provides an efficient way to estimate Vs structure. Compared with other Vs estimation methods, SPAC is noninvasive and does not require any active sources, and therefore it is especially useful in big cities. I applied the SPAC method in two urban areas. The first is the historic city of Charleston, South Carolina, where high levels of seismic hazard lead to great public concern; accurate Vs information, therefore, is critical for seismic site classification and site response studies. The second SPAC study is in Manhattan, New York City, where the depths of the high-velocity contrast and of the soil-to-bedrock interface vary along the island. The two experiments show that Vs structure can be estimated with good accuracy using the SPAC method, as verified against borehole and other techniques. SPAC proved to be an effective technique for Vs estimation in urban areas. One important issue in seismology is the inversion of subsurface structure from surface recordings of seismograms. My third project focuses on solving this complex geophysical inverse problem, specifically the inversion of surface wave phase velocity dispersion curves for shear-wave velocity. In addition to standard linear inversion, I developed advanced inversion techniques including joint inversion using borehole data as constraints, and nonlinear inversion using Monte Carlo and simulated annealing algorithms. One innovative way of solving the inverse problem is to make inferences from the ensemble of all acceptable models. The statistical features of the ensemble provide a better way to characterize the Earth model.
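
    For the SPAC part, the key relation is that the azimuthally averaged coherency at station separation r follows a Bessel function of the phase velocity, rho(f, r) = J0(2*pi*f*r / c(f)). Below is a small sketch that grid-searches c(f) against synthetic coherencies; the separation, frequency band and dispersion curve are assumptions, and the search is restricted to the first, monotonic branch of J0 since the inverse is otherwise non-unique.

        import numpy as np
        from scipy.special import j0

        def spac_phase_velocity(freqs, coherency, r, c_trial=np.arange(300.0, 2000.0, 5.0)):
            """For each frequency, pick the phase velocity whose J0(2*pi*f*r/c)
            best matches the observed azimuthally averaged coherency.
            The trial range keeps 2*pi*f*r/c on the first (monotonic) branch of J0."""
            c_best = np.empty_like(freqs)
            for i, (f, rho) in enumerate(zip(freqs, coherency)):
                misfit = (j0(2.0 * np.pi * f * r / c_trial) - rho) ** 2
                c_best[i] = c_trial[np.argmin(misfit)]
            return c_best

        # synthetic test: coherencies generated from a known dispersion curve
        r = 30.0                                   # station separation (m)
        freqs = np.linspace(1.0, 6.0, 11)
        c_true = 800.0 - 40.0 * (freqs - 1.0)      # dispersive: slower at higher frequency
        coherency = j0(2.0 * np.pi * freqs * r / c_true)
        print(spac_phase_velocity(freqs, coherency, r))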

  18. Double-Difference Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Orsvuran, R.; Bozdag, E.; Lei, W.; Tromp, J.

    2017-12-01

    The adjoint method allows us to incorporate full waveform simulations in inverse problems. Misfit functions play an important role in extracting the relevant information from seismic waveforms. In this study, our goal is to apply the Double-Difference (DD) methodology proposed by Yuan et al. (2016) to global adjoint tomography. Dense seismic networks, such as USArray, lead to higher-resolution seismic images underneath continents. However, the imbalanced distribution of stations and sources poses challenges for global ray coverage. We adapt double-difference multitaper measurements to global adjoint tomography. We normalize each DD measurement by its number of pairs, and if a measurement has no pair, as may frequently happen for data recorded at oceanic stations, classical multitaper measurements are used. As a result, the differential measurements and the pair-wise weighting strategy help balance the uneven global kernel coverage. Our initial experiments with minor- and major-arc surface waves show promising results, revealing more pronounced structure near dense networks while reducing the prominence of paths towards clusters of stations. We have started using this new measurement in global adjoint inversions addressing azimuthal anisotropy in the upper mantle. Meanwhile, we are working on combining the double-difference approach with instantaneous phase measurements to emphasize the contributions of scattered waves in global inversions, and on extending it to body waves. We will present our results and discuss challenges and future directions in the context of global tomographic inversions.
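
    A stripped-down sketch of the double-difference idea in its simplest travel-time form: only differential delays between station pairs enter the misfit, so a common shift (e.g. an origin-time or source error) cancels, and each measurement is down-weighted by the number of pairs it participates in. The particular pair weighting below is an assumption, and the adjoint/multitaper machinery is omitted.

        import numpy as np
        from itertools import combinations

        def dd_misfit(dt_obs, dt_syn, pairs=None):
            """Double-difference misfit over differential delays between station pairs,
            with an assumed normalisation by the number of pairs per station."""
            n = len(dt_obs)
            if pairs is None:
                pairs = list(combinations(range(n), 2))
            n_pairs = np.zeros(n)
            for i, j in pairs:
                n_pairs[i] += 1
                n_pairs[j] += 1
            total = 0.0
            for i, j in pairs:
                dd = (dt_obs[i] - dt_obs[j]) - (dt_syn[i] - dt_syn[j])
                total += 0.5 * dd**2 / max(n_pairs[i] * n_pairs[j], 1.0)
            return total

        # a constant bias added to all observed delays leaves the DD misfit unchanged
        dt_syn = np.array([0.0, 0.3, -0.2, 0.1])
        dt_obs = dt_syn + np.array([0.05, -0.02, 0.03, 0.01])
        print(dd_misfit(dt_obs, dt_syn), dd_misfit(dt_obs + 1.0, dt_syn))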

  19. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as for the calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for average plane-layered models. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical- and radial-component Pnl; vertical- and radial-component Rayleigh waves; and transverse-component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface-wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the northeastern China/Korean Peninsula region, where the average plane-layered structure is well known and relatively laterally homogeneous. Secondly, we will consider the Middle East, where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows, we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of the seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.
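
    The linearized moment-tensor inversion reduces to a least-squares problem d = G m, where the columns of G hold synthetics for the six elementary moment tensors. In the sketch below G is random, standing in for Green's-function convolutions; it illustrates the linear algebra only, not the TDMT code.

        import numpy as np

        rng = np.random.default_rng(42)

        # columns of G: synthetic seismograms for the six elementary moment tensors
        n_samples, n_mt = 1200, 6
        G = rng.standard_normal((n_samples, n_mt))

        m_true = np.array([1.2, -0.7, -0.5, 0.3, 0.9, -0.1])       # Mxx, Myy, Mzz, Mxy, Mxz, Myz
        d = G @ m_true + 0.1 * rng.standard_normal(n_samples)       # observed waveforms + noise

        # linearised time-domain inversion: least-squares solution of d = G m
        m_est, residuals, rank, sv = np.linalg.lstsq(G, d, rcond=None)
        variance_reduction = 1.0 - np.sum((d - G @ m_est) ** 2) / np.sum(d ** 2)
        print(np.round(m_est, 2), round(float(variance_reduction), 3))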

  20. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful information about the reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by large data volumes and abundant information, and their inversion can recover detailed reservoir parameters. Owing to the large amount of pre-stack seismic data, existing single-machine environments are not able to meet the computational needs of such huge data volumes; thus, the development of an efficient and fast method for solving the inversion problem of pre-stack seismic data is urgently needed. Optimisation of the elastic parameters using a genetic algorithm easily falls into a local optimum, which results in poor inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operators of the algorithm, and the improved algorithm obtains better inversion results when tested on a model with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
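
    The Gardner formula mentioned above ties density to P-wave velocity, rho ≈ 0.31 Vp^0.25 (Vp in m/s, rho in g/cm^3), and can be used to seed the density of the initial population instead of drawing it independently. The perturbation scheme below is an assumption, not the paper's exact initialisation.

        import numpy as np

        rng = np.random.default_rng(7)

        def gardner_density(vp_ms):
            """Gardner et al. (1974) relation: rho [g/cm^3] ~ 0.31 * Vp^0.25, Vp in m/s."""
            return 0.31 * np.asarray(vp_ms) ** 0.25

        def initial_population(vp_start, pop_size=50, spread=0.05):
            """Seed a GA population around a starting Vp model; densities follow Gardner
            instead of being drawn independently, which narrows the search space."""
            individuals = []
            for _ in range(pop_size):
                vp = vp_start * (1.0 + spread * rng.standard_normal(len(vp_start)))
                individuals.append({"vp": vp, "rho": gardner_density(vp)})
            return individuals

        vp_start = np.array([2000.0, 2400.0, 2900.0, 3500.0])     # m/s, one value per layer
        pop = initial_population(vp_start)
        print(pop[0]["rho"])    # roughly [2.07, 2.17, 2.27, 2.38] g/cm^3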

  1. Resolution analysis of finite fault source inversion using one- and three-dimensional Green's functions 2. Combining seismic and geodetic data

    USGS Publications Warehouse

    Wald, D.J.; Graves, R.W.

    2001-01-01

    Using numerical tests for a prescribed heterogeneous earthquake slip distribution, we examine the importance of accurate Green's functions (GF) for finite fault source inversions which rely on coseismic GPS displacements and leveling line uplift alone and in combination with near-source strong ground motions. The static displacements, while sensitive to the three-dimensional (3-D) structure, are less so than seismic waveforms and thus are an important contribution, particularly when used in conjunction with waveform inversions. For numerical tests of an earthquake source and data distribution modeled after the 1994 Northridge earthquake, a joint geodetic and seismic inversion allows for reasonable recovery of the heterogeneous slip distribution on the fault. In contrast, inaccurate 3-D GFs or multiple 1-D GFs allow only partial recovery of the slip distribution given strong motion data alone. Likewise, using just the GPS and leveling line data requires significant smoothing for inversion stability, and hence, only a blurred vision of the prescribed slip is recovered. Although the half-space approximation for computing the surface static deformation field is no longer justifiable based on the high level of accuracy for current GPS data acquisition and the computed differences between 3-D and half-space surface displacements, a layered 1-D approximation to 3-D Earth structure provides adequate representation of the surface displacement field. However, even with the half-space approximation, geodetic data can provide additional slip resolution in the joint seismic and geodetic inversion provided a priori fault location and geometry are correct. Nevertheless, the sensitivity of the static displacements to the Earth structure begs caution for interpretation of surface displacements, particularly those recorded at monuments located in or near basin environments. Copyright 2001 by the American Geophysical Union.

  2. Advantages of the full-waveform inversion: real data example from the Polish Basin

    NASA Astrophysics Data System (ADS)

    Malinowski, M.; Operto, S.

    2006-12-01

    Modern acquisition techniques allow us to gather high-density seismic data even in the case of crustal-scale investigations. In combination with the increasing availability of computational resources (e.g. PC clusters), this allows us to image the Earth's structure on a much finer scale than offered by ray-theory-based methods (like traveltime tomography) by applying the full waveform inversion/tomography (FWT) method. Recently, the FWT method was for the first time successfully applied to real wide-aperture data: a 100-km-long OBS profile (Operto et al. 2006) and a 15-km-long land profile (Operto et al. 2004, Ravaut et al. 2004). We present the results of the application of the FWT method to the GRUNDY 2003 experiment data, which lies between the scales of the datasets mentioned above. This project was targeted at recognition of the pre-Zechstein strata within the Polish Basin. For a successful investigation, relatively low frequencies and wide apertures were used. In a 50 by 10 km rectangular area, ca. 800 RefTek 125 "Texan" stations with 4.5 Hz geophones were deployed, forming a high-density central line (receiver spacing 100 m) and 4 additional parallel profiles. Previously the data were modelled using conventional methods: CDP processing and traveltime tomography. In order to utilise secondary arrivals, we used the frequency-domain FWT method of Pratt et al. (1998). The wide-aperture content of our data leads to a redundant wavenumber coverage, which can be partially removed without loss of information by limiting the inversion to a few frequencies only. The inversion proceeds by stepping from low to high frequencies and uses the model inferred for one frequency as the starting model for the next. Before full waveform inversion, the data were preprocessed by QC editing, spectral deconvolution (whitening), bandpass filtering and muting in a narrow window around the first arrival. The traveltime tomogram was chosen as the starting model for the 2D waveform inversion. The model size was 50 x 10 km with a 25 m FD grid step. We selected 10 frequencies from 4 to 13 Hz. For each frequency, 10 iterations were computed (on a Linux cluster). There is a clear improvement in the resolution of the obtained tomographic images by exploiting the full wavefield. The model also predicts the observed seismograms fairly well and is consistent with both the geological horizons projected from industrial reflection profiles and a check-shot velocity log. The benefits of FWT in application to our data seem clear: in one step, without the need for forward ray-tracing modelling, we obtained both a quasi-structural image (perturbational model) and a detailed velocity model. In this way we fully exploited the broad range of recorded offsets and reflection angles, from pre- to postcritical, for successful imaging beneath the Zechstein strata.

  3. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  4. INVERSION OF SOURCE TIME FUNCTION USING BOREHOLE ARRAY SONIC WAVEFORMS. (R825225)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  5. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations of non-zero wavenumber, and the ability to operate in areas with high levels of source-signal spatial complexity and non-stationarity. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth. For an FDTD simulation, the time step must be fine enough to represent the highest frequency, while the total number of time steps must be large enough to cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. Our implementation addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that the use of a previous-generation CPU/GPU combination speeds up computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  6. The Sentinel-3 Surface Topography Mission (S-3 STM): Level 2 SAR Ocean Retracker

    NASA Astrophysics Data System (ADS)

    Dinardo, S.; Lucas, B.; Benveniste, J.

    2015-12-01

    The SRAL radar altimeter, on board the ESA Sentinel-3 (S-3) mission, has the capacity to operate either in the pulse-limited mode (also known as LRM) or in the novel Synthetic Aperture Radar (SAR) mode. Thanks to the initial results from SAR altimetry obtained by exploiting CryoSat-2 data, interest by the scientific community in this new technology has significantly increased, and consequently the definition of accurate processing methodologies (along with validation strategies) has assumed capital importance. In this paper, we present the algorithm proposed to retrieve the standard ocean geophysical parameters (ocean topography, wave height and sigma nought) from S-3 STM SAR return waveforms, and the validation results that have been achieved so far by exploiting CryoSat-2 data as well as simulated data. The inversion method (retracking) used to extract the geophysical information from the return waveform is a curve best-fitting scheme based on the bounded Levenberg-Marquardt Least-Squares Estimation Method (LEVMAR-LSE). The S-3 STM SAR ocean retracking algorithm adopts, as the return waveform model, the "SAMOSA" model [Ray et al., 2014], named after the R&D project SAMOSA (led by Satoc and funded by ESA) in which it was initially developed. The SAMOSA model is a physically based model that offers a complete description of a SAR altimeter return waveform from the ocean surface, expressed in the form of maps of reflected power in delay-Doppler space (also known as a stack) or expressed as multilooked echoes. SAMOSA is able to account for an elliptical antenna pattern, mispointing errors in roll and yaw, the surface scattering pattern, non-linear ocean wave statistics and spherical Earth surface effects. In spite of its truly comprehensive character, the SAMOSA model comes with a compact analytical formulation expressed in terms of modified Bessel functions. The specifications of the retracking algorithm have been gathered in a technical document (DPM) and delivered as the baseline for industrial implementation. For operational needs, thanks to fine tuning of the fitting library parameters and the use of a look-up table for Bessel function computation, the CPU execution time was reduced by more than a factor of 100, bringing execution on par with real time. In the course of the ESA-funded project CryoSat+ for Ocean (CP4O), new technical evolutions of the algorithm have been proposed (such as the use of a PTR-width look-up table and the application of a stack mask). One of the main outcomes of the CP4O project was that, with these latest evolutions, the SAMOSA SAR retracking gave results equivalent to the CNES CPP retracking prototype, which was built with a totally different approach; this reinforces the validation results. Work is currently underway to align the industrial implementation with these latest evolutions. Furthermore, in order to test the algorithm with a dataset as realistic as possible, a simulated test data set (generated by the S-3 STM End-to-End Simulator) has been created by CLS following the specifications described in a test data set requirements document drafted by ESA. In this work, we will show the baseline algorithm details, the evolutions, the impact of the evolutions, and the results obtained by processing the CryoSat-2 data and the simulated test data set.
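
    A conceptual sketch of retracking as bounded non-linear least squares, using a generic erf-based ocean-like waveform model as a stand-in for the SAMOSA model and SciPy's bounded trust-region solver as a stand-in for the bounded Levenberg-Marquardt scheme; the model, parameters and noise are all assumptions.

        import numpy as np
        from scipy.special import erf
        from scipy.optimize import least_squares

        def brown_like_waveform(gates, epoch, sigma_c, amplitude, decay):
            """Generic ocean-like return: erf leading edge + exponential plateau decay.
            This is a simplified stand-in, NOT the SAMOSA model."""
            leading = 0.5 * (1.0 + erf((gates - epoch) / (np.sqrt(2.0) * sigma_c)))
            trailing = np.exp(-decay * np.clip(gates - epoch, 0.0, None))
            return amplitude * leading * trailing

        gates = np.arange(128, dtype=float)
        rng = np.random.default_rng(3)
        true = brown_like_waveform(gates, epoch=60.0, sigma_c=3.0, amplitude=1.0, decay=0.01)
        observed = true * (1.0 + 0.05 * rng.standard_normal(gates.size))   # multiplicative speckle

        def residuals(p):
            return brown_like_waveform(gates, *p) - observed

        # bounded least squares: epoch -> range/SSH, sigma_c -> wave height, amplitude -> sigma0
        fit = least_squares(residuals, x0=[50.0, 2.0, 0.8, 0.02],
                            bounds=([0.0, 0.1, 0.0, 0.0], [127.0, 20.0, 10.0, 1.0]))
        print(np.round(fit.x, 3))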

  7. Comparison of Decision Assist and Clinical Judgment of Experts for Prediction of Lifesaving Interventions

    DTIC Science & Technology

    2015-03-01

    min of pulse oximeter photoplethysmograph waveforms and extracted features to predict LSIs. We compared this with clinical judgment of LSIs by...Curve (AUROC). We obtained clinical judgment of need for LSI from 405 expert clinicians in 135 trauma patients. The pulse oximeter algorithm...15 min of pulse oximeter waveforms predicts the need for LSIs during initial trauma resuscitation as accurately as judgment of expert trauma

  8. Power Analysis of an Enterprise Wireless Communication Architecture

    DTIC Science & Technology

    2017-09-01

    easily plug a satellite-based communication module into the enterprise processor when needed. Once plugged-in, it automatically runs the corresponding...reduce the SWaP by using a singular processing/computing module to run user applications and to implement waveform algorithms. This approach would...GPP) technology improved enough to allow a wide variety of waveforms to run in the GPP; thus giving rise to the SDR (Brannon 2004). Today's

  9. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in seismic reflection surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust and highly effective and of allowing global searching. Tests with both synthetic models and field data show that the algorithm is feasible and advantageous for Rayleigh wave inversion. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
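
    The core firefly update: attractiveness decays with squared distance, dimmer fireflies move towards brighter ones, and a decaying random walk maintains exploration. The sketch below applies it to a toy least-squares objective standing in for a dispersion-curve misfit; all parameter choices are assumptions.

        import numpy as np

        rng = np.random.default_rng(11)

        def firefly_minimise(objective, bounds, n_fireflies=25, n_iter=150,
                             beta0=1.0, gamma=1.0, alpha=0.25):
            lower, upper = np.asarray(bounds[0]), np.asarray(bounds[1])
            dim = len(lower)
            u = rng.random((n_fireflies, dim))                    # positions in the unit cube
            scale = lambda ui: lower + (upper - lower) * ui
            cost = np.array([objective(scale(ui)) for ui in u])
            for _ in range(n_iter):
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if cost[j] < cost[i]:                     # j is "brighter" (lower misfit)
                            r2 = np.sum((u[i] - u[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                            u[i] = np.clip(u[i] + beta * (u[j] - u[i])
                                           + alpha * (rng.random(dim) - 0.5), 0.0, 1.0)
                            cost[i] = objective(scale(u[i]))
                alpha *= 0.97                                     # gradually reduce the random walk
            best = int(np.argmin(cost))
            return scale(u[best]), cost[best]

        # toy "dispersion" misfit over a 3-layer S-wave velocity model (m/s)
        vs_true = np.array([250.0, 450.0, 800.0])
        objective = lambda vs: np.sum((vs - vs_true) ** 2)        # stand-in for a real misfit
        best, misfit = firefly_minimise(objective, (np.full(3, 100.0), np.full(3, 1200.0)))
        print(np.round(best, 1), misfit)                          # best model should approach vs_true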

  10. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    NASA Astrophysics Data System (ADS)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete the data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Here, the inversion is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data were used to demonstrate the practical value of this new fast inversion method.
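
    A generic sketch of a non-monotone gradient method, combining a Barzilai-Borwein step length with a Grippo-style non-monotone acceptance rule on a Tikhonov-regularised linear inverse problem. It illustrates the class of algorithm, not the authors' implementation; the problem sizes and constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        # Tikhonov-regularised linear inverse problem: min ||G m - d||^2 + lam ||m||^2
        n_data, n_model, lam = 300, 120, 1e-2
        G = rng.standard_normal((n_data, n_model))
        m_true = rng.standard_normal(n_model)
        d = G @ m_true + 0.05 * rng.standard_normal(n_data)

        def objective(m):
            r = G @ m - d
            return 0.5 * (r @ r + lam * m @ m)

        def gradient(m):
            return G.T @ (G @ m - d) + lam * m

        def nonmonotone_bb(m, n_iter=200, memory=10, sigma=1e-4):
            g = gradient(m)
            alpha = 1.0 / np.linalg.norm(g)                 # conservative first step
            history = [objective(m)]
            for _ in range(n_iter):
                # non-monotone acceptance: compare against the max of recent objectives,
                # so occasional increases are tolerated and convergence is faster
                f_ref = max(history[-memory:])
                while objective(m - alpha * g) > f_ref - sigma * alpha * (g @ g):
                    alpha *= 0.5
                m_new = m - alpha * g
                g_new = gradient(m_new)
                s, y = m_new - m, g_new - g
                alpha = (s @ s) / (s @ y) if s @ y > 0 else 1.0   # Barzilai-Borwein step
                m, g = m_new, g_new
                history.append(objective(m))
            return m, history

        m_est, hist = nonmonotone_bb(np.zeros(n_model))
        print(hist[0], hist[-1])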

  11. Resolving the fine-scale velocity structure of continental hyperextension at the Deep Galicia Margin using full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Davy, R. G.; Morgan, J. V.; Minshull, T. A.; Bayrakci, G.; Bull, J. M.; Klaeschen, D.; Reston, T. J.; Sawyer, D. S.; Lymer, G.; Cresswell, D.

    2018-01-01

    Continental hyperextension during magma-poor rifting at the Deep Galicia Margin is characterized by a complex pattern of faulting, thin continental fault blocks and the serpentinization, with local exhumation, of mantle peridotites along the S-reflector, interpreted as a detachment surface. In order to understand fully the evolution of these features, it is important to image the structure seismically and to model the velocity structure at the greatest resolution possible. Traveltime tomography models have revealed the long-wavelength velocity structure of this hyperextended domain, but are often insufficient to match accurately the short-wavelength structure observed in reflection seismic imaging. Here, we demonstrate the application of 2-D time-domain acoustic full-waveform inversion (FWI) to deep-water seismic data collected at the Deep Galicia Margin, in order to attain a high-resolution velocity model of continental hyperextension. We have used several quality assurance procedures to assess the velocity model, including comparison of the observed and modeled waveforms, checkerboard tests, testing of parameters and inversion strategy, and comparison with the migrated reflection image. Our final model exhibits an increase in the resolution of subsurface velocities, with particular improvement observed in the westernmost continental fault blocks, showing a clear rotation of the velocity field to match steeply dipping reflectors. Across the S-reflector there is a sharpening of the velocity contrast, with lower velocities beneath S indicative of preferential mantle serpentinization. This study supports the hypothesis that normal faulting acts to hydrate the upper-mantle peridotite, observed as a systematic decrease in seismic velocities, consistent with increased serpentinization. Our results confirm the feasibility of applying the FWI method to sparse, deep-water crustal data sets.

  12. First results from a full-waveform inversion of the African continent using Salvus

    NASA Astrophysics Data System (ADS)

    van Herwaarden, D. P.; Afanasiev, M.; Krischer, L.; Trampert, J.; Fichtner, A.

    2017-12-01

    We present the initial results from an elastic full-waveform inversion (FWI) of the African continent, embedded within the framework of the Collaborative Seismic Earth Model (CSEM) project. The continent of Africa is one of the most geophysically interesting regions on the planet. More specifically, Africa contains the Afar Depression, which is the only place on Earth where incipient seafloor spreading is sub-aerially exposed, along with other anomalous features such as the topography in the south, and several smaller surface expressions such as the Cameroon Volcanic Line and the Congo Basin. Despite its significance, relatively few tomographic images of Africa exist, and, as a result, the debate on the geophysical origins of Africa's anomalies is rich and ongoing. Tomographic imaging of Africa presents unique challenges due to uneven station coverage: while tectonically active areas such as the Afar rift are well sampled, much of the continent exhibits a severe lack of seismic stations. And, while Africa is mostly surrounded by tectonically active spreading plate boundaries, the interior of the continent is seismically quiet. To mitigate these issues, our simulation domain is extended to include earthquakes occurring in the South Atlantic and along the western edge of South America. Waveform modelling and inversion are performed using Salvus, a flexible and high-performance software suite based on the spectral-element method. Recently acquired recordings from the AfricaArray and NARS seismic networks are used to complement data obtained from global networks. We hope that this new model presents a fresh high-resolution image of African geodynamic structure and helps advance the debate regarding the causative mechanisms of its surface anomalies.

  13. A study on characterization of stratospheric aerosol and gas parameters with the spacecraft solar occultation experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1977-01-01

    Spacecraft remote sensing of stratospheric aerosol and ozone vertical profiles using the solar occultation experiment has been analyzed. A computer algorithm has been developed in which a two-step inversion of the simulated data can be performed. The radiometric data are first inverted into a vertical extinction profile using a linear inversion algorithm. Then the multiwavelength extinction profiles are solved with a nonlinear least-squares algorithm to produce aerosol and ozone vertical profiles. Examples of inversion results are shown, illustrating the resolution and noise sensitivity of the inversion algorithms.

  14. A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.

    PubMed

    Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P

    2013-01-01

    We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.
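
    A stripped-down greedy sketch in the spirit of the template-subtraction idea described above: the trace is modelled as a superposition of a unit-norm spike waveform plus noise, and spikes are added greedily wherever the projection onto the template exceeds a threshold, which allows overlapping spikes to be resolved. The Bernoulli prior, waveform variability and sub-sample timing of the actual binary pursuit algorithm are omitted, and the template, threshold and trace are assumptions.

        import numpy as np

        rng = np.random.default_rng(9)

        # single spike template (biphasic waveform) and a synthetic trace with two
        # overlapping spikes plus Gaussian background noise
        template = np.diff(np.exp(-0.5 * ((np.arange(41) - 20.0) / 4.0) ** 2))
        template /= np.linalg.norm(template)
        trace = 0.05 * rng.standard_normal(2000)
        for t0, amp in [(500, 1.0), (512, 0.8), (1400, 1.2)]:     # spikes at 500/512 overlap
            trace[t0:t0 + len(template)] += amp * template

        def greedy_spike_fit(trace, template, threshold=0.3, max_spikes=20):
            """Greedily add the spike whose subtraction most reduces the residual norm."""
            residual = trace.copy()
            spikes = []
            for _ in range(max_spikes):
                # projection of the residual onto the unit-norm template at every lag
                score = np.correlate(residual, template, mode="valid")
                t_best = int(np.argmax(score))
                amp = score[t_best]
                if amp < threshold:              # no remaining spike explains enough energy
                    break
                residual[t_best:t_best + len(template)] -= amp * template
                spikes.append((t_best, amp))
            return sorted(spikes), residual

        spikes, residual = greedy_spike_fit(trace, template)
        print(spikes)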

  15. A Model-Based Spike Sorting Algorithm for Removing Correlation Artifacts in Multi-Neuron Recordings

    PubMed Central

    Chichilnisky, E. J.; Simoncelli, Eero P.

    2013-01-01

    We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call “binary pursuit”. The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth. PMID:23671583

  16. A non-linear induced polarization effect on transient electromagnetic soundings

    NASA Astrophysics Data System (ADS)

    Hallbauer-Zadorozhnaya, Valeriya Yu.; Santarato, Giovanni; Abu Zeid, Nasser; Bignardi, Samuel

    2016-10-01

    In a TEM survey conducted to characterize the subsurface for geothermal purposes, a strong induced polarization effect was recorded in all collected data. Surprisingly, anomalous decay curves were obtained at some of the sites, with shapes that depended on the repetition frequency of the exciting square waveform, i.e. on the current pulse length. The Cole-Cole model, besides not being directly related to the physical parameters of rocks, was found inappropriate for modelling the observed distortion due to induced polarization, because this model is linear, i.e. it cannot fit any dependence on the current pulse. This phenomenon was investigated and explained as being due to the presence of membrane polarization linked to the constrictivity of (fresh) water-saturated pores. An algorithm for mathematical modeling of TEM data was then developed to fit this behavior. The case history is then discussed: 1D inversion, which accommodates the non-linear effects, produced models that agree quite satisfactorily with the resistivity and chargeability models obtained by an electrical resistivity tomography carried out for comparison.
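
    For reference, the (linear) Cole-Cole complex resistivity model that the authors found inadequate: rho(omega) = rho0 [1 - m (1 - 1/(1 + (i omega tau)^c))]. Being linear, it predicts the same response for any current pulse length, which is exactly the dependence the field data exhibited; the parameter values below are illustrative only.

        import numpy as np

        def cole_cole_resistivity(freq_hz, rho0, m, tau, c):
            """Complex resistivity of the (linear) Cole-Cole model:
            rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (1j*w*tau)**c)))."""
            w = 2.0 * np.pi * np.asarray(freq_hz)
            return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

        freqs = np.logspace(-2, 3, 6)                    # 0.01 Hz to 1 kHz
        rho = cole_cole_resistivity(freqs, rho0=100.0, m=0.2, tau=0.01, c=0.5)
        print(np.abs(rho))                               # amplitude falls from ~100 towards ~80 ohm-m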

  17. On-Line Corrosion Monitoring of Plate Structures Based on Guided Wave Tomography Using Piezoelectric Sensors.

    PubMed

    Rao, Jing; Ratassepp, Madis; Lisevych, Danylo; Hamzah Caffoor, Mahadhir; Fan, Zheng

    2017-12-12

    Corrosion is a major safety and economic concern to various industries. In this paper, a novel ultrasonic guided wave tomography (GWT) system based on self-designed piezoelectric sensors is presented for on-line corrosion monitoring of large plate-like structures. Accurate thickness reconstruction of corrosion damage is achieved by using the dispersive regimes of selected guided waves and a reconstruction algorithm based on full waveform inversion (FWI). The system makes use of an array of miniaturised piezoelectric transducers that are capable of exciting and receiving the highly dispersive A0 Lamb wave mode at low frequencies. The scattering from the transducer array has been found to have a small effect on the thickness reconstruction. The efficiency and accuracy of the new system have been demonstrated through continuous forced corrosion experiments. The FWI-reconstructed thicknesses show good agreement with analytical predictions obtained from Faraday's law and with laser measurements, and, more importantly, the thickness images closely resemble the actual corrosion sites.

  18. Ground penetrating radar antenna system analysis for prediction of earth material properties

    USGS Publications Warehouse

    Oden, C.P.; Wright, D.L.; Powers, M.H.; Olhoeft, G.

    2005-01-01

    The electrical properties of the ground directly beneath a ground penetrating radar (GPR) antenna very close to the earth's surface (ground-coupled) must be known in order to predict the antenna response. In order to investigate the changing antenna response with varying ground properties, a series of finite difference time domain (FDTD) simulations were made for a bi-static (fixed horizontal offset between transmitting and receiving antennas) antenna array over a homogeneous ground. We examine the viability of using an inversion algorithm based on the simulated received waveforms to estimate the material properties of the earth near the antennas. Our analysis shows that, for a constant antenna height above the earth, the amplitude of certain frequencies in the received signal can be used to invert for the permittivity and conductivity of the ground. Once the antenna response is known, the wave field near the antenna can be determined and sharper images of the subsurface near the antenna can be made. © 2005 IEEE.

  19. Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Fichtner, Andreas; Igel, Heiner

    2015-04-01

    We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. Parts of the eastern portion of the initial model consist of previous models by Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event, depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad-hoc scripts and one-time programs, and adopt sustainable and reusable solutions. Therefore we developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, modelling, iterative model updating, what happened during each iteration, and so on is systematically archived. This results in a provenance record of the final model which significantly enhances the reproducibility of iterative inversions. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.

  20. The NINJA-2 project: detecting and characterizing gravitational waveforms modelled using numerical binary black hole simulations

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Alemic, A.; Allen, B.; Allocca, A.; Amariutei, D.; Andersen, M.; Anderson, R.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barbet, M.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Bejger, M.; Beker, M. G.; Belczynski, C.; Bell, A. S.; Bell, C.; Bergmann, G.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biscans, S.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bloemen, S.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, Sukanta; Bosi, L.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Buchman, S.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burman, R.; Buskulic, D.; Buy, C.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Celerier, C.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C.; Colombini, M.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corpuz, A.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coughlin, S.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; Debreczeni, G.; Degallaix, J.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Donath, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dossa, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dwyer, S.; Eberle, T.; Edo, T.; Edwards, M.; Effler, A.; Eggenstein, H.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. 
P.; Flaminio, R.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Gaonkar, S.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Gräf, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hart, M.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hooper, S.; Hopkins, P.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jaranowski, P.; Ji, Y.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karlen, J.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Keiser, G. M.; Keitel, D.; Kelley, D. B.; Kells, W.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, K.; Kim, N.; Kim, N. G.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kremin, A.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. D.; Lawrie, C.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.-H.; Lee, H. K.; Lee, H. M.; Lee, J.; Leonardi, M.; Leong, J. R.; Le Roux, A.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B.; Lewis, J.; Li, T. G. F.; Libbrecht, K.; Libson, A.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M. J.; Lück, H.; Luijten, E.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macarthur, J.; Macdonald, E. P.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Mageswaran, M.; Maglione, C.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mangini, N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; McLin, K.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meinders, M.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyers, P.; Miao, H.; Michel, C.; Mikhailov, E. 
E.; Milano, L.; Milde, S.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Moesta, P.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palashov, O.; Palomba, C.; Pan, H.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poteomkin, A.; Powell, J.; Prasad, J.; Premachandra, S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Qin, J.; Quetschke, V.; Quintero, E.; Quiroga, G.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Ramirez, K.; Rapagnani, P.; Raymond, V.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Reid, S.; Reitze, D. H.; Rhoades, E.; Ricci, F.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sanders, J. R.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Scheuer, J.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Son, E. J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Stebbins, J.; Steinlechner, J.; Steinlechner, S.; Stephens, B. C.; Steplewski, S.; Stevenson, S.; Stone, R.; Stops, D.; Strain, K. A.; Straniero, N.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. 
J.; Venkateswara, K.; Verkindt, D.; Verma, S. S.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Wang, M.; Wang, X.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Williams, K.; Williams, L.; Williams, R.; Williams, T.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yang, Z.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, Fan; Zhang, L.; Zhao, C.; Zhu, X. J.; Zucker, M. E.; Zuraw, S.; Zweizig, J.; Boyle, M.; Brügmann, B.; Buchman, L. T.; Campanelli, M.; Chu, T.; Etienne, Z. B.; Hannam, M.; Healy, J.; Hinder, I.; Kidder, L. E.; Laguna, P.; Liu, Y. T.; London, L.; Lousto, C. O.; Lovelace, G.; MacDonald, I.; Marronetti, P.; Mösta, P.; Müller, D.; Mundim, B. C.; Nakano, H.; Paschalidis, V.; Pekowsky, L.; Pollney, D.; Pfeiffer, H. P.; Ponce, M.; Pürrer, M.; Reifenberger, G.; Reisswig, C.; Santamaría, L.; Scheel, M. A.; Shapiro, S. L.; Shoemaker, D.; Sopuerta, C. F.; Sperhake, U.; Szilágyi, B.; Taylor, N. W.; Tichy, W.; Tsatsin, P.; Zlochower, Y.

    2014-06-01

    The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a ‘blind injection challenge’ similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data was analysed by GW detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We find that the strong degeneracy between the mass ratio and the BHs’ angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10M⊙ + 10M⊙ (50M⊙ + 50M⊙) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ˜15% for 50M⊙ + 50M⊙ BBH coalescences with almost maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.

  1. Digital processing with single electrons for arbitrary waveform generation of current

    NASA Astrophysics Data System (ADS)

    Okazaki, Yuma; Nakamura, Shuji; Onomitsu, Koji; Kaneko, Nobu-Hisa

    2018-03-01

    We demonstrate arbitrary waveform generation of current using a GaAs-based single-electron pump. In our experiment, a digital processing algorithm known as delta-sigma modulation is incorporated into single-electron pumping to generate a density-modulated single-electron stream, by which we demonstrate the generation of arbitrary waveforms of current including sinusoidal, square, and triangular waves with a peak-to-peak amplitude of approximately 10 pA and an output bandwidth ranging from dc to close to 1 MHz. The developed current generator can be used as the precise and calculable current reference required for measurements of current noise in low-temperature experiments.
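
    The delta-sigma step described above can be illustrated with a short numerical sketch. The following Python snippet is only a schematic of pulse-density modulation under assumed numbers (a 100 MHz pump clock and a roughly 10 pA peak-to-peak sine target are illustrative choices, not values from the paper): a first-order delta-sigma loop converts the target current into a 0/1 trigger stream for the pump, and a moving average over the pulse density recovers the waveform.

    ```python
    import numpy as np

    def delta_sigma_bitstream(target, full_scale):
        """First-order delta-sigma modulator: convert a target waveform
        (in units of the full-scale pump current) into a 0/1 trigger
        stream whose local pulse density tracks the waveform."""
        bits = np.zeros_like(target, dtype=int)
        acc = 0.0  # integrator state
        for i, x in enumerate(target):
            acc += x / full_scale          # accumulate the normalized input
            bits[i] = 1 if acc >= 0.5 else 0
            acc -= bits[i]                 # subtract the emitted pulse
        return bits

    # Illustrative numbers only: 100 MHz pump clock, ~10 pA peak-to-peak sine.
    fs = 1.0e8                             # pump clock frequency (Hz), assumed
    e = 1.602176634e-19                    # elementary charge (C)
    i_full = e * fs                        # current if every cycle pumps one electron
    t = np.arange(200_000) / fs
    target = 5e-12 * (1.0 + np.sin(2 * np.pi * 1e3 * t))   # 0..10 pA sine

    bits = delta_sigma_bitstream(target, i_full)
    # A low-pass (moving average) over the pulse density recovers the waveform.
    recovered = np.convolve(bits, np.ones(2000) / 2000, mode="same") * i_full
    ```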

  2. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since kernels are stored on disk, it can be repeated many times for different regularization parameters without the need to solve the forward problem, making the approach accessible to Occam's method. Changes to the choice of misfit functional, weighting of data and selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion) which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern FORTRAN language using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.
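
    The modular control flow described above, three stages coupled only through thin interfaces, can be sketched schematically. The function names below (solve_forward, compute_kernels, update_model) are placeholders, not ASKI routines; in ASKI each stage is a separate program exchanging data through files, which the callables merely imitate.

    ```python
    from typing import Any, Callable, Dict, Sequence

    # Placeholder type aliases standing in for the dedicated interfaces.
    Model = Dict[str, Any]        # inversion-grid model, independent of the forward grid
    Wavefield = Dict[str, Any]    # whatever the chosen forward solver returns
    Kernels = Dict[str, Any]      # pre-integrated sensitivity kernels on the inversion grid

    def iterate_fwi(model: Model,
                    solve_forward: Callable[[Model], Wavefield],
                    compute_kernels: Callable[[Wavefield], Kernels],
                    update_model: Callable[[Model, Kernels, float], Model],
                    regularizations: Sequence[float] = (0.1, 1.0, 10.0),
                    n_iter: int = 5) -> Model:
        """Schematic scattering-integral FWI loop: the three stages only talk
        through their return values, so any stage can be swapped out."""
        for _ in range(n_iter):
            wavefield = solve_forward(model)        # stage 1: any forward solver
            kernels = compute_kernels(wavefield)    # stage 2: kernels, computed and stored once
            # Stage 3 can be re-run for several regularization weights (Occam-style)
            # without touching stages 1 and 2, because the kernels are already stored.
            trials = [update_model(model, kernels, reg) for reg in regularizations]
            model = trials[-1]  # placeholder: a real code selects by misfit/roughness trade-off
        return model
    ```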

  3. A masked least-squares smoothing procedure for artifact reduction in scanning-EMG recordings.

    PubMed

    Corera, Íñigo; Eciolaza, Adrián; Rubio, Oliver; Malanda, Armando; Rodríguez-Falces, Javier; Navallas, Javier

    2018-01-11

Scanning-EMG is an electrophysiological technique in which the electrical activity of the motor unit is recorded at multiple points along a corridor crossing the motor unit territory. Correct analysis of the scanning-EMG signal requires prior elimination of interference from nearby motor units. Although traditional processing based on median filtering is effective in removing such interference, it distorts the physiological waveform of the scanning-EMG signal. In this study, we describe a new scanning-EMG signal processing algorithm that preserves the physiological signal waveform while effectively removing interference from other motor units. To obtain a cleaned-up version of the scanning signal, the masked least-squares smoothing (MLSS) algorithm recalculates and replaces each sample value of the signal using a least-squares smoothing in the spatial dimension, taking into account the information of only those samples that are not contaminated with the activity of other motor units. The performance of the new algorithm is studied with simulated scanning-EMG signals, compared with that of the median algorithm, and tested with real scanning signals. Results show that the MLSS algorithm distorts the waveform of the scanning-EMG signal much less than the median algorithm (approximately 3.5 dB gain), while being very effective at removing interference components. Graphical Abstract: The raw scanning-EMG signal (left figure) is processed by the MLSS algorithm in order to remove the artifact interference. First, artifacts are detected from the raw signal, obtaining a validity mask (central figure) that determines the samples that have been contaminated by artifacts. Second, a least-squares smoothing procedure in the spatial dimension is applied to the raw signal using the uncontaminated samples according to the validity mask. The resulting MLSS-processed scanning-EMG signal (right figure) is free of artifact interference.
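
    A minimal sketch of the masked least-squares idea, reimplemented from the description above (the window half-width and polynomial order are arbitrary choices, not the authors' settings):

    ```python
    import numpy as np

    def mlss_1d(signal, valid_mask, half_window=5, order=2):
        """Masked least-squares smoothing along one (spatial) dimension.

        signal      : 1-D array of samples at a fixed time instant across the corridor
        valid_mask  : boolean array, True where the sample is NOT contaminated
        half_window : half-width of the spatial fitting window
        order       : polynomial order of the local least-squares fit
        """
        n = len(signal)
        smoothed = np.empty(n)
        x = np.arange(n, dtype=float)
        for i in range(n):
            lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
            idx = np.arange(lo, hi)[valid_mask[lo:hi]]
            if len(idx) <= order:              # not enough clean samples: keep original
                smoothed[i] = signal[i]
                continue
            coeffs = np.polyfit(x[idx] - i, signal[idx], order)  # local polynomial fit
            smoothed[i] = np.polyval(coeffs, 0.0)                # evaluate at the sample
        return smoothed

    # In a scanning-EMG record this would be applied independently at each time
    # sample, with valid_mask supplied by an artifact detector.
    ```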

  4. A Global Upper-Mantle Tomographic Model of Shear Attenuation

    NASA Astrophysics Data System (ADS)

    Karaoglu, H.; Romanowicz, B. A.

    2016-12-01

Mapping anelastic 3D structure within the earth's mantle is key to understanding present-day mantle dynamics, as it provides complementary constraints to those obtained from elastic structure, with the potential to distinguish between thermal and compositional heterogeneity. For this, we need to measure seismic wave amplitudes, which are sensitive to both elastic (through focusing and scattering) and anelastic structure. The elastic effects are less pronounced at long periods, so previous global upper-mantle attenuation models are based on teleseismic surface wave data, sometimes including overtones. In these studies, elastic effects are considered either indirectly, by eliminating data strongly contaminated by them (e.g. Romanowicz, 1995; Gung and Romanowicz, 2004), or by correcting for elastic focusing effects using an approximate linear approach (Dalton et al., 2008). Additionally, in these studies, the elastic structure is held fixed when inverting for intrinsic attenuation. The importance of (1) having a good starting elastic model, (2) accurate modeling of the seismic wavefield and (3) joint inversion for elastic and anelastic structure, becomes more evident as the targeted resolution level increases. Also, velocity dispersion effects due to anelasticity need to be taken into account. Here, we employ a hybrid full waveform inversion method, inverting jointly for global elastic and anelastic upper mantle structure, starting from the latest global 3D shear velocity model built by our group (French and Romanowicz, 2014), using the spectral element method for the forward waveform modeling (Capdeville et al., 2003), and normal-mode perturbation theory (NACT - Li and Romanowicz, 1995) for kernel computations. We present a 3D upper-mantle anelastic model built by using three-component fundamental and overtone surface waveforms down to 60 s as well as long-period body waveforms down to 30 s. We also include source and site effects to first order as frequency-independent scalar factors. The robustness of the inversion method is assessed through synthetic and resolution tests. We discuss salient features of the resulting anelastic model and in particular the well-resolved strong correlation with tectonics observed in the first 200 km of the mantle.

  5. Analysis and Simulation of 3D Scattering due to Heterogeneous Crustal Structure and Surface Topography on Regional Phases; Magnitude and Discrimination

    DTIC Science & Technology

    2009-07-07

inversion technique that is based on different weights for relatively high-frequency waveform modeling of Pnl and relatively long-period surface waves (Tan...et al., 2006). Pnl and surface waves are also allowed to shift in time to account for uncertainties in velocity structure. Joint...inversion of Pnl and surface waves provides better constraints on focal depth as well as source mechanisms. The pure strike-slip mechanism of the earthquake

  6. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  7. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreger, Douglas S.; Ford, Sean R.; Walter, William R.

Research was carried out investigating the feasibility of using a regional-distance seismic waveform moment tensor inverse procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results of the research indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions, and when compared to natural seismicity in the context of a Hudson et al. (1989) source-type diagram they are found to separate from populations of earthquakes and underground cavity collapse seismic sources.

  9. Earthquake Fingerprints: Representing Earthquake Waveforms for Similarity-Based Detection

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2016-12-01

New earthquake detection methods, such as Fingerprint and Similarity Thresholding (FAST), use fast approximate similarity search to identify similar waveforms in long-duration data without templates (Yoon et al. 2015). These methods have two key components: fingerprint extraction and an efficient search algorithm. Fingerprint extraction converts waveforms into fingerprints, compact signatures that represent short-duration waveforms for identification and search. Earthquakes are detected using an efficient indexing and search scheme, such as locality-sensitive hashing, that identifies similar waveforms in a fingerprint database. The quality of the search results, and thus the earthquake detection results, is strongly dependent on the fingerprinting scheme. Fingerprint extraction should map similar earthquake waveforms to similar waveform fingerprints to ensure a high detection rate, even under additive noise and small distortions. Additionally, fingerprints corresponding to noise intervals should be mutually dissimilar to minimize false detections. In this work, we compare the performance of multiple fingerprint extraction approaches for the earthquake waveform similarity search problem. We apply existing audio fingerprinting (used in content-based audio identification systems) and time series indexing techniques and present modified versions that are specifically adapted for seismic data. We also explore data-driven fingerprinting approaches that can take advantage of labeled or unlabeled waveform data. For each fingerprinting approach we measure its ability to identify similar waveforms in a low signal-to-noise setting, and quantify the trade-off between true and false detection rates in the presence of persistent noise sources. We compare the performance using known event waveforms from eight independent stations in the Northern California Seismic Network.
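
    To make the two components concrete, the sketch below pairs a toy fingerprint extractor (random sign projections of a window spectrum, one simple locality-sensitive hashing family) with a banded hash index for candidate lookup. It is a stand-in illustration, not FAST's actual spectral-image fingerprints or min-hash scheme; the window length and hash sizes are arbitrary.

    ```python
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)

    def fingerprint(window, planes):
        """Binary fingerprint of a waveform window via random hyperplane signs."""
        spec = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        spec /= np.linalg.norm(spec) + 1e-12          # amplitude-invariant feature
        return (planes @ spec > 0).astype(np.uint8)

    def build_index(waveform, win=512, step=256, n_bits=64, bands=8):
        """Hash every window into `bands` buckets (LSH banding)."""
        planes = rng.standard_normal((n_bits, win // 2 + 1))
        index = defaultdict(list)
        for start in range(0, len(waveform) - win, step):
            fp = fingerprint(waveform[start:start + win], planes)
            for b, chunk in enumerate(np.array_split(fp, bands)):
                index[(b, chunk.tobytes())].append(start)
        return index, planes

    def query(index, planes, window, bands=8):
        """Return candidate start times whose fingerprints share at least one band."""
        fp = fingerprint(window, planes)
        hits = set()
        for b, chunk in enumerate(np.array_split(fp, bands)):
            hits.update(index.get((b, chunk.tobytes()), []))
        return sorted(hits)

    data = rng.standard_normal(50_000)                       # stand-in continuous record
    index, planes = build_index(data)
    candidates = query(index, planes, data[10_240:10_752])   # should include 10240
    ```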

  10. Detection of sinkholes or anomalies using full seismic wave fields : phase II.

    DOT National Transportation Integrated Search

    2016-08-01

    A new 2-D Full Waveform Inversion (FWI) software code was developed to characterize layering and anomalies beneath the ground surface using seismic testing. The software is capable of assessing the shear and compression wave velocities (Vs and Vp) fo...

  11. Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study

    NASA Astrophysics Data System (ADS)

    Yang, Huachen; Zhang, Jianzhong

    2018-06-01

In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, the FWI of towed-streamer data easily converges to a local minimum solution due to the lack of low-frequency content. In this paper, we propose a new FWI technique using towed-streamer data, its integrated data sets and limited OBS data. Both the integrated towed-streamer seismic data and the OBS data have low-frequency components. Therefore, at early iterations in the new FWI technique, the OBS data combined with the integrated towed-streamer data sets reconstruct an appropriate background model. The towed-streamer seismic data then play a major role in later iterations, improving the resolution of the model. The new FWI technique is tested on numerical examples. The results show that when starting models are not accurate enough, the models inverted using the new FWI technique are superior to those inverted using conventional FWI.
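
    A minimal sketch of the staging idea, assuming a simple iteration-dependent weighting of the three misfit terms (the weights and schedule are invented for illustration and are not taken from the paper):

    ```python
    import numpy as np

    def misfit_weights(iteration, n_iter):
        """Early iterations emphasize OBS and integrated (low-frequency-rich)
        towed-streamer data; later iterations emphasize the raw streamer data."""
        frac = iteration / max(n_iter - 1, 1)          # 0 at start, 1 at end
        w_obs = 1.0 - 0.8 * frac                       # fades from 1.0 to 0.2
        w_integrated = 1.0 - frac                      # fades from 1.0 to 0.0
        w_streamer = frac                              # grows from 0.0 to 1.0
        return w_obs, w_integrated, w_streamer

    def total_misfit(res_obs, res_integrated, res_streamer, iteration, n_iter):
        """Weighted sum of least-squares misfits of the three data sets."""
        w1, w2, w3 = misfit_weights(iteration, n_iter)
        return (w1 * np.sum(res_obs ** 2)
                + w2 * np.sum(res_integrated ** 2)
                + w3 * np.sum(res_streamer ** 2))
    ```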

  12. Full-waveform Inversion for Localized 3-D S-velocity Structure in D" Beneath the Caribbean using US-Array Data

    NASA Astrophysics Data System (ADS)

    Borgeaud, A. F. E.; Konishi, K.; Kawai, K.; Geller, R. J.

    2015-12-01

The region beneath Central America is known to have significant lateral velocity heterogeneities from the upper mantle down to the lowermost mantle. It is also known for its long history of subducting oceanic plates and fragmented plate remnants that sank to the lowermost mantle (e.g., Ren et al., 2007). In this study, we use localized full-waveform inversion to invert for the 3-D S-velocity beneath the Caribbean. We use the DSM (Kawai et al., 2006) to compute 1-D synthetic seismograms and the first-order Born approximation to compute the partial derivatives for 3-D structure. We use a larger dataset with better coverage than Kawai et al. (2014), consisting of S and ScS phases from US-Array data for events in South America. The resulting 3-D model can contribute to understanding whether the cause of the velocity anomalies is thermal, chemical, or due to phase transitions.

  13. Lag compensation of optical fibers or thermocouples to achieve waveform fidelity in dynamic gas pyrometry

    NASA Technical Reports Server (NTRS)

    Warshawsky, I.

    1991-01-01

    Fidelity of waveform reproduction requires constant amplitude ratio and constant time lag of a temperature sensor's indication, at all frequencies of interest. However, heat-transfer type sensors usually cannot satisfy these requirements. Equations for the actual indication of a thermocouple and an optical-fiber pyrometer are given explicitly, in terms of sensor and flowing-gas properties. A practical, realistic design of each type of sensor behaves like a first-order system with amplitude-ratio attenuation inversely proportional to frequency when the frequency exceeds the corner frequency. Only at much higher frequencies does the amplitude-ratio attenuation for the optical fiber sensor become inversely proportional to the square root of the frequency. Design options for improving the frequency response are discussed. On-line electrical lag compensation, using a linear amplifier and a passive compensation network, can extend the corner frequency of the thermocouple 100-fold or more; a similar passive network can be used for the optical-fiber sensor. Design details for these networks are presented.
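
    The first-order behaviour and the effect of a lag-compensation network can be written down directly. The short sketch below evaluates the frequency response of a first-order sensor and of a lead network that extends the corner frequency by a chosen factor; the time constant and extension factor are arbitrary example values, not design values from the report.

    ```python
    import numpy as np

    tau = 0.1            # sensor time constant (s), example value
    k = 100.0            # corner-frequency extension factor of the compensator
    f = np.logspace(-1, 4, 500)          # frequency axis (Hz)
    w = 2 * np.pi * f

    H_sensor = 1.0 / (1.0 + 1j * w * tau)                       # first-order sensor
    H_comp = (1.0 + 1j * w * tau) / (1.0 + 1j * w * tau / k)    # lead (compensation) network
    H_total = H_sensor * H_comp                                 # compensated response

    # Above the original corner frequency 1/(2*pi*tau) the sensor alone rolls off
    # at -20 dB/decade; the compensated response stays flat up to roughly k/(2*pi*tau).
    mag_db = 20 * np.log10(np.abs(H_total))
    ```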

  14. Subsurface Void Characterization with 3-D Time Domain Full Waveform Tomography.

    NASA Astrophysics Data System (ADS)

    Nguyen, T. D.

    2017-12-01

A new three-dimensional full waveform inversion (3-D FWI) method is presented for subsurface site characterization at engineering scales (less than 30 m in depth). The method is based on a solution of 3-D elastic wave equations for forward modeling, and a cross-adjoint gradient approach for model updating. The staggered-grid finite-difference technique is used to solve the wave equations, together with implementation of the perfectly matched layer condition for boundary truncation. The gradient is calculated from the forward and backward wavefields. Reversed-in-time displacement residuals are applied as multiple sources at all receiver locations for the backward wavefield. The capability of the presented FWI method is tested on both synthetic and field experimental datasets. The test configuration uses 96 receivers and 117 shots at equal spacing (Fig 1). The inversion results from synthetic data show the ability to characterize variable low- and high-velocity layers with an embedded void (Figs 2-3). The synthetic study shows good potential for detection of voids and abnormalities in the field.
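
    The gradient construction from forward and time-reversed adjoint (backward) wavefields amounts, at its core, to a zero-lag cross-correlation at every grid point. The sketch below shows only that correlation step, omitting the parameter-specific kernel weights and boundary bookkeeping of an actual elastic FWI code.

    ```python
    import numpy as np

    def crosscorrelation_gradient(forward, adjoint, dt):
        """Zero-lag time correlation of forward and adjoint wavefields.

        forward, adjoint : arrays of shape (nt, nz, ny, nx) holding wavefield
                           snapshots; the adjoint field is driven by the
                           reversed-in-time data residuals at the receivers.
        Returns a gradient image of shape (nz, ny, nx).  Real FWI codes weight
        this correlation by material-parameter-specific kernels, omitted here.
        """
        return np.sum(forward * adjoint, axis=0) * dt

    # Toy shapes only: 100 time steps on a 20 x 20 x 20 grid.
    nt, n = 100, 20
    fwd = np.random.randn(nt, n, n, n)
    adj = np.random.randn(nt, n, n, n)
    grad = crosscorrelation_gradient(fwd, adj, dt=1e-3)
    ```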

  15. Forest Canopy Cover and Height from MISR in Topographically Complex Southwestern US Landscape Assessed with High Quality Reference Data

    NASA Technical Reports Server (NTRS)

    Chopping, Mark; North, Malcolm; Chen, Jiquan; Schaaf, Crystal B.; Blair, J. Bryan; Martonchik, John V.; Bull, Michael A.

    2012-01-01

This study addresses the retrieval of spatially contiguous canopy cover and height estimates in southwestern US forests via inversion of a geometric-optical (GO) model against surface bidirectional reflectance factor (BRF) estimates from the Multi-angle Imaging SpectroRadiometer (MISR). Model inversion can provide such maps if good estimates of the background bidirectional reflectance distribution function (BRDF) are available. The study area is in the Sierra National Forest in the Sierra Nevada of California. Tree number density, mean crown radius, and fractional cover reference estimates were obtained via analysis of QuickBird 0.6 m spatial resolution panchromatic imagery using the CANopy Analysis with Panchromatic Imagery (CANAPI) algorithm, while RH50, RH75 and RH100 (50%, 75% and 100% energy return) height data were obtained from the NASA Laser Vegetation Imaging Sensor (LVIS), a full waveform light detection and ranging (lidar) instrument. These canopy parameters were used to drive a modified version of the simple GO model (SGM), accurately reproducing patterns of MISR 672 nm band surface reflectance (mean RMSE 0.011, mean R2 0.82, N 1048). Cover and height maps were obtained through model inversion against MISR 672 nm reflectance estimates on a 250 m grid. The free parameters were tree number density and mean crown radius. RMSE values with respect to reference data for the cover and height retrievals were 0.05 and 6.65 m, respectively, with R2 values of 0.54 and 0.49. MISR can thus provide maps of forest cover and height in areas of topographic variation although refinements are required to improve retrieval precision.

  16. Two collateral problems in the framework of ground-penetrating radar data inversion: influence of the emitted waveform outline and radargram comparison.

    NASA Astrophysics Data System (ADS)

    Oliveira, Rui Jorge; Caldeira, Bento; Borges, José Fernando

    2017-04-01

Obtaining three-dimensional models of the physical properties of buried subsurface structures by inversion of GPR data is appealing to archaeology and a challenge to geophysics. Two major problems stand out in the search for solutions to this issue: 1) establishing the computational basis that allows the physical conditions under which the GPR wave was generated to be assigned numerically in the synthetic radargrams; and 2) automatically comparing the computed synthetic radargrams with the corresponding observed ones. The influence of the pulse shape on GPR data processing was one topic studied. The pulse shape emitted by GPR antennas was acquired experimentally, and this information was used in the deconvolution operation, carried out by an iterative process similar to the approach used in seismology to obtain receiver functions. To compare real and synthetic radargrams, automatic image adjustment algorithms were tested, which search for the best fit between two radargrams and quantify their differences through the calculation of the Normalized Root Mean Square Deviation (NRMSD). After the final tests, the NRMSD between the synthetic and real data is about 19% (initially it was 29%). These procedures are essential for performing an inversion of GPR data obtained in the field. Acknowledgment: This work is co-funded by the European Union through the European Regional Development Fund, included in the COMPETE 2020 (Operational Program Competitiveness and Internationalization) through the ICT project (UID/GEO/04683/2013) with the reference POCI-01-0145-FEDER-007690.
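
    The comparison metric itself is simple to state; the sketch below computes an NRMSD between an observed and a synthetic radargram, normalizing the root-mean-square deviation by the data range (the abstract does not specify the normalization, so this choice is an assumption).

    ```python
    import numpy as np

    def nrmsd(observed, synthetic):
        """Normalized root-mean-square deviation between two radargrams
        (2-D arrays of identical shape: traces x time samples)."""
        observed = np.asarray(observed, dtype=float)
        synthetic = np.asarray(synthetic, dtype=float)
        rmsd = np.sqrt(np.mean((observed - synthetic) ** 2))
        data_range = observed.max() - observed.min()
        return rmsd / data_range          # often quoted as a percentage

    # Example: a 19% misfit would correspond to nrmsd(obs, syn) of about 0.19.
    ```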

  17. Broadband optical frequency comb generator based on driving N-cascaded modulators by Gaussian-shaped waveform

    NASA Astrophysics Data System (ADS)

    Hmood, Jassim K.; Harun, Sulaiman W.

    2018-05-01

A new approach for realizing a wideband optical frequency comb (OFC) generator based on driving cascaded modulators with a Gaussian-shaped waveform is proposed and numerically demonstrated. The setup includes N cascaded MZMs, a single Gaussian-shaped waveform generator, and N-1 electrical time delayers. The first MZM is driven directly by the Gaussian-shaped waveform, while delayed replicas of the waveform drive the other MZMs. An analytical model of the proposed OFC generator is provided to study the effect of the number and chirp factor of the cascaded MZMs, as well as the pulse width, on the output spectrum. Optical frequency combs with a frequency spacing of 1 GHz are generated by applying the Gaussian-shaped waveform at pulse widths ranging from 200 to 400 ps. Our results reveal that the number of comb lines is inversely proportional to the pulse width and directly proportional to both the number and the chirp factor of the cascaded MZMs. At a pulse width of 200 ps and a chirp factor of 4, 67 frequency lines can be measured in the output spectrum of the two-cascaded-MZM setup. Increasing the number of cascaded stages to 3, 4, and 5 yields optical spectra with 89, 109 and 123 frequency lines, respectively. When the delay time is optimized, 61 comb lines with power fluctuations of less than 1 dB can be achieved for the five-cascaded-MZM setup.

  18. A waveform diversity method for optimizing 3-d power depositions generated by ultrasound phased arrays.

    PubMed

    Zeng, Xiaozheng Jenny; Li, Jian; McGough, Robert J

    2010-01-01

    A waveform-diversity-based approach for 3-D tumor heating is compared to spot scanning for hyperthermia applications. The waveform diversity method determines the excitation signals applied to the phased array elements and produces a beam pattern that closely matches the desired power distribution. The optimization algorithm solves the covariance matrix of the excitation signals through semidefinite programming subject to a series of quadratic cost functions and constraints on the control points. A numerical example simulates a 1444-element spherical-section phased array that delivers heat to a 3-cm-diameter spherical tumor located 12 cm from the array aperture, and the results show that waveform diversity combined with mode scanning increases the heated volume within the tumor while simultaneously decreasing normal tissue heating. Whereas standard single focus and multiple focus methods are often associated with unwanted intervening tissue heating, the waveform diversity method combined with mode scanning shifts energy away from intervening tissues where hotspots otherwise accumulate to improve temperature localization in deep-seated tumors.
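
    The optimization step, solving for a waveform covariance matrix by semidefinite programming so that the delivered power matches a desired pattern at control points, can be sketched with a generic convex-optimization package. The steering vectors, control points and power budget below are invented placeholders, and the formulation is only a generic covariance SDP written with cvxpy, not the authors' exact cost functions and constraints.

    ```python
    import numpy as np
    import cvxpy as cp

    n_elements = 32          # phased-array elements (toy size, not 1444)
    n_points = 10            # control points in the treatment region

    rng = np.random.default_rng(1)
    # Placeholder steering vectors a_k (element responses at control point k).
    A = (rng.standard_normal((n_elements, n_points))
         + 1j * rng.standard_normal((n_elements, n_points)))
    p_desired = np.ones(n_points)        # desired relative power at each control point

    R = cp.Variable((n_elements, n_elements), hermitian=True)   # waveform covariance
    power = cp.hstack([cp.real(cp.conj(A[:, k]) @ R @ A[:, k])  # a_k^H R a_k
                       for k in range(n_points)])

    objective = cp.Minimize(cp.sum_squares(power - p_desired))
    constraints = [R >> 0,                               # positive semidefinite covariance
                   cp.real(cp.trace(R)) <= n_elements]   # total transmit power budget
    cp.Problem(objective, constraints).solve()

    # The excitation signals would then be synthesized from the optimal R
    # (e.g. via its eigen-decomposition), the step the paper couples with mode scanning.
    ```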

  19. Categorisation of full waveform data provided by laser scanning devices

    NASA Astrophysics Data System (ADS)

    Ullrich, Andreas; Pfennigbauer, Martin

    2011-11-01

    In 2004, a laser scanner device for commercial airborne laser scanning applications, the RIEGL LMS-Q560, was introduced to the market, making use of a radical alternative approach to the traditional analogue signal detection and processing schemes found in LIDAR instruments so far: digitizing the echo signals received by the instrument for every laser pulse and analysing these echo signals off-line in a so-called full waveform analysis in order to retrieve almost all information contained in the echo signal using transparent algorithms adaptable to specific applications. In the field of laser scanning the somewhat unspecific term "full waveform data" has since been established. We attempt a categorisation of the different types of the full waveform data found in the market. We discuss the challenges in echo digitization and waveform analysis from an instrument designer's point of view and we will address the benefits to be gained by using this technique, especially with respect to the so-called multi-target capability of pulsed time-of-flight LIDAR instruments.

  20. Lithospheric structure of the Western Alps as seen by full-waveform inversion of CIFALPS teleseismic data

    NASA Astrophysics Data System (ADS)

    Beller, Stephen; Monteiller, Vadim; Operto, Stéphane; Nolet, Guust; Paul, Anne; Zhao, Liang

    2017-04-01

Full-waveform inversion (FWI) is a powerful but computationally intensive technique that aims to recover 3D multiparameter images of the subsurface by minimising the waveform difference between the full recorded and modelled seismograms. This method has recently been adapted and successfully applied in lithospheric settings by tackling teleseismic waveform modelling with hybrid methods. For each event, a global-scale simulation is performed once and for all to store the wavefield solutions on the edges of the lithospheric target. Then, for each modelling run involved in the FWI process, these global-scale solutions are injected into the lithospheric medium from its boundaries. We present the results of the application of teleseismic FWI to the data acquired by the CIFALPS experiment, which was conducted in the Western Alps to gain new insight into the lithospheric structure and geodynamic evolution of the Alpine range. Nine teleseismic events were inverted to infer 3D models of density, P-wave velocity and S-wave velocity of the crust and the upper mantle down to 200 km depth. Our models show clear evidence of continental subduction during the Alpine orogeny. They outline a dipping European Moho down to 75 km depth and finely delineate the geometry of the Ivrea body at the suture between the European and Adriatic plates. Deeper in the mantle, a slow S-wave velocity anomaly might indicate the location of the European slab detachment. Overall, FWI models give access to new seismic images that fill the resolution gap between smooth tomographic models and sharp receiver function images of the lithosphere and enable integrated interpretations of crustal and upper-mantle structures.

  1. Active backstop faults in the Mentawai region of Sumatra, Indonesia, revealed by teleseismic broadband waveform modeling

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Bradley, Kyle Edward; Wei, Shengji; Wu, Wenbo

    2018-02-01

    Two earthquake sequences that affected the Mentawai islands offshore of central Sumatra in 2005 (Mw 6.9) and 2009 (Mw 6.7) have been highlighted as evidence for active backthrusting of the Sumatran accretionary wedge. However, the geometry of the activated fault planes is not well resolved due to large uncertainties in the locations of the mainshocks and aftershocks. We refine the locations and focal mechanisms of medium size events (Mw > 4.5) of these two earthquake sequences through broadband waveform modeling. In addition to modeling the depth-phases for accurate centroid depths, we use teleseismic surface wave cross-correlation to precisely relocate the relative horizontal locations of the earthquakes. The refined catalog shows that the 2005 and 2009 "backthrust" sequences in Mentawai region actually occurred on steeply (∼60 degrees) landward-dipping faults (Masilo Fault Zone) that intersect the Sunda megathrust beneath the deepest part of the forearc basin, contradicting previous studies that inferred slip on a shallowly seaward-dipping backthrust. Static slip inversion on the newly-proposed fault fits the coseismic GPS offsets for the 2009 mainshock equally well as previous studies, but with a slip distribution more consistent with the mainshock centroid depth (∼20 km) constrained from teleseismic waveform inversion. Rupture of such steeply dipping reverse faults within the forearc crust is rare along the Sumatra-Java margin. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault separating imbricated material from the stronger Sunda lithosphere. Alternatively, the reverse faults may have originated as pre-Miocene normal faults of the extended continental crust of the western Sunda margin. Our waveform modeling approach can be used to further refine global earthquake catalogs in order to clarify the geometries of active faults.

  2. Adjoint tomography and centroid-moment tensor inversion of the Kanto region, Japan

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.

    2017-12-01

A three-dimensional seismic wave speed model in the Kanto region of Japan was developed using adjoint tomography based on large-scale computing. Starting with a model based on previous travel time tomographic results, we inverted the waveforms obtained at seismic broadband stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. The synthetic displacements were calculated using the spectral element method (SEM; e.g. Komatitsch and Tromp 1999; Peter et al. 2011) in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. The proposed model reveals several anomalous areas with extremely low Vs values in comparison with those of the initial model. The synthetic waveforms obtained using the newly proposed model for the selected earthquakes show a better fit to the observed waveforms than those of the initial model in different period ranges within 5-30 s. In the present study, all centroid times of the source solutions were determined using time shifts based on cross correlation, to avoid excessive computational cost before the structural inversion. Additionally, the parameters of the centroid-moment solutions were fully determined using the SEM assuming the 3D structure (e.g. Liu et al. 2004). As a preliminary result, the new solutions were essentially the same as their initial solutions. This may indicate that the 3D structure has little effect on the source estimation. Acknowledgements: This study was supported by JSPS KAKENHI Grant Number 16K21699.
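
    The centroid-time adjustment by cross-correlation mentioned above is a standard operation; a minimal sketch for uniformly sampled observed and synthetic traces follows (the sampling interval and pulse shapes are purely illustrative).

    ```python
    import numpy as np

    def centroid_time_shift(observed, synthetic, dt):
        """Time shift (s) that best aligns the synthetic with the observed trace,
        estimated from the peak of their cross-correlation."""
        xcorr = np.correlate(observed, synthetic, mode="full")
        lag = np.argmax(xcorr) - (len(synthetic) - 1)  # samples; positive = observed lags synthetic
        return lag * dt

    # Example with a known 0.5 s delay of the observed pulse at 10 Hz sampling:
    dt = 0.1
    t = np.arange(0, 60, dt)
    syn = np.exp(-((t - 20.0) / 3.0) ** 2)
    obs = np.exp(-((t - 20.5) / 3.0) ** 2)
    print(centroid_time_shift(obs, syn, dt))           # ~0.5
    ```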

  3. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2013-01-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low‒amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross‒correlation technique, followed by a Self‒Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal‒to‒noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal‒to‒noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.

  4. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2013-09-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low-amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross-correlation technique, followed by a Self-Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being "semiautomated". We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal-to-noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal-to-noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.
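
    The data-reduction stage shared by the two records above, an envelope cross-correlation across stations, can be sketched generically. The band limits, smoothing window, segment length and threshold below are invented illustration values, not the calibration used in the study.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope(trace, fs, band=(2.0, 8.0)):
        """Band-pass filter a trace and return its smoothed Hilbert envelope."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, trace)
        env = np.abs(hilbert(filtered))
        smooth = int(fs)                       # ~1 s moving average
        return np.convolve(env, np.ones(smooth) / smooth, mode="same")

    def window_detections(traces, fs, win_s=120.0, threshold=0.5):
        """Flag windows whose median inter-station envelope correlation is high."""
        envs = np.array([envelope(tr, fs) for tr in traces])
        nwin = int(win_s * fs)
        flags = []
        for start in range(0, envs.shape[1] - nwin, nwin):
            seg = envs[:, start:start + nwin]
            cc = np.corrcoef(seg)                      # station-by-station correlations
            upper = cc[np.triu_indices_from(cc, k=1)]  # off-diagonal station pairs
            flags.append((start / fs, np.median(upper) > threshold))
        return flags
    ```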

  5. Gaussian mixture model based identification of arterial wall movement for computation of distension waveform.

    PubMed

    Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram

    2015-01-01

    This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using Radio Frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired using a prototype ultrasound system from an artery mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparing with existing wall tracking algorithms. The experimental results show that the proposed method provides 20% reduction in the error margin compared to the existing approaches in tracking the arterial wall movement. This approach coupled with ultrasound system can be used to estimate the arterial compliance parameters required for screening of cardiovascular related disorders.
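
    As a rough illustration of how a Gaussian mixture can isolate the wall echo and yield a distension waveform, the sketch below fits scikit-learn's GaussianMixture to (depth, envelope amplitude) features of each RF frame and tracks the mean depth of the brightest component. The feature choice, component count and selection rule are assumptions made for illustration, not the model described in the paper.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from sklearn.mixture import GaussianMixture

    def wall_position(rf_frame, fs, c=1540.0, n_components=2):
        """Estimate wall depth (m) in one RF A-line using a GMM on
        (depth, normalized envelope amplitude) features; the component with the
        highest mean amplitude is taken as the wall echo."""
        env = np.abs(hilbert(rf_frame))
        depth = np.arange(len(rf_frame)) * c / (2.0 * fs)       # two-way travel time
        features = np.column_stack([depth, env / env.max()])
        gmm = GaussianMixture(n_components=n_components, random_state=0).fit(features)
        wall = np.argmax(gmm.means_[:, 1])          # component with the brightest mean
        return gmm.means_[wall, 0]                  # its mean depth

    def distension_waveform(rf_frames, fs):
        """Track the wall across frames; the distension waveform is the
        frame-to-frame variation of the estimated wall depth."""
        positions = np.array([wall_position(frame, fs) for frame in rf_frames])
        return positions - positions.mean()
    ```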

  6. Full-waveform data for building roof step edge localization

    NASA Astrophysics Data System (ADS)

    Słota, Małgorzata

    2015-08-01

    Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.

  7. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

Building on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. Static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed within the inversion. The method is fully automated, adds no extra cost, and avoids additional field work and manual processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We parallelized the algorithm, improved its computational efficiency, reduced its memory requirements, and added topographic and marine factors, so the 3D inversion can run on an ordinary PC with high efficiency and accuracy. MT data from surface stations, seabed stations and underground stations can all be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. The comparison in Figure 1 shows that the inversion model reflects all the anomalous bodies and the terrain clearly, regardless of the type of data used (impedance, tipper, or impedance and tipper), and that the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for inversion with topography, making it useful for studies of the continental shelf with continuous land, marine and underground exploration. The three-dimensional electrical model of the ore zone reflects the basic information on strata, rocks and structure. Although it cannot indicate the ore body position directly, it provides important clues for prospecting by delineating the uplift range of the diorite pluton. The test results show that high-quality data processing and an efficient inversion method for electromagnetic data are an important guarantee for porphyry ore exploration.
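
    The NLCG machinery referred to above is standard; a generic Polak-Ribiere nonlinear conjugate gradient loop is sketched below, with the objective and gradient (which in the MT case come from 3-D forward modelling and adjoint computations) abstracted as callables and a crude backtracking line search standing in for a real one.

    ```python
    import numpy as np

    def nlcg(m0, objective, gradient, n_iter=20, step0=1.0):
        """Generic Polak-Ribiere nonlinear conjugate gradient minimizer."""
        m = np.asarray(m0, dtype=float)
        g = gradient(m)
        d = -g                                    # first direction: steepest descent
        for _ in range(n_iter):
            step, f0 = step0, objective(m)        # crude backtracking line search
            while objective(m + step * d) > f0 and step > 1e-12:
                step *= 0.5
            m = m + step * d
            g_new = gradient(m)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere, with restart
            d = -g_new + beta * d
            g = g_new
        return m

    # Toy usage on a quadratic objective:
    A = np.diag([1.0, 10.0, 100.0])
    f = lambda m: 0.5 * m @ A @ m
    gradf = lambda m: A @ m
    print(nlcg(np.ones(3), f, gradf))            # converges toward the zero vector
    ```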

  8. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    NASA Astrophysics Data System (ADS)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

In recent years, full waveform inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; the final subsurface velocity model is then generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by the P-velocity; in elastic media, however, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. The elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite-difference method was applied to simulate an OBS survey. In the inversion, the l2-norm was used as the objective function; the gradient direction was computed accurately using the back-propagation technique and scaled using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameterization that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.
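
    Switching between the candidate parameterizations is fixed by elasticity; the helper below converts between (Vp, Vs, ρ), (λ, μ, ρ) and the impedances (PI, SI), which is what changing the inversion parameter set amounts to at the model level.

    ```python
    import numpy as np

    def velocities_to_lame(vp, vs, rho):
        """(Vp, Vs, rho) -> (lambda, mu, rho)."""
        mu = rho * vs ** 2
        lam = rho * vp ** 2 - 2.0 * mu
        return lam, mu, rho

    def lame_to_velocities(lam, mu, rho):
        """(lambda, mu, rho) -> (Vp, Vs, rho)."""
        vp = np.sqrt((lam + 2.0 * mu) / rho)
        vs = np.sqrt(mu / rho)
        return vp, vs, rho

    def velocities_to_impedance(vp, vs, rho):
        """(Vp, Vs, rho) -> (PI, SI) = (rho*Vp, rho*Vs)."""
        return rho * vp, rho * vs

    # Example: a rock with Vp = 4000 m/s, Vs = 2300 m/s, rho = 2500 kg/m^3.
    lam, mu, rho = velocities_to_lame(4000.0, 2300.0, 2500.0)
    ```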

  9. The effects of core-reflected waves on finite fault inversions with teleseismic body wave data

    NASA Astrophysics Data System (ADS)

    Qian, Yunyi; Ni, Sidao; Wei, Shengji; Almeida, Rafael; Zhang, Han

    2017-11-01

    Teleseismic body waves are essential for imaging rupture processes of large earthquakes. Earthquake source parameters are usually characterized by waveform analyses such as finite fault inversions using only turning (direct) P and SH waves without considering the reflected phases from the core-mantle boundary (CMB). However, core-reflected waves such as ScS usually have amplitudes comparable to direct S waves due to the total reflection from the CMB and might interfere with the S waves used for inversion, especially at large epicentral distances for long duration earthquakes. In order to understand how core-reflected waves affect teleseismic body wave inversion results, we develop a procedure named Multitel3 to compute Green's functions that contain turning waves (direct P, pP, sP, direct S, sS and reverberations in the crust) and core-reflected waves (PcP, pPcP, sPcP, ScS, sScS and associated reflected phases from the CMB). This ray-based method can efficiently generate synthetic seismograms for turning and core-reflected waves independently, with the flexibility to take into account the 3-D Earth structure effect on the timing between these phases. The performance of this approach is assessed through a series of numerical inversion tests on synthetic waveforms of the 2008 Mw7.9 Wenchuan earthquake and the 2015 Mw7.8 Nepal earthquake. We also compare this improved method with the turning-wave only inversions and explore the stability of the new procedure when there are uncertainties in a priori information (such as fault geometry and epicentre location) or arrival time of core-reflected phases. Finally, a finite fault inversion of the 2005 Mw8.7 Nias-Simeulue earthquake is carried out using the improved Green's functions. Using enhanced Green's functions yields better inversion results as expected. While the finite source inversion with conventional P and SH waves is able to recover large-scale characteristics of the earthquake source, by adding PcP and ScS phases, the inverted slip model and moment rate function better match previous results incorporating field observations, geodetic and seismic data.

  10. Three-dimensional NDE of VHTR core components via simulation-based testing. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzina, Bojan; Kunerth, Dennis

    2014-09-30

    A next generation, simulation-driven-and-enabled testing platform is developed for the 3D detection and characterization of defects and damage in nuclear graphite and composite structures in Very High Temperature Reactors (VHTRs). The proposed work addresses the critical need for the development of high-fidelity Non-Destructive Examination (NDE) technologies for as-manufactured and replaceable in-service VHTR components. Centered around the novel use of elastic (sonic and ultrasonic) waves, this project deploys a robust, non-iterative inverse solution for 3D defect reconstruction together with a non-contact, laser-based approach to the measurement of experimental waveforms in VHTR core components. In particular, this research (1) deploys three-dimensional Scanning Laser Doppler Vibrometry (3D SLDV) as a means to accurately and remotely measure 3D displacement waveforms over the accessible surface of a VHTR core component excited by a mechanical vibratory source; (2) implements a powerful new inverse technique, based on the concept of Topological Sensitivity (TS), for non-iterative elastic waveform tomography of internal defects that permits robust 3D detection, reconstruction and characterization of discrete damage (e.g. holes and fractures) in nuclear graphite from limited-aperture NDE measurements; (3) implements a state-of-the-art computational (finite element) model for accurately simulating elastic wave propagation in 3D blocks of nuclear graphite; (4) integrates the SLDV testing methodology with the TS imaging algorithm into a non-contact, high-fidelity NDE platform for the 3D reconstruction and characterization of defects and damage in VHTR core components; and (5) applies the proposed methodology to VHTR core component samples (both two- and three-dimensional) with a priori induced, discrete damage in the form of holes and fractures. Overall, the newly established SLDV-TS testing platform represents a next-generation NDE tool that surpasses all existing techniques for the 3D ultrasonic imaging of material damage from non-contact, limited-aperture waveform measurements. Outlook. The next stage in the development of this technology includes items such as (a) non-contact generation of mechanical vibrations in VHTR components via thermal expansion created by a high-intensity laser; (b) development and incorporation of the Synthetic Aperture Focusing Technique (SAFT) to elevate the accuracy of 3D imaging in highly noisy environments with minimal accessible surface; (c) further analytical and computational developments to facilitate the reconstruction of diffuse damage (e.g. microcracks) in nuclear graphite, as such damage leads to the dispersion of elastic waves; (d) model updating for accurate tracking of the evolution of material damage via periodic inspections; (e) adoption of a Bayesian framework to obtain information on the certainty of the obtained images; and (f) optimization of the computational scheme toward real-time, model-based imaging of damage in VHTR core components.

  11. Interstation phase speed and amplitude measurements of surface waves with nonlinear waveform fitting: application to USArray

    NASA Astrophysics Data System (ADS)

    Hamada, K.; Yoshizawa, K.

    2015-09-01

    A new method of fully nonlinear waveform fitting to measure interstation phase speeds and amplitude ratios is developed and applied to USArray. The Neighbourhood Algorithm is used as a global optimizer, which efficiently searches for model parameters that fit two observed waveforms on a common great-circle path by modulating the phase and amplitude terms of the fundamental-mode surface waves. We introduce a reliability parameter that represents how well the waveforms at two stations can be fitted in a time-frequency domain, which is used as a data selection criterion. The method is applied to observed waveforms of USArray for seismic events in the period from 2007 to 2010 with moment magnitude greater than 6.0. We collect a large number of phase speed data (about 75 000 for Rayleigh and 20 000 for Love) and amplitude ratio data (about 15 000 for Rayleigh waves) in a period range from 30 to 130 s. The majority of the interstation distances of the measured dispersion data are less than 1000 km, which is much shorter than the typical average path-length of conventional single-station measurements for source-receiver pairs. The phase speed models for Rayleigh and Love waves show good correlations on large scales with recent tomographic maps derived from different approaches for phase speed mapping; for example, significant slow anomalies in volcanic regions in the western United States and fast anomalies in the cratonic region. Local-scale phase speed anomalies corresponding to the major tectonic features in the western United States, such as the Snake River Plains, Basin and Range, Colorado Plateau and Rio Grande Rift, have also been identified clearly in the phase speed models. The short-path information derived from our interstation measurements helps to increase the achievable horizontal resolution. We have also performed joint inversions for phase speed maps using the measured phase and amplitude ratio data of vertical-component Rayleigh waves. These maps exhibit better recovery of phase speed perturbations, particularly where strong lateral velocity gradients exist and the effects of elastic focussing can be significant; that is, the Yellowstone hotspot, Snake River Plains, and Rio Grande Rift. The enhanced resolution of the phase speed models derived from the interstation phase and amplitude measurements will be of use for better seismological constraints on the lithospheric structure, in combination with dense broad-band seismic arrays.

  12. Multiparameter elastic full waveform inversion with facies-based constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, such as reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface-collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of the inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of the estimated facies maps. Four numerical examples corresponding to different acquisition geometries, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.
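
    A minimal sketch of the Bayesian facies step is shown below: given inverted (Vp, Vs) at a model cell, Bayes' rule combines assumed Gaussian facies likelihoods with a prior to produce facies probabilities. The facies statistics and prior are placeholders, not the values or the uncertainty treatment used by the authors.

```python
import numpy as np
from scipy.stats import multivariate_normal

def facies_posterior(vp, vs, facies_stats, prior):
    """Posterior probability of each facies at one model cell.

    facies_stats: list of (mean, cov) for (Vp, Vs) per facies (assumed values).
    prior: prior probability of each facies (e.g. from a priori well info).
    """
    x = np.array([vp, vs])
    likes = np.array([multivariate_normal(m, c).pdf(x) for m, c in facies_stats])
    post = likes * np.asarray(prior)
    return post / post.sum()

# two hypothetical facies, shale-like and sand-like (placeholder statistics, m/s)
stats = [((2500.0, 1100.0), [[150.0**2, 0.0], [0.0, 80.0**2]]),
         ((3000.0, 1600.0), [[200.0**2, 0.0], [0.0, 120.0**2]])]
print(facies_posterior(2950.0, 1550.0, stats, prior=[0.5, 0.5]))
```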

  13. Algorithmic processing of intrinsic signals in affixed transmission speckle analysis (ATSA) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ghijsen, Michael T.; Tromberg, Bruce J.

    2017-03-01

    Affixed Transmission Speckle Analysis (ATSA) is a method recently developed to measure blood flow that is based on laser speckle imaging miniaturized into a clip-on form factor the size of a pulse-oximeter. Measuring at a rate of 250 Hz, ATSA is capable of obtaining the cardiac waveform in blood flow data, referred to as the Speckle-Plethysmogram (SPG). ATSA is also capable of simultaneously measuring the Photoplethysmogram (PPG), a more conventional signal related to light intensity. In this work we present several novel algorithms for extracting physiologically relevant information from the combined SPG-PPG waveform data. First, we show that there is a slight time-delay between the SPG and PPG that can be extracted computationally. Second, we present a set of frequency domain algorithms that measure harmonic content on a pulse-by-pulse basis for both the SPG and PPG. Finally, we apply these algorithms to data obtained from a set of subjects including healthy controls and individuals with heightened cardiovascular risk. We hypothesize that the time-delay and frequency content are correlated with cardiovascular health, specifically with vascular stiffening.
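
    The SPG-PPG time delay mentioned above can be extracted computationally with a cross-correlation; a minimal sketch follows, with a synthetic 250 Hz signal pair and a 20 ms delay standing in for real recordings.

```python
import numpy as np

def spg_ppg_delay(spg, ppg, fs=250.0):
    """Estimate the time delay (s) of the PPG relative to the SPG from the
    peak of the cross-correlation; a positive value means the PPG lags."""
    spg = (spg - spg.mean()) / spg.std()
    ppg = (ppg - ppg.mean()) / ppg.std()
    xcorr = np.correlate(ppg, spg, mode="full")
    lag = np.argmax(xcorr) - (len(spg) - 1)
    return lag / fs

# toy pulses sampled at 250 Hz, with the PPG delayed by 20 ms
fs = 250.0
t = np.arange(0, 4, 1 / fs)
spg = np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm cardiac component
ppg = np.sin(2 * np.pi * 1.2 * (t - 0.020))  # delayed copy
print("estimated delay (s):", spg_ppg_delay(spg, ppg, fs))
```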

  14. Ultra-low velocity zones beneath the Philippine and Tasman Seas revealed by a trans-dimensional Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Pachhai, Surya; Dettmer, Jan; Tkalčić, Hrvoje

    2015-11-01

    Ultra-low velocity zones (ULVZs) are small-scale structures in the Earth's lowermost mantle inferred from the analysis of seismological observations. These structures exhibit a strong decrease in compressional (P)-wave velocity and shear (S)-wave velocity, and an increase in density. Quantifying the elastic properties of ULVZs is crucial for understanding their physical origin, which has been hypothesized either as partial melting, iron enrichment, or a combination of the two. Possible disambiguation of these hypotheses can lead to a better understanding of the dynamic processes of the lowermost mantle, such as percolation, stirring and thermochemical convection. To date, ULVZs have been predominantly studied by forward waveform modelling of seismic waves that sample the core-mantle boundary region. However, ULVZ parameters (i.e. velocity, density, and vertical and lateral extent) obtained through forward modelling are poorly constrained because inferring Earth structure from seismic observations is a non-linear inverse problem with inherent non-uniqueness. To address these issues, we developed a trans-dimensional hierarchical Bayesian inversion that enables rigorous estimation of ULVZ parameter values and their uncertainties, including the effects of model selection. The model selection includes treating the number of layers and the vertical extent of the ULVZ as unknowns. The posterior probability density (the solution to the inverse problem) of the ULVZ parameters is estimated by reversible jump Markov chain Monte Carlo sampling that employs parallel tempering to improve efficiency and convergence. First, we apply our method to study the resolution of complex ULVZ structure (including gradually varying structure) by probabilistically inverting simulated noisy waveforms. Then, two data sets sampling the CMB beneath the Philippine and Tasman Seas are considered in the inversion. Our results indicate that both ULVZs are more complex than previously suggested. For the Philippine Sea data, we find a strong decrease in S-wave velocity, which indicates the presence of iron-rich material, although this result is accompanied by larger parameter uncertainties than in a previous study. For the Tasman Sea data, our analysis yields a well-constrained S-wave velocity that gradually decreases with depth. We conclude that this ULVZ represents a partial melt of iron-enriched material with higher melt content near its bottom.

  15. Gamma ray spectroscopy employing divalent europium-doped alkaline earth halides and digital readout for accurate histogramming

    DOEpatents

    Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B.; Sturm, Benjamin W.

    2016-02-09

    According to one embodiment, a scintillator radiation detector system includes a scintillator, and a processing device for processing pulse traces corresponding to light pulses from the scintillator, where the processing device is configured to: process each pulse trace over at least two temporal windows and to use pulse digitization to improve energy resolution of the system. According to another embodiment, a scintillator radiation detector system includes a processing device configured to: fit digitized scintillation waveforms to an algorithm, perform a direct integration of fit parameters, process multiple integration windows for each digitized scintillation waveform to determine a correction factor, and apply the correction factor to each digitized scintillation waveform.
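
    A rough sketch of the two-temporal-window idea described above is given below: integrate each digitized pulse over a prompt and a total window, form their ratio, and apply a correction factor. The window lengths, onset detection and linear correction curve are placeholders, not the patented calibration.

```python
import numpy as np

def window_integrals(pulse, fs, prompt_us=0.5, total_us=3.0):
    """Integrate a digitized pulse trace over a short (prompt) and a long
    (total) temporal window measured from a simple threshold onset."""
    onset = np.argmax(pulse > 0.1 * pulse.max())
    n_prompt = int(prompt_us * 1e-6 * fs)
    n_total = int(total_us * 1e-6 * fs)
    q_prompt = pulse[onset:onset + n_prompt].sum() / fs
    q_total = pulse[onset:onset + n_total].sum() / fs
    return q_prompt, q_total

def corrected_energy(pulse, fs, gain=1.0, alpha=0.2):
    """Apply a ratio-based correction factor to the integrated charge.
    The linear correction (1 + alpha*(1 - ratio)) is a placeholder for a
    detector-specific calibration curve."""
    q_prompt, q_total = window_integrals(pulse, fs)
    ratio = q_prompt / q_total
    return gain * q_total * (1.0 + alpha * (1.0 - ratio))

# toy exponential scintillation pulse sampled at 250 MHz
fs = 250e6
t = np.arange(0, 5e-6, 1 / fs)
pulse = np.exp(-t / 1.0e-6)
print("corrected energy estimate:", corrected_energy(pulse, fs))
```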

  16. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
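
    Of the four methods listed, Matching Pursuit is the simplest to sketch: greedily project the residual onto the dictionary atom with the largest inner product and subtract. The random unit-norm dictionary below is a stand-in for the wavelet-packet dictionary used in the paper.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy Matching Pursuit: at each step pick the dictionary atom with
    the largest inner product with the residual and subtract its projection.
    dictionary: (n_atoms, n_samples) array of unit-norm waveforms."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        inner = dictionary @ residual
        k = np.argmax(np.abs(inner))
        coeffs[k] += inner[k]
        residual -= inner[k] * dictionary[k]
    return coeffs, residual

# toy overcomplete dictionary of unit-norm random atoms
rng = np.random.default_rng(1)
atoms = rng.standard_normal((128, 64))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
signal = 3.0 * atoms[7] + 0.5 * atoms[42]
coeffs, residual = matching_pursuit(signal, atoms, n_iter=5)
print("largest coefficients at atoms:", np.argsort(np.abs(coeffs))[-2:])
print("residual norm:", np.linalg.norm(residual))
```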

  18. Lower Crustal Reflectivity Bands and Magma Emplacement in the Norwegian Sea, NE Atlantic

    NASA Astrophysics Data System (ADS)

    Rai, A.; Breivik, A. J.; Mjelde, R.

    2013-12-01

    In this study we present OBS data collected along seismic profiles in the Norwegian Sea. Traveltime modelling of the OBS data provides first-hand information about the seismic structure of the subsurface. However, waveform modelling is used to further constrain the fine-scale structure, velocity contrasts and velocity gradients. By forward modelling and inversion of the seismic waveforms, we show that the multiple bands of reflectivity could be due to multiple episodes of magma emplacement that may have frozen in the form of sills. These mafic intrusions probably intruded into the ductile lower crust during the main rifting phase between Europe and Greenland.

  19. Source mechanism of very-long-period signals accompanying dome growth activity at Merapi volcano, Indonesia

    USGS Publications Warehouse

    Hidayat, D.; Chouet, B.; Voight, B.; Dawson, P.; Ratdomopurbo, Antonius

    2002-01-01

    Very-long-period (VLP) pulses with periods of 6-7 s, displaying similar waveforms, were identified in 1998 from broadband seismographs around the summit crater. These pulses accompanied most of the multiphase (MP) earthquakes, a type of long-period event locally defined at Merapi Volcano. Source mechanisms for several VLP pulses were examined by applying moment tensor inversion to the waveform data. Solutions were consistent with a crack striking ~70° and dipping ~50° SW, located about 100 m beneath the active dome, suggesting pressurized gas transport involving accumulation and sudden release of 10-60 m3 of gas in the crack over a 6 s interval.

  20. Mach-Zehnder interferometry method for acoustic shock wave measurements in air and broadband calibration of microphones.

    PubMed

    Yuldashev, Petr; Karzova, Maria; Khokhlova, Vera; Ollivier, Sébastien; Blanc-Benon, Philippe

    2015-06-01

    A Mach-Zehnder interferometer is used to measure spherically diverging N-waves in homogeneous air. An electrical spark source is used to generate high-amplitude (1800 Pa at 15 cm from the source) and short duration (50 μs) N-waves. Pressure waveforms are reconstructed from optical phase signals using an Abel-type inversion. It is shown that the interferometric method allows one to reach 0.4 μs of time resolution, which is 6 times better than the time resolution of a 1/8-in. condenser microphone (2.5 μs). Numerical modeling is used to validate the waveform reconstruction method. The waveform reconstruction method provides an error of less than 2% with respect to amplitude in the given experimental conditions. Optical measurement is used as a reference to calibrate a 1/8-in. condenser microphone. The frequency response function of the microphone is obtained by comparing the spectra of the waveforms resulting from optical and acoustical measurements. The optically measured pressure waveforms filtered with the microphone frequency response are in good agreement with the microphone output voltage. Therefore, an optical measurement method based on the Mach-Zehnder interferometer is a reliable tool to accurately characterize evolution of weak shock waves in air and to calibrate broadband acoustical microphones.

  1. Visual motion direction is represented in population-level neural response as measured by magnetoencephalography.

    PubMed

    Kaneoke, Y; Urakawa, T; Kakigi, R

    2009-05-19

    We investigated whether direction information is represented in the population-level neural response evoked by the visual motion stimulus, as measured by magnetoencephalography. Coherent motions with varied speed, varied direction, and different coherence levels were presented using random dot kinematography. Peak latency of responses to motion onset was inversely related to speed in all directions, as previously reported, but no significant effect of direction on latency changes was identified. Mutual information entropy (IE) calculated using four-direction response data increased significantly (>2.14) after motion onset in 41.3% of response data, and maximum IE was distributed at approximately 20 ms after peak response latency. When response waveforms showing significant differences (by multivariate discriminant analysis) in the distribution of the three waveform parameters (peak amplitude, peak latency, and 75% waveform width) with stimulus directions were analyzed, the stimulus directions of 87 waveforms (80.6%) were correctly estimated using these parameters. The correct estimation rate was unaffected by stimulus speed, but was affected by coherence level, even though both speed and coherence affected response amplitude similarly. Our results indicate that speed and direction of stimulus motion are represented in the distinct properties of a response waveform, suggesting that the human brain processes speed and direction separately, at least in part.

  2. The spatial sensitivity of Sp converted waves—scattered-wave kernels and their applications to receiver-function migration and inversion

    NASA Astrophysics Data System (ADS)

    Mancinelli, N. J.; Fischer, K. M.

    2018-03-01

    We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at deeper depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.

  3. Comparison of magmatic and amagmatic rift zone kinematics using full moment tensor inversions of regional earthquakes

    NASA Astrophysics Data System (ADS)

    Jaye Oliva, Sarah; Ebinger, Cynthia; Shillington, Donna; Albaric, Julie; Deschamps, Anne; Keir, Derek; Drooff, Connor

    2017-04-01

    Temporary seismic networks deployed in the magmatic Eastern rift and the mostly amagmatic Western rift in East Africa present the opportunity to compare the depth distribution of strain, and fault kinematics in light of rift age and the presence or absence of surface magmatism. The largest events in local earthquake catalogs (ML > 3.5) are modeled using the Dreger and Ford full moment tensor algorithm (Dreger, 2003; Minson & Dreger, 2008) to better constrain source depth and to investigate non-double-couple components. A bandpass filter of 0.02 to 0.10 Hz is applied to the waveforms prior to inversion. Synthetics are based on 1D velocity models derived during seismic analysis and constrained by reflection and tomographic data where available. Results show significant compensated linear vector dipole (CLVD) and isotropic components for earthquakes in magmatic rift zones, whereas double-couple mechanisms predominate in weakly magmatic rift sectors. We interpret the isotropic components as evidence for fluid-involved faulting in the Eastern rift where volatile emissions are large, and dike intrusions well documented. Lower crustal earthquakes are found in both amagmatic and magmatic sectors. These results are discussed in the context of the growing database of complementary geophysical, geochemical, and geological studies in these regions as we seek to understand the role of magmatism and faulting in accommodating strain during early continental rifting.

  4. Deciphering the role of fluids in early stage rifting from full moment tensor inversion of East African earthquakes

    NASA Astrophysics Data System (ADS)

    Oliva, S. J. C.; Ebinger, C. J.; Keir, D.; Shillington, D. J.; Chindandali, P. R. N.

    2016-12-01

    The East African Rift splits around the Archaean Tanzania craton into the magmatic Eastern branch and the mostly amagmatic Western branch, which continues south of the craton. Temporary seismic networks recently deployed in three rift sectors allow for comparison and insights into the early stages of rifting, including areas with lower crustal earthquakes. We analyze earthquakes with ML > 3.5 in the area recorded by the CRAFTI (northern Tanzania/Kenya), TANGA (Tanganyika rift), and/or SEGMeNT (Malawi rift) networks. For events not well enclosed by these arrays, nearby permanent stations are used to improve azimuthal coverage when possible. We present source mechanisms as well as better-constrained source depth estimates from moment tensor inversion using the Dreger and Ford TDMT algorithm (Dreger, 2003; Minson & Dreger, 2008). Data and synthetic waveforms are bandpass filtered between 0.02 and 0.10 Hz, or a narrower frequency band within this range, depending on lake noise, which can interfere strongly at the lower end of this frequency range. Results suggest local stress reorientations as well as significant dilatation components for some events within magmatic rift sectors. The implications of these results for crustal rheology and magmatic modification will be discussed in light of the growing complementary data sets from the three projects to inform our understanding of early rifting as a whole.
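
    The pre-inversion filtering step described above, a 0.02-0.10 Hz band-pass applied to both data and synthetics, can be sketched with a zero-phase Butterworth filter; the filter order and sampling rate below are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(trace, fs, fmin=0.02, fmax=0.10, order=4):
    """Zero-phase Butterworth band-pass applied to a waveform before moment
    tensor inversion (corner frequencies from the abstract; the filter order
    and two-pass zero-phase application are assumptions)."""
    sos = butter(order, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

# toy broadband trace sampled at 20 Hz: long-period signal plus microseism-band energy
fs = 20.0
t = np.arange(0, 600, 1 / fs)
trace = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
filtered = bandpass(trace, fs)
print("retained RMS:", np.sqrt(np.mean(filtered ** 2)))
```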

  5. Generator localization by current source density (CSD): Implications of volume conduction and field closure at intracranial and scalp resolutions

    PubMed Central

    Tenke, Craig E.; Kayser, Jürgen

    2012-01-01

    The topographic ambiguity and reference-dependency that have plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically-meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at the scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially-closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039

  6. Spike detection, characterization, and discrimination using feature analysis software written in LabVIEW.

    PubMed

    Stewart, C M; Newlands, S D; Perachio, A A

    2004-12-01

    Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was entirely written in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high-speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.

  7. Optimization of multi-color laser waveform for high-order harmonic generation

    NASA Astrophysics Data System (ADS)

    Jin, Cheng; Lin, C. D.

    2016-09-01

    With the development of laser technologies, multi-color light-field synthesis with complete amplitude and phase control would make it possible to generate arbitrary optical waveforms. A practical optimization algorithm is needed to generate such a waveform in order to control strong-field processes. We review some recent theoretical works of the optimization of amplitudes and phases of multi-color lasers to modify the single-atom high-order harmonic generation based on genetic algorithm. By choosing different fitness criteria, we demonstrate that: (i) harmonic yields can be enhanced by 10 to 100 times, (ii) harmonic cutoff energy can be substantially extended, (iii) specific harmonic orders can be selectively enhanced, and (iv) single attosecond pulses can be efficiently generated. The possibility of optimizing macroscopic conditions for the improved phase matching and low divergence of high harmonics is also discussed. The waveform control and optimization are expected to be new drivers for the next wave of breakthrough in the strong-field physics in the coming years. Project supported by the Fundamental Research Funds for the Central Universities of China (Grant No. 30916011207), Chemical Sciences, Geosciences and Biosciences Division, Office of Basic Energy Sciences, Office of Science, U. S. Department of Energy (Grant No. DE-FG02-86ER13491), and Air Force Office of Scientific Research, USA (Grant No. FA9550-14-1-0255).
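
    A toy version of the optimization loop reviewed above is sketched below: a genetic algorithm adjusts the relative amplitude and phase of a second colour added to a fundamental field. The fitness used here (peak field amplitude) is only a placeholder for the single-atom HHG criteria used in the reviewed work.

```python
import numpy as np

rng = np.random.default_rng(2)
T = np.linspace(0.0, 2.0 * np.pi, 1000)        # one cycle of the fundamental

def synth_field(params):
    """Two-colour waveform: fundamental plus a second harmonic whose relative
    amplitude (a2) and phase (phi2) are the genes being optimized."""
    a2, phi2 = params
    return np.cos(T) + a2 * np.cos(2.0 * T + phi2)

def fitness(params):
    """Placeholder figure of merit: peak field amplitude, a crude stand-in
    for the harmonic yield / cutoff criteria of the reviewed papers."""
    return np.max(np.abs(synth_field(params)))

def genetic_algorithm(pop_size=40, n_gen=50, mutation=0.05):
    # genes: a2 in [0, 1], phi2 in [0, 2*pi)
    pop = np.column_stack([rng.uniform(0.0, 1.0, pop_size),
                           rng.uniform(0.0, 2.0 * np.pi, pop_size)])
    for _ in range(n_gen):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[pop_size // 2:]]      # keep the fittest half
        n_child = pop_size - len(parents)
        # uniform crossover between two randomly chosen parents, then mutation
        pa = parents[rng.integers(0, len(parents), n_child)]
        pb = parents[rng.integers(0, len(parents), n_child)]
        mask = rng.random((n_child, 2)) < 0.5
        children = np.where(mask, pa, pb) + mutation * rng.standard_normal((n_child, 2))
        children[:, 0] = np.clip(children[:, 0], 0.0, 1.0)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

a2, phi2 = genetic_algorithm()
print(f"best second-colour amplitude {a2:.2f}, phase {phi2:.2f} rad")
```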

  8. Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter

    PubMed Central

    Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.

    2016-01-01

    Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549

  9. Using 3D Simulation of Elastic Wave Propagation in Laplace Domain for Electromagnetic-Seismic Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Petrov, P.; Newman, G. A.

    2010-12-01

    Quantitative imaging of subsurface objects is an essential part of modern geophysical technology, important in oil and gas exploration and a wide range of engineering applications. A significant advancement in developing a robust, high-resolution imaging technology is the use of different geophysical measurements (gravity, EM and seismic) to sense the subsurface structure. A joint image of the subsurface geophysical attributes (velocity, electrical conductivity and density) requires consistent treatment of the different geophysical data (electromagnetic and seismic) because of their differing physical nature: diffusive, attenuated propagation of electromagnetic energy versus nonlinear, multiply scattered wave propagation of seismic energy. Recent progress has been reported in the solution of this problem by reducing the complexity of the seismic wavefield. Work by Shin and Cha (2008, 2009) suggests that low-pass filtering the seismic trace via a Laplace-Fourier transformation can be an effective approach for obtaining seismic data with a spatial resolution similar to that of EM data. The Laplace-Fourier transformation of the low-pass filtered trace changes the modeling of the seismic wavefield from multi-wave propagation to diffusion. The key benefit of the transformation is that diffusive wavefield inversion works well for both seismic (Shin and Cha, 2008) and electromagnetic (Commer and Newman, 2008; Newman et al., 2010) data sets. Moreover, the different data sets can also be matched to a similar and consistent resolution. Finally, the low-pass seismic image is an excellent choice for a starting model when analyzing the entire seismic waveform to recover the high-spatial-frequency components of the seismic image, i.e. its reflectivity (Shin and Cha, 2009); without a good starting model, full waveform seismic imaging and migration can encounter serious difficulties. To produce seismic wavefields consistent for joint imaging in the Laplace-Fourier domain, we have developed a 3D code for full wavefield simulation in elastic media that takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we define the material properties, such as density and the Lamé constants, not at nodal points but within cells. This second-order finite-difference method, formulated on the cell-based grid, generates numerical solutions compatible with analytical ones within the error range determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning seismic wave propagation problems in the frequency domain. References: Shin, C. & Cha, Y. H. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press.
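
    The Laplace-Fourier transformation of a trace referred to above amounts to a damped temporal transform; a minimal sketch follows, with a toy wavelet and arbitrarily chosen damping constants.

```python
import numpy as np

def laplace_fourier_transform(trace, dt, s, omega=0.0):
    """Laplace-Fourier transform of a single trace:
    U(s + i*omega) = sum_t u(t) * exp(-(s + i*omega)*t) * dt.
    With omega = 0 this reduces to the pure Laplace (damped zero-frequency)
    transform of Shin & Cha (2008)."""
    t = np.arange(len(trace)) * dt
    return np.sum(trace * np.exp(-(s + 1j * omega) * t)) * dt

# toy trace: a 5 Hz Ricker wavelet delayed by 1 s, sampled at 4 ms
dt = 0.004
t = np.arange(0, 4.0, dt)
arg = (np.pi * 5 * (t - 1.0)) ** 2
trace = (1 - 2 * arg) * np.exp(-arg)
for s in (2.0, 5.0, 10.0):   # damping constants (1/s)
    print(f"s = {s:4.1f}:  |U| = {abs(laplace_fourier_transform(trace, dt, s)):.4e}")
```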

  10. A satellite-based radar wind sensor

    NASA Technical Reports Server (NTRS)

    Xin, Weizhuang

    1991-01-01

    The objective is to investigate the application of Doppler radar systems for global wind measurement. A model of the satellite-based radar wind sounder (RAWS) is discussed, and many critical problems in the design process, such as the antenna scan pattern, tracking the Doppler shift caused by satellite motion, and backscattering of radar signals from different types of clouds, are discussed along with their computer simulations. In addition, algorithms for measuring the mean frequency of radar echoes, such as the Fast Fourier Transform (FFT) estimator, the covariance estimator, and estimators based on autoregressive models, are discussed. Monte Carlo computer simulations were used to compare the performance of these algorithms. Anti-alias methods are discussed for the FFT and the autoregressive methods. Several algorithms for reducing radar ambiguity were studied, such as random phase coding methods and staggered pulse repetition frequency (PRF) methods. Computer simulations showed that these methods are not applicable to the RAWS because of the broad spectral widths of the radar echoes from clouds. A waveform modulation method using the concept of spread spectrum and correlation detection was developed to solve the radar ambiguity. Radar ambiguity functions were used to analyze the effective signal-to-noise ratios for the waveform modulation method. The results showed that, with a suitable bandwidth product and modulation of the waveform, this method can achieve the desired maximum range and maximum frequency of the radar system.
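
    Among the mean-frequency estimators mentioned, the covariance (pulse-pair) estimator is compact enough to sketch: the mean Doppler frequency follows from the phase of the lag-one autocovariance of the complex echo samples. The PRF, Doppler shift and noise level below are illustrative.

```python
import numpy as np

def pulse_pair_frequency(iq, prf):
    """Pulse-pair (covariance) estimate of the mean Doppler frequency from a
    sequence of complex I/Q samples taken at the pulse repetition frequency."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))   # lag-1 autocovariance
    return np.angle(r1) * prf / (2 * np.pi)

# toy echo: 64 pulses at PRF = 4 kHz with a 500 Hz Doppler shift plus noise
rng = np.random.default_rng(3)
prf, n = 4000.0, 64
m = np.arange(n)
iq = np.exp(2j * np.pi * 500.0 * m / prf) + 0.1 * (rng.standard_normal(n)
                                                   + 1j * rng.standard_normal(n))
print("estimated Doppler frequency (Hz):", pulse_pair_frequency(iq, prf))
```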

  11. Solving constrained inverse problems for waveform tomography with Salvus

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Afanasiev, M.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.

    2016-12-01

    Finding a good balance between flexibility and performance is often difficult within domain-specific software projects. To achieve this balance, we introduce Salvus: an open-source high-order finite element package built upon PETSc and Eigen, that focuses on large-scale full-waveform modeling and inversion. One of the key features of Salvus is its modular design, based on C++ mixins, that separates the physical equations from the numerical discretization and the mathematical optimization. In this presentation we focus on solving inverse problems with Salvus and discuss (i) dealing with inexact derivatives resulting, e.g., from lossy wavefield compression, (ii) imposing additional constraints on the model parameters, e.g., from effective medium theory, and (iii) integration with a workflow management tool. We present a feasible-point trust-region method for PDE-constrained inverse problems that can handle inexactly computed derivatives. The level of accuracy in the approximate derivatives is controlled by localized error estimates to ensure global convergence of the method. Additional constraints on the model parameters are typically cheap to compute without the need for further simulations. Hence, including them in the trust-region subproblem introduces only a small computational overhead, but ensures feasibility of the model in every iteration. We show examples with homogenization constraints derived from effective medium theory (i.e. all fine-scale updates must upscale to a physically meaningful long-wavelength model). Salvus has a built-in workflow management framework to automate the inversion with interfaces to user-defined misfit functionals and data structures. This significantly reduces the amount of manual user interaction and enhances reproducibility which we demonstrate for several applications from the laboratory to global scale.

  12. Spectral filtering of gradient for l2-norm frequency-domain elastic waveform inversion

    NASA Astrophysics Data System (ADS)

    Oh, Ju-Won; Min, Dong-Joo

    2013-05-01

    To enhance the robustness of the l2-norm elastic full-waveform inversion (FWI), we propose a denoise function that is incorporated into single-frequency gradients. Because field data are noisy and modelled data are noise-free, the denoise function is designed based on the ratio of modelled data to field data summed over shots and receivers. We first take the sums of the modelled data and field data over shots, then take the sums of the absolute values of the resultant modelled data and field data over the receivers. Due to the monochromatic property of wavefields at each frequency, signals in both modelled and field data tend to be cancelled out or maintained, whereas certain types of noise, particularly random noise, can be amplified in field data. As a result, the spectral distribution of the denoise function is inversely proportional to the ratio of noise to signal at each frequency, which helps prevent the noise-dominant gradients from contributing to model parameter updates. Numerical examples show that the spectral distribution of the denoise function resembles a frequency filter that is determined by the spectrum of the signal-to-noise (S/N) ratio during the inversion process, with little human intervention. The denoise function is applied to elastic FWI of synthetic data generated for a modified version of the Marmousi-2 model, with three types of random noise added: white, low-frequency and high-frequency. Based on the spectrum of S/N ratios at each frequency, the denoise function mainly suppresses noise-dominant single-frequency gradients, which improves the inversion results at the cost of spatial resolution.
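
    A minimal sketch of the denoise weight described above, summing modelled and field data over shots, taking absolute values, summing over receivers, and forming their ratio per frequency, is given below with random toy data standing in for real gathers.

```python
import numpy as np

def denoise_weight(d_mod, d_obs):
    """Frequency-dependent denoise weight following the recipe in the abstract:
    sum modelled and field data over shots, take absolute values, sum over
    receivers, and form the ratio (modelled / field) per frequency.
    d_mod, d_obs: complex arrays of shape (n_freq, n_shot, n_rec)."""
    mod_sum = np.abs(d_mod.sum(axis=1)).sum(axis=1)   # sum shots, then |.| over receivers
    obs_sum = np.abs(d_obs.sum(axis=1)).sum(axis=1)
    return mod_sum / obs_sum

# toy data: noise-free modelled data vs. field data with added random noise
rng = np.random.default_rng(4)
nf, ns, nr = 8, 16, 32
d_mod = rng.standard_normal((nf, ns, nr)) + 1j * rng.standard_normal((nf, ns, nr))
noise = 0.5 * (rng.standard_normal((nf, ns, nr)) + 1j * rng.standard_normal((nf, ns, nr)))
d_obs = d_mod + noise
print("per-frequency weights:", np.round(denoise_weight(d_mod, d_obs), 3))
```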

  13. Detailed Velocity and Density models of the Cascadia Subduction Zone from Prestack Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Fortin, W.; Holbrook, W. S.; Mallick, S.; Everson, E. D.; Tobin, H. J.; Keranen, K. M.

    2014-12-01

    Understanding the geologic composition of the Cascadia Subduction Zone (CSZ) is critically important in assessing seismic hazards in the Pacific Northwest. Although the CSZ poses a potential earthquake and tsunami threat to millions of people, key details of its structure and fault mechanisms remain poorly understood. In particular, the position and character of the subduction interface remain elusive due to its relative aseismicity and low seismic reflectivity, making imaging difficult for both passive and active source methods. Modern active-source reflection seismic data acquired as part of the COAST project in 2012 provide an opportunity to study the transition from the Cascadia basin, across the deformation front, and into the accretionary prism. Coupled with advances in seismic inversion methods, these new data allow us to produce detailed velocity models of the CSZ and accurate pre-stack depth migrations for studying geologic structure. Although still computationally expensive, seismic inversions can now be performed on current computing clusters at resolutions that match that of the seismic image itself. Here we present pre-stack full waveform inversions of the central seismic line of the COAST survey offshore Washington state. The resultant velocity model is produced by inversion at every CMP location, 6.25 m laterally, with vertical resolution of 0.2 times the dominant seismic frequency. We report a good average correlation value above 0.8 across the entire seismic line, determined by comparing synthetic gathers to the real pre-stack gathers. These detailed velocity models, both Vp and Vs, along with the density model, are a necessary step toward a detailed porosity cross section to be used to determine the role of fluids in the CSZ. Additionally, the P-velocity model is used to produce a pre-stack depth migration image of the CSZ.
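
    The correlation value quoted above can be illustrated with a zero-lag normalized cross-correlation between a synthetic and an observed gather; the toy arrays below are stand-ins for real prestack gathers.

```python
import numpy as np

def gather_correlation(synthetic, observed):
    """Zero-lag normalized cross-correlation between a synthetic and a real
    prestack gather (2-D arrays: traces x time samples), used as a simple
    quality measure analogous to the correlation values quoted above."""
    s = synthetic - synthetic.mean()
    o = observed - observed.mean()
    return np.sum(s * o) / np.sqrt(np.sum(s * s) * np.sum(o * o))

rng = np.random.default_rng(5)
obs = rng.standard_normal((60, 500))          # 60 traces, 500 samples
syn = obs + 0.3 * rng.standard_normal(obs.shape)
print("gather correlation:", round(gather_correlation(syn, obs), 3))
```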

  14. Rigorous Approach in Investigation of Seismic Structure and Source Characteristicsin Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for the involved error statistics and model parameterizations, and, in turn, allow more rigorous estimation of both. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia, including eastern China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. For the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partitions. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in Northeast Asia.

  15. Fine-scale thermohaline ocean structure retrieved with 2-D prestack full-waveform inversion of multichannel seismic data: Application to the Gulf of Cadiz (SW Iberia)

    NASA Astrophysics Data System (ADS)

    Dagnino, D.; Sallarès, V.; Biescas, B.; Ranero, C. R.

    2016-08-01

    This work demonstrates the feasibility of 2-D time-domain, adjoint-state acoustic full-waveform inversion (FWI) to retrieve high-resolution models of ocean physical parameters such as sound speed, temperature and salinity. The proposed method is first described and then applied to prestack multichannel seismic (MCS) data acquired in the Gulf of Cadiz (SW Iberia) in 2007 in the framework of the Geophysical Oceanography project. The inversion workflow includes specifically designed data preconditioning for acoustic noise reduction, followed by the inversion of sound speed in the shot-gather domain. We show that the final sound speed model has a horizontal resolution of ˜70 m, which is two orders of magnitude better than that of the initial model constructed with coincident eXpendable Bathy Thermograph (XBT) data, and close to the theoretical resolution of O(λ). Temperature (T) and salinity (S) are retrieved with the same lateral resolution as sound speed by combining the inverted sound speed model with the thermodynamic equation of seawater and a local, depth-dependent T-S relation derived from regional conductivity-temperature-depth (CTD) measurements of the National Oceanic and Atmospheric Administration (NOAA) database. The comparison of the inverted T and S models with XBT and CTD casts deployed simultaneously with the MCS acquisition shows that the thermohaline contrasts are resolved with an accuracy of 0.18 °C for temperature and 0.08 PSU for salinity. The combination of oceanographic and MCS data into a common, pseudo-automatic inversion scheme makes it possible to quantitatively resolve submeso-scale features that ought to be incorporated into larger-scale models of ocean structure and circulation.

  16. An Algorithm for Real-Time Pulse Waveform Segmentation and Artifact Detection in Photoplethysmograms.

    PubMed

    Fischer, Christoph; Domer, Benno; Wibmer, Thomas; Penzel, Thomas

    2017-03-01

    Photoplethysmography has been used in a wide range of medical devices for measuring oxygen saturation, cardiac output, assessing autonomic function, and detecting peripheral vascular disease. Artifacts can render the photoplethysmogram (PPG) useless. Thus, algorithms capable of identifying artifacts are critically important. However, the published PPG algorithms are limited in algorithm and study design. Therefore, the authors developed a novel embedded algorithm for real-time pulse waveform (PWF) segmentation and artifact detection based on a contour analysis in the time domain. This paper provides an overview about PWF and artifact classifications, presents the developed PWF analysis, and demonstrates the implementation on a 32-bit ARM core microcontroller. The PWF analysis was validated with data records from 63 subjects acquired in a sleep laboratory, ergometry laboratory, and intensive care unit in equal parts. The output of the algorithm was compared with harmonized experts' annotations of the PPG with a total duration of 31.5 h. The algorithm achieved a beat-to-beat comparison sensitivity of 99.6%, specificity of 90.5%, precision of 98.5%, and accuracy of 98.3%. The interrater agreement expressed as Cohen's kappa coefficient was 0.927 and as F-measure was 0.990. In conclusion, the PWF analysis seems to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.
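
    The reported beat-to-beat statistics can be reproduced from a 2x2 confusion matrix of algorithm versus expert annotations; a minimal sketch follows, with illustrative counts rather than the study's data.

```python
def beat_metrics(tp, fp, tn, fn):
    """Beat-to-beat agreement metrics from a 2x2 confusion matrix of
    algorithm vs. expert annotations (the counts passed in are illustrative)."""
    total = tp + fp + tn + fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / total
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_obs = accuracy
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, accuracy, kappa, f_measure

print([round(m, 3) for m in beat_metrics(tp=9500, fp=150, tn=1400, fn=40)])
```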

  17. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.

  18. Wavelet analysis of the impedance cardiogram waveforms

    NASA Astrophysics Data System (ADS)

    Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.

    2012-12-01

    Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. The objective of this work is to validate a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer-based thoracic tetrapolar polyrheocardiography is used for the hemodynamic recordings. The original wavelet-differentiation algorithm combines filtering with calculation of the derivatives of the rheocardiogram. The proposed approach can be used in clinical practice for early diagnosis of cardiovascular system remodelling in the course of different pathologies.
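
    The wavelet-differentiation idea, filtering and differentiation in a single convolution, can be sketched with a derivative-of-Gaussian kernel; the wavelet scale and the synthetic impedance signal below are assumptions, not the authors' algorithm.

```python
import numpy as np

def wavelet_derivative(signal, fs, scale=0.02):
    """Smoothed derivative via convolution with a derivative-of-Gaussian
    kernel, combining filtering and differentiation in one step.
    scale: kernel width in seconds (an assumed value)."""
    sigma = scale * fs
    n = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    g = np.exp(-n ** 2 / (2.0 * sigma ** 2))
    kernel = -n * g / np.sum(n * n * g)        # unit response to a unit-slope ramp
    return np.convolve(signal, kernel, mode="same") * fs

# toy impedance waveform: 1 Hz "cardiac" oscillation with added noise
fs = 200.0
t = np.arange(0, 5, 1 / fs)
z = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(6).standard_normal(t.size)
dzdt = wavelet_derivative(z, fs)
print("index of (dZ/dt)max:", int(np.argmax(dzdt)),
      " peak dZ/dt:", round(float(dzdt.max()), 2))
```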

  19. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake rupture dynamics from ground motion records based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drops inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanisms, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. Some parameters found for the Zumpango earthquake are: Δτ = 30.2±6.2 MPa, Er = 0.68±0.36×10^15 J, G = 1.74±0.44×10^15 J, η = 0.27±0.11, Vr/Vs = 0.52±0.09 and Mw = 6.64±0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity and moment magnitude, respectively. (Figure: Mw 6.5 intraslab Zumpango earthquake location, station locations and tectonic setting in central Mexico.)

  20. A review of ocean chlorophyll algorithms and primary production models

    NASA Astrophysics Data System (ADS)

    Li, Jingwen; Zhou, Song; Lv, Nan

    2015-12-01

    This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production from the chlorophyll concentration. By comparing the five chlorophyll inversion algorithms, it summarizes their advantages and disadvantages, and it briefly analyzes trends in ocean primary production modelling.
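
    As an illustration of the band-ratio family of chlorophyll inversion algorithms reviewed here, the sketch below evaluates an OCx-style maximum-band-ratio polynomial; the coefficients and reflectance values are placeholders, not the operational values of any particular sensor.

```python
import numpy as np

def band_ratio_chlorophyll(rrs443, rrs490, rrs510, rrs555, coeffs):
    """Maximum-band-ratio chlorophyll estimate of the empirical OCx form:
    log10(chl) = sum_i a_i * R**i, with R the log10 of the largest blue/green
    remote-sensing reflectance ratio. Coefficients are illustrative only."""
    ratio = np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555
    r = np.log10(ratio)
    return 10.0 ** np.polyval(coeffs[::-1], r)   # polyval wants highest power first

coeffs = [0.37, -3.0, 1.9, 0.65, -1.5]           # a0..a4 (placeholders)
print("chl (mg m^-3):",
      round(band_ratio_chlorophyll(0.004, 0.005, 0.004, 0.003, coeffs), 3))
```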

  1. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple set of X-Y paired data in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from a scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse filtering, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image-processing and analysis operations, some of which are found in commercially available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
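
    The platform's own deconvolution code is not described beyond naming the two options. As a point of reference only, a minimal frequency-domain inverse-filter deconvolution with water-level regularization might look like the sketch below; the function name, water-level value and synthetic system response are assumptions, not part of the NASA software.

        # Water-level inverse-filter deconvolution of a measured waveform by a
        # known system response. Hypothetical sketch for illustration.
        import numpy as np

        def inverse_filter_deconvolve(wave, system_response, water_level=0.01):
            n = len(wave)
            W = np.fft.rfft(wave, n)
            S = np.fft.rfft(system_response, n)
            # Clamp small spectral amplitudes to avoid noise blow-up.
            floor = water_level * np.max(np.abs(S))
            S_reg = np.where(np.abs(S) < floor, floor * np.exp(1j * np.angle(S)), S)
            return np.fft.irfft(W / S_reg, n)

        # Tiny usage example with synthetic data.
        t = np.linspace(0, 1, 512)
        response = np.exp(-40 * t) * np.sin(2 * np.pi * 25 * t)   # assumed system wave response
        spike = np.zeros_like(t); spike[100] = 1.0
        measured = np.convolve(spike, response)[: len(t)]
        recovered = inverse_filter_deconvolve(measured, response)
        print("recovered spike index:", int(np.argmax(recovered)))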

  2. Adaptive re-tracking algorithm for retrieval of water level variations and wave heights from satellite altimetry data for middle-sized inland water bodies

    NASA Astrophysics Data System (ADS)

    Troitskaya, Yuliya; Lebedev, Sergey; Soustova, Irina; Rybushkina, Galina; Papko, Vladislav; Baidakov, Georgy; Panyutin, Andrey

    One of the recent applications of satellite altimetry, originally designed for measurements of the sea level [1], is the remote investigation of the water level of inland waters: lakes, rivers and reservoirs [2-7]. The altimetry data re-tracking algorithms developed for open-ocean conditions (e.g. Ocean-1,2) [1] often cannot be used in these cases, since the radar return is significantly contaminated by reflection from the land. The problem of minimizing errors in water level retrieval for inland waters from altimetry measurements can be resolved by re-tracking the satellite altimetry data. Recently, special re-tracking algorithms have been actively developed for re-processing altimetry data in the coastal zone, where reflection from land strongly affects echo shapes: threshold re-tracking, beta re-tracking and improved threshold re-tracking were developed in [9-11]. The latest development in this field is the PISTACH product [12], in which retracking is based on the classification of typical forms of telemetric waveforms in coastal zones and inland water bodies. In this paper, a novel method of regional adaptive re-tracking is considered, based on constructing a theoretical model describing the formation of telemetric waveforms by reflection from a piecewise-constant model surface corresponding to the geography of the region. It was proposed in [13, 14], where an algorithm for assessing the water level in inland water bodies and in the coastal zone of the ocean with an error of about 10-15 cm was constructed. The algorithm includes four consecutive steps: constructing a local piecewise model of the reflecting surface in the neighbourhood of the reservoir; solving the direct problem by calculating the reflected waveforms within the framework of the model; imposing restrictions and validity criteria for the algorithm based on waveform modelling; and solving the inverse problem by retrieving a tracking point with the improved threshold algorithm (a schematic threshold-retracking sketch is given after this entry). The possibility of determining significant wave height (SWH) in lakes through two-step adaptive retracking is also studied. Calculation of SWH for the Gorky Reservoir from May 2010 to March 2014 showed anomalously high values of SWH derived from altimetry data [15], which means that calibration of altimeter-derived SWH for inland waters is required. Calibration ground measurements were performed at the Gorky Reservoir in 2011-2013, when wave height, wind speed and air temperature were collected by equipment placed on a buoy [15], collocated with Jason-1 and Jason-2 altimetry data acquisition. The results obtained with the standard algorithm and with the adaptive re-tracking method at the Rybinsk, Gorky, Kuibyshev, Saratov and Volgograd reservoirs and at middle-sized lakes of Russia (Chany, Segozero, Hanko, Oneko, Beloye), whose water areas are intersected by the Jason-1,2 tracks, were compared, and their correlation with observations from hydrological stations at the reservoirs and lakes was investigated. It was noted that at the Volgograd reservoir the regional re-tracking makes it possible to determine the water level, while the standard GDR data are practically absent. REFERENCES [1] AVISO/Altimetry. User Handbook. Merged TOPEX/POSEIDON Products. Edition 3.0. AVISO. Toulouse, 1996. [2] C.M. Birkett et al., “Surface water dynamics in the Amazon Basin: Application of satellite radar altimetry,” J. Geophys. Res., vol. 107, pp. 8059, 2002. [3] G.
Brown, “The average impulse response of a rough surface and its applications,” IEEE Trans. Antennas Propagat., vol. 25, pp. 67-74, 1977. [4] I.O. Campos et al., “Temporal variations of river basin waters from Topex/Poseidon satellite altimetry. Application to the Amazon basin,” Earth and Planetary Sciences, vol. 333, pp. 633-643, 2001. [5] A.V. Kouraev et al., “Ob’ river discharge from TOPEX/Poseidon satellite altimetry (1992-2002),” Rem. Sens. Environ., vol. 93, pp. 238-245, 2004. [6] S.A. Lebedev and A.G. Kostianoy, Satellite Altimetry of the Caspian Sea. Moscow: MORE [in Russian], 2005. [7] P.A.M. Berry et al., “Global inland water monitoring from multi-mission altimetry,” Geophys. Res. Lett., vol. 32, pp. L16401, 2005. [8] S. Calmant and F. Seyler, “Continental surface waters from satellite altimetry,” Geosciences C.R., vol. 338, pp. 1113-1122, 2006. [9] C.H. Davis, “A robust threshold retracking algorithm for measuring ice sheet surface elevation change from satellite radar altimeters,” IEEE Trans. Geosci. Remote Sens., vol. 35, pp. 974-979, 1997. [10] X. Deng and W.E. Featherstone, “A coastal retracking system for satellite radar altimeter waveforms: Application to ERS-2 around Australia,” J. Geophys. Res., vol. 111, pp. C06012, 2006. [11] J. Guo et al., “Lake level variations monitored with satellite altimetry waveform retracking,” IEEE J. Sel. Topics Appl. Earth Obs., vol. 2(2), pp. 80-86, 2009. [12] F. Mercier, V. Rosmorduc, L. Carrere and P. Thibaut, Coastal and Hydrology Altimetry product (PISTACH) handbook, Version 1.0, 2010. [13] Yu. Troitskaya et al., “Satellite altimetry of inland water bodies,” Water Resources, vol. 39(2), pp. 184-199, 2012. [14] Yu. Troitskaya et al., “Adaptive retracking of Jason-1 altimetry data for inland waters: the example of the Gorky Reservoir,” Int. J. Rem. Sens., vol. 33, pp. 7559-7578, 2012. [15] Yu. Troitskaya et al., “Adaptive retracking of Jason-1,2 satellite altimetry data for the Volga river reservoirs,” IEEE J. Sel. Topics Appl. Earth Obs., issue 99, 2013, doi: 10.1109/JSTARS.2013.2267092.
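
    As referenced in the entry above, the inverse step relies on an improved threshold retracker. The sketch below shows plain threshold retracking of a single waveform, with the threshold fraction, noise-gate count and synthetic leading edge chosen purely for illustration; it is not the paper's regional algorithm.

        # Threshold retracking of an altimeter waveform: find the gate where the
        # leading edge crosses a fixed fraction of the peak power above the noise
        # floor, interpolating between samples. Hypothetical sketch.
        import numpy as np

        def threshold_retrack(waveform, threshold=0.5, noise_gates=5):
            noise = waveform[:noise_gates].mean()            # thermal-noise estimate
            peak = waveform.max()
            level = noise + threshold * (peak - noise)       # retracking level
            above = np.nonzero(waveform >= level)[0]
            k = above[0]                                     # first gate above level
            if k == 0:
                return 0.0
            # Linear interpolation between gates k-1 and k gives a sub-gate position.
            frac = (level - waveform[k - 1]) / (waveform[k] - waveform[k - 1])
            return (k - 1) + frac                            # tracking point in gates

        # Synthetic Brown-like leading edge for illustration.
        gates = np.arange(104)
        wf = 1.0 / (1.0 + np.exp(-(gates - 40) / 2.0)) + 0.02
        print("retracked gate:", threshold_retrack(wf))

    The adaptive scheme of the paper adds the piecewise model of the reflecting surface and validity criteria on top of a retracking step of this kind.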

  3. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic, computer-based processing technique with no additional cost; it avoids extra field work and indoor processing and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.
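
    For readers unfamiliar with the NLCG machinery mentioned above, the sketch below shows a generic nonlinear conjugate-gradient (Polak-Ribière) model update on a toy quadratic objective. In the actual code the objective and gradient come from the 3D MT forward problem; everything shown here is an illustrative stand-in.

        # Nonlinear conjugate-gradient (Polak-Ribiere) update loop on a toy
        # quadratic objective. Hypothetical sketch of the NLCG idea only.
        import numpy as np

        rng = np.random.default_rng(1)
        A = np.diag(rng.uniform(1.0, 10.0, 20))   # toy SPD "Hessian"
        b = rng.normal(size=20)

        def objective(m):          # stand-in for data misfit + regularization
            return 0.5 * m @ A @ m - b @ m

        def gradient(m):
            return A @ m - b

        m = np.zeros(20)
        g = gradient(m)
        d = -g                                    # initial search direction
        for it in range(50):
            alpha, f0 = 1.0, objective(m)         # simple backtracking line search
            while objective(m + alpha * d) > f0 and alpha > 1e-12:
                alpha *= 0.5
            m = m + alpha * d
            g_new = gradient(m)
            if np.linalg.norm(g_new) < 1e-8:
                break
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+
            d = -g_new + beta * d
            g = g_new
        print("final misfit:", objective(m))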

  4. Seismotectonics at the junction of the Philippine Sea plate and the Eurasian plate, in light of the 1990 Hualien earthquake and the near-field waveform inversion

    NASA Astrophysics Data System (ADS)

    Cheng, Hou-Sheng; Mozziconacci, Laetitia; Chang, Emmy T. Y.; Huang, Bor-Shouh

    2016-04-01

    In eastern Taiwan, the Longitudinal Valley (LV) is the suture zone separating the Eurasian plate (EUP) to the west from the Philippine Sea plate (PSP) to the east. The northern tip of the LV (near Hualien city) is the junction where the collision evolves northward into a subduction of the PSP under the EUP. As a result, high seismic activity is observed. Based on the CWB (Central Weather Bureau, Taiwan) earthquake catalog, four distinct seismic clusters can be observed in this area since 1990. We restrict our effort to the cluster caused by a doublet of moderate-large earthquakes in 1990. The first shock of this doublet occurred on 13 December with ML 6.5; seventeen hours later and 15 km to the southeast, the second shock of ML 6.7 occurred. A campaign seismic network of 15 short-period stations, the Hualien Temporary Seismic Network (HTSN), was deployed for two months to detect the aftershocks of the doublet. By applying near-field waveform inversion to the HTSN records, we retrieve the focal mechanism solutions (FMS) of 50 aftershocks with local magnitudes ranging from 2.5 to 5.0. A modified version of the program "FMNEAR" is adopted in this study, which has proven efficient for retrieving FMS of small-to-moderate earthquakes with a limited number of stations. In practice, the near-field waveforms were band-pass filtered between 0.52 and 1.2 Hz. Synthetic waveforms are built with the discrete wavenumber method of Bouchon (1981). The inversion is performed by grid searches over the FMS parameters while the rake is inverted; the best result gives the lowest waveform misfit. The waveform fits are improved by depth optimization and a specific 1D velocity model for each station. Focal depths of events are on average 10 km deeper than the depths determined by the island-wide seismic stations, which suffer from the lack of stations to the east due to the ocean. The FMS of the 50 aftershocks can be classified into three groups according to their mechanisms and P- and T-axes. The three groups are distributed from north to south. The northern group is the largest and is located along the northern and middle parts of the northern segment of the LVF (NLVF); it is mainly reverse in type and displays homogeneous FMS. Our hypothesis is that the fault that generated the doublet is related to the structure activated by this first group. In the middle part, the second group is dominantly normal, while the last group spreads over the southern portion of the NLVF with more strike-slip events.

  5. An optimized, universal hardware-based adaptive correlation receiver architecture

    NASA Astrophysics Data System (ADS)

    Zhu, Zaidi; Suarez, Hernan; Zhang, Yan; Wang, Shang

    2014-05-01

    The traditional radar RF transceivers, similar to communication transceivers, have basic elements such as baseband waveform processing, IF/RF up-down conversion, transmitter power circuits, receiver front-ends, and antennas, which are shown in the upper half of Figure 1. For modern radars with diversified and sophisticated waveforms, we frequently observe that the transceiver behaviors, especially nonlinear behaviors, depend on the waveform amplitudes, frequency contents and instantaneous phases. It is usually a troublesome process to tune an RF transceiver to optimum when different waveforms are used. Another issue arises from the interference caused by the waveforms; for example, the range side-lobe (RSL) caused by the waveforms, once the signals pass through the entire transceiver chain, may be further increased by distortions. This study is inspired by two existing solutions from the commercial communication industry, digital pre-distortion (DPD) and adaptive channel estimation and interference mitigation (AIM), combining these technologies into a single chip or board that can be inserted into the existing transceiver system. This device is named the RF Transceiver Optimizer (RTO). The lower half of Figure 1 shows the basic elements of the RTO. With the RTO, the digital baseband processing does not need to take into account the transceiver performance with diversified waveforms, such as the transmitter efficiency and chain distortion (and the intermodulation products caused by distortions). Neither does it need to be concerned with the pulse compression (or correlation receiver) process and the related mitigation. The focus is simply the information about the ground truth carried by the main peak of the correlation receiver outputs. The RTO can be considered an extension of the existing calibration process, with the benefits of being automatic, adaptive and universal. Currently, the main technique to implement the RTO is digital pre- or post-distortion (DPD), and the main technique to implement the AIM is adaptive pulse compression (APC). The basic algorithms and experiments with DPD, which are the focus of this paper, are introduced. The discussion of AIM algorithms will be presented in other papers, while the initial implementation of AIM and the correlation receiver in FPGA devices is also introduced in this paper.
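
    The DPD algorithms of the paper are not reproduced in the abstract. As a generic illustration only, the sketch below fits a memoryless polynomial post-inverse of a toy amplifier by least squares (indirect learning) and reuses it as a pre-distorter; the amplifier model, polynomial order and signal statistics are assumptions.

        # Indirect-learning sketch of memoryless polynomial digital pre-distortion
        # (DPD). Hypothetical toy example, not the paper's implementation.
        import numpy as np

        rng = np.random.default_rng(7)

        def amplifier(x):
            """Toy memoryless nonlinearity (mild AM/AM compression)."""
            return x - 0.15 * x * np.abs(x) ** 2

        def poly_basis(x, order=5):
            # Odd-order terms x, x|x|^2, x|x|^4 are commonly used at complex baseband.
            return np.column_stack([x * np.abs(x) ** (k - 1) for k in range(1, order + 1, 2)])

        # "Measured" complex baseband input/output of the amplifier.
        x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) * 0.4
        y = amplifier(x)

        # Indirect learning: regress the amplifier input on its output.
        coeffs, *_ = np.linalg.lstsq(poly_basis(y), x, rcond=None)

        def predistort(signal):
            return poly_basis(signal) @ coeffs

        # Linearity check: amplifier(predistorted signal) should track the signal.
        test = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) * 0.4
        err = np.mean(np.abs(amplifier(predistort(test)) - test) ** 2)
        print("residual distortion power:", err)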

  6. Integration of ALS and TLS for calibration and validation of LAI profiles from large footprint lidar

    NASA Astrophysics Data System (ADS)

    Armston, J.; Tang, H.; Hancock, S.; Hofton, M. A.; Dubayah, R.; Duncanson, L.; Fatoyinbo, T. E.; Blair, J. B.; Disney, M.

    2016-12-01

    The Global Ecosystem Dynamics Investigation (GEDI) is designed to provide measurements of forest vertical structure and above-ground biomass density (AGBD) over tropical and temperate regions. GEDI is a multi-beam waveform lidar that will acquire transects of forest canopy vertical profiles in conditions of up to 99% canopy cover. These are used to produce a number of canopy height and profile metrics to model habitat suitability and AGBD. These metrics include vertical leaf area index (LAI) profiles, which require some pre-launch refinement of large-footprint waveform processing methods for separating canopy and ground returns and estimating their reflectance. Previous research on modelling canopy gap probability to derive canopy and ground reflectance from waveforms has primarily used data from small-footprint instruments; however, development of a generalized spatial model with uncertainty will be useful for interpreting and modelling waveforms from large-footprint instruments such as the NASA Land Vegetation and Ice Sensor (LVIS), with a view to implementation for GEDI. Here we present an analysis of waveform lidar data from LVIS, acquired in Gabon in February 2016 to support the NASA/ESA AfriSAR campaign. AfriSAR presents a unique opportunity to test refined methods for retrieval of LAI profiles in high above-ground biomass rainforests (up to 600 Mg/ha) with dense canopies (>90% cover), where the greatest uncertainty exists. Airborne laser scanning (ALS) and terrestrial laser scanning (TLS) data were also collected, enabling quantification of algorithm performance in plots of dense canopy cover. Refinement of canopy gap probability and LAI profile modelling from large-footprint lidar was based on solving for canopy and ground reflectance parameters spatially by penalized least-squares. The sensitivities of retrieved cover and LAI profiles to variation in canopy and ground reflectance showed improvement compared to assuming a constant ratio. We evaluated the use of spatially proximate simple waveforms to interpret more complex waveforms with poor separation of canopy and ground returns. This work has direct implications for GEDI algorithm refinement.

  7. Applying MDA to SDR for Space to Model Real-time Issues

    NASA Technical Reports Server (NTRS)

    Blaser, Tammy M.

    2007-01-01

    NASA space communications systems have the challenge of designing SDRs with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper will summarize our experiences with applying MDA to SDR for Space to model real-time issues. Real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked with the worst case environment conditions under the heaviest workload will drive the SDR for Space real-time PSM design.

  8. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive-definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve the large linear systems of normal equations generated in geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past, the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
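
    The idea of working block by block rather than holding the whole matrix can be illustrated with a recursive Schur-complement partitioned inverse; this is a generic sketch of the concept under assumed block sizes, not the SOLVE program's out-of-core implementation.

        # Recursive partitioned inversion of a symmetric positive-definite matrix
        # via the Schur complement, processing one leading block at a time.
        import numpy as np

        def partitioned_inverse(M, block=64):
            n = M.shape[0]
            if n <= block:
                return np.linalg.inv(M)
            a = block
            A, B, C = M[:a, :a], M[:a, a:], M[a:, a:]
            Ainv = np.linalg.inv(A)
            S = C - B.T @ Ainv @ B                  # Schur complement of A in M
            Sinv = partitioned_inverse(S, block)    # recurse on the trailing block
            TL = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv
            TR = -Ainv @ B @ Sinv
            return np.block([[TL, TR], [TR.T, Sinv]])

        # Small correctness check on a random SPD matrix.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 300))
        M = X @ X.T + 300 * np.eye(300)
        print(np.allclose(partitioned_inverse(M) @ M, np.eye(300), atol=1e-6))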

  9. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    NASA Astrophysics Data System (ADS)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.

  10. Black hole algorithm for determining model parameter in self-potential data

    NASA Astrophysics Data System (ADS)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is increasingly popular in geophysics due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm is constructed based on the black hole phenomenon. This paper investigates the application of BHA to invert field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty. This indicates that BHA has high potential as an innovative approach for SP data inversion.
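
    The black hole heuristic moves candidate solutions ("stars") toward the current best solution (the "black hole") and randomly re-seeds any star that falls inside an event-horizon radius. The sketch below illustrates that loop on a placeholder misfit; the SP forward model, bounds and settings are assumptions, not the paper's configuration.

        # Black hole algorithm (BHA) sketch for minimizing a misfit function.
        import numpy as np

        rng = np.random.default_rng(3)

        def misfit(m):                      # stand-in for the SP data misfit
            return np.sum((m - np.array([2.0, -1.0, 0.5])) ** 2)

        n_stars, n_dim, n_iter = 30, 3, 200
        lo, hi = -5.0, 5.0
        stars = rng.uniform(lo, hi, size=(n_stars, n_dim))

        for _ in range(n_iter):
            fit = np.array([misfit(s) for s in stars])
            bh = stars[np.argmin(fit)].copy()            # best star becomes the black hole
            # Move every star toward the black hole.
            stars += rng.uniform(size=(n_stars, 1)) * (bh - stars)
            # Event-horizon radius: best fitness over the sum of all fitnesses.
            radius = misfit(bh) / (np.sum(fit) + 1e-12)
            absorbed = np.linalg.norm(stars - bh, axis=1) < radius
            # Absorbed stars are replaced by new random stars (keeps exploration alive).
            stars[absorbed] = rng.uniform(lo, hi, size=(absorbed.sum(), n_dim))
            stars[0] = bh                                # keep the black hole in the population

        print("best model:", bh, "misfit:", misfit(bh))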

  11. Algorithms to qualify respiratory data collected during the transport of trauma patients.

    PubMed

    Chen, Liangyou; McKenna, Thomas; Reisner, Andrew; Reifman, Jaques

    2006-09-01

    We developed a quality indexing system to numerically qualify respiratory data collected by vital-sign monitors in order to support reliable post-hoc mining of respiratory data. Each monitor-provided (reference) respiratory rate (RR_R) is evaluated, second-by-second, to quantify the reliability of the rate with a quality index (QI_R). The quality index is calculated from: (1) a breath identification algorithm that identifies breaths of 'typical' sizes and recalculates the respiratory rate (RR_C); (2) an evaluation of the respiratory waveform quality (QI_W) by assessing waveform ambiguities as they impact the calculation of respiratory rates; and (3) decision rules that assign a QI_R based on RR_R, RR_C and QI_W. RR_C, QI_W and QI_R were compared to rates and quality indices independently determined by human experts, with the human measures used as the 'gold standard', for 163 randomly chosen 15 s respiratory waveform samples from our database. The RR_C more closely matches the rates determined by human evaluation of the waveforms than does the RR_R (difference of 3.2 ± 4.6 breaths min^-1 versus 14.3 ± 19.3 breaths min^-1, mean ± STD, p < 0.05). Higher QI_W is found to be associated with smaller differences between calculated and human-evaluated rates (average differences of 1.7 and 8.1 breaths min^-1 for the best and worst QI_W, respectively). Establishment of QI_W and QI_R, which range from 0 for the worst-quality data to 3 for the best, provides a succinct quantitative measure that allows for automatic and systematic selection of respiratory waveforms and rates based on their data quality.
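
    The structure of the decision rules can be illustrated with a toy function that combines the reference rate, the recalculated rate and the waveform-quality score into a 0-3 index. The thresholds below are invented for illustration only and are not the rules of the paper.

        # Toy quality-index assignment in the spirit of the decision rules above.
        def respiratory_quality_index(rr_reference, rr_calculated, qi_waveform):
            """Return QI_R in 0 (worst) .. 3 (best). Thresholds are hypothetical."""
            disagreement = abs(rr_reference - rr_calculated)   # breaths per minute
            if qi_waveform == 0 or disagreement > 10:
                return 0
            if qi_waveform == 1 or disagreement > 5:
                return 1
            if qi_waveform == 2 or disagreement > 2:
                return 2
            return 3

        print(respiratory_quality_index(18, 17, 3))   # -> 3 (clean, consistent data)
        print(respiratory_quality_index(30, 16, 2))   # -> 0 (large disagreement)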

  12. Waveform Fingerprinting for Efficient Seismic Signal Detection

    NASA Astrophysics Data System (ADS)

    Yoon, C. E.; OReilly, O. J.; Beroza, G. C.

    2013-12-01

    Cross-correlating an earthquake waveform template with continuous waveform data has proven a powerful approach for detecting events missing from earthquake catalogs. If templates do not exist, it is possible to divide the waveform data into short overlapping time windows, then identify window pairs with similar waveforms. Applying these approaches to earthquake monitoring in seismic networks has tremendous potential to improve the completeness of earthquake catalogs, but because effort scales quadratically with time, it rapidly becomes computationally infeasible. We develop a fingerprinting technique to identify similar waveforms, using only a few compact features of the original data. The concept is similar to human fingerprints, which utilize key diagnostic features to identify people uniquely. Analogous audio-fingerprinting approaches have accurately and efficiently found similar audio clips within large databases; example applications include identifying songs and finding copyrighted content within YouTube videos. In order to fingerprint waveforms, we compute a spectrogram of the time series, and segment it into multiple overlapping windows (spectral images). For each spectral image, we apply a wavelet transform, and retain only the sign of the maximum magnitude wavelet coefficients. This procedure retains just the large-scale structure of the data, providing both robustness to noise and significant dimensionality reduction. Each fingerprint is a high-dimensional, sparse, binary data object that can be stored in a database without significant storage costs. Similar fingerprints within the database are efficiently searched using locality-sensitive hashing. We test this technique on waveform data from the Northern California Seismic Network that contains events not detected in the catalog. We show that this algorithm successfully identifies similar waveforms and detects uncataloged low magnitude events in addition to cataloged events, while running to completion faster than a comparison waveform autocorrelation code.
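
    A minimal sketch of the fingerprint construction described above is shown below: spectrogram, overlapping spectral images, a wavelet transform, and retention of only the signs of the largest-magnitude coefficients. The window sizes, one-level Haar transform and top-k value are simplifying assumptions; the published workflow also uses locality-sensitive hashing to search the fingerprint database, which is omitted here.

        # Waveform fingerprinting sketch: spectrogram -> overlapping spectral
        # images -> one-level 2D Haar transform -> signs of top-k coefficients.
        import numpy as np
        from scipy.signal import spectrogram

        def haar2d(img):
            """One-level 2D Haar wavelet transform (rows then columns)."""
            def haar1d(x):
                even, odd = x[..., ::2], x[..., 1::2]
                return np.concatenate([(even + odd) / 2.0, (even - odd) / 2.0], axis=-1)
            return haar1d(haar1d(img).swapaxes(-1, -2)).swapaxes(-1, -2)

        def fingerprint(trace, fs, win=64, top_k=100):
            f, t, S = spectrogram(trace, fs=fs, nperseg=128, noverlap=96)
            prints = []
            for start in range(0, S.shape[1] - win + 1, win // 2):   # overlapping spectral images
                coeffs = haar2d(S[: S.shape[0] // 2 * 2, start:start + win])
                flat = coeffs.ravel()
                keep = np.argsort(np.abs(flat))[-top_k:]              # largest-magnitude coefficients
                fp = np.zeros(flat.size, dtype=np.int8)
                fp[keep] = np.sign(flat[keep])                        # sparse sign-only fingerprint
                prints.append(fp)
            return np.array(prints)

        fs = 100.0
        trace = np.random.default_rng(4).normal(size=6000)            # stand-in for continuous data
        print(fingerprint(trace, fs).shape)

    Because each fingerprint keeps only signs of a few coefficients, it is compact and robust to noise, which is what makes the subsequent hashing-based search efficient.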

  13. 3-D CSEM data inversion algorithm based on simultaneously active multiple transmitters concept

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin Kumar; Israil, Mohammad

    2017-05-01

    We present an algorithm for efficient 3-D inversion of marine controlled-source electromagnetic data. The efficiency is achieved by exploiting the redundancy in the data. The data redundancy is reduced by compressing the data through stacking the responses of transmitters that are in close proximity. This stacking is equivalent to synthesizing the data as if the multiple transmitters were simultaneously active. The redundancy in the data, arising from close transmitter spacing, has been studied through singular value analysis of the Jacobian formed in 1-D inversion. This study reveals that the transmitter spacing of 100 m typically used in marine data acquisition does result in redundancy in the data. In the proposed algorithm, the data are compressed through stacking, which leads to both a computational advantage and a reduction in noise. The performance of the algorithm for noisy data is demonstrated through studies of two types of noise, viz., uncorrelated additive noise and correlated non-additive noise. It is observed that in the case of uncorrelated additive noise, up to a moderately high (10 percent) noise level the algorithm addresses the noise as effectively as traditional full-data inversion. However, when the noise level in the data is high (20 percent), the algorithm outperforms traditional full-data inversion in terms of data misfit. Similar results are obtained in the case of correlated non-additive noise, and the algorithm performs better if the level of noise is high. The inversion results of a real field data set are also presented to demonstrate the robustness of the algorithm. The significant computational advantage in all cases presented makes this algorithm a better choice.

  14. Uncertainties in the 2004 Sumatra–Andaman source through nonlinear stochastic inversion of tsunami waves

    PubMed Central

    Venugopal, M.; Roy, D.; Rajendran, K.; Guillas, S.; Dias, F.

    2017-01-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra–Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems. PMID:28989311

  15. Uncertainties in the 2004 Sumatra-Andaman source through nonlinear stochastic inversion of tsunami waves.

    PubMed

    Gopinathan, D; Venugopal, M; Roy, D; Rajendran, K; Guillas, S; Dias, F

    2017-09-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra-Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems.

  16. Seismic Structure of the Antarctic Upper Mantle and Transition Zone Unearthed by Full Waveform Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Lloyd, A. J.; Wiens, D.; Zhu, H.; Tromp, J.; Nyblade, A.; Anandakrishnan, S.; Aster, R. C.; Huerta, A. D.; Winberry, J. P.; Wilson, T. J.; Dalziel, I. W. D.; Hansen, S. E.; Shore, P.

    2017-12-01

    The upper mantle and transition zone beneath Antarctica and the surrounding ocean are among the poorest seismically imaged regions of the Earth's interior. Over the last 1.5 decades researchers have deployed several large temporary broadband seismic arrays focusing on major tectonic features in the Antarctic. The broader international community has also facilitated further instrumentation of the continent, often operating stations in additional regions. As of 2016, waveforms are available from almost 300 unique station locations. Using these stations along with 26 southern mid-latitude seismic stations we have imaged the seismic structure of the upper mantle and transition zone using full waveform adjoint techniques. The full waveform adjoint inversion assimilates phase observations from 3-component seismograms containing P, S, Rayleigh, and Love waves, including reflections and overtones, from 270 earthquakes (5.5 ≤ Mw ≤ 7.0) that occurred between 2001-2003 and 2007-2016. We present the major results of the full waveform adjoint inversion following 20 iterations, resulting in a continental-scale seismic model (ANT_20) with regional-scale resolution. Within East Antarctica, ANT_20 reveals internal seismic heterogeneity and differences in lithospheric thickness. For example, fast seismic velocities extending to 200-300 km depth are imaged beneath both Wilkes Land and the Gamburtsev Subglacial Mountains, whereas fast velocities only extend to 100-200 km depth beneath the Lambert Graben and Enderby Land. Furthermore, fast velocities are not found beneath portions of Dronning Maud Land, suggesting old cratonic lithosphere may be absent. Beneath West Antarctica slow upper mantle seismic velocities are imaged extending from the Balleny Island southward along the Transantarctic Mountains front, and broaden beneath the southern and northern portion of the mountain range. In addition, slow upper mantle velocities are imaged beneath the West Antarctic coast extending from Marie Byrd Land to the Antarctic Peninsula. This region of slow velocity only extends to 150-200 km depth beneath the Antarctic Peninsula, while elsewhere it extends to deeper upper mantle depths and possibly into the transition zone as well as offshore, suggesting two different geodynamic processes are at play.

  17. Probabilistic topographic maps from raw, full-waveform airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.; Gonçalves, G. R.

    2011-12-01

    The main goal of the AutoProbaDTM project is to derive new methodologies to measure topography and terrain characteristics using the latest full-waveform airborne LiDAR technology. It includes algorithmic development, implementation, and validation over a large test area. In the long run, we wish to develop techniques that are scalable and applicable to future satellite missions such as LIST (NASA Decadal Survey), to help perform efficient and accurate large-scale mapping. One of the biggest challenges is to develop fast ways to process huge volumes of raw data without compromising the accuracy and the physical consistency of the result. Over the past decades, significant progress has been made in digital elevation model (DEM) extraction and user interaction has been much reduced; however, most algorithms are still supervised. Topographic surveys currently play a central role in sensor calibration, and full automation is still an unsolved problem. Moreover, very few existing methods are currently able to provide a quantitative error map with the reconstructed DEM. Traditional validation and quality control only allow checking the discrepancy between the product and a set of reference points, lacking the ability to predict the actual uncertainty of elevations at chosen locations. We plan to provide fast and automated techniques to derive topographic maps and to compute error maps as well, based on a probabilistic approach to modeling terrains and data acquisition, solving inverse problems and handling uncertainty. Bayesian inference provides a rigorous framework for model reconstruction and error propagation, treating all quantities as random and combining sources of information optimally. In the future, the uncertainty maps shall help scientists put error bars on quantities derived from the models. In June 2011, 200 km^2 of data were acquired (100 GB of binary files, half a billion waveforms) in central Portugal, over an area of geomorphological and ecological interest, using a Riegl LMS-Q680i sensor. We managed to survey 140 km^2 at a satisfactory sampling rate, the angular spacing matching the laser beam divergence and the ground spacing nearly equal to the footprint (almost 4 pts/m^2 for a 50 cm footprint at 1500 m AGL). We believe this is crucial for correct processing, as aliasing artifacts are significantly reduced compared to common practice where the spacing is larger than the footprint size. Reverse engineering was required because the data were delivered in a proprietary, undocumented binary format; we were nonetheless able to read the waveforms and the essential timing and look-angle parameters. An instrument model was developed to account for the overall impulse response and noise properties. The instrument was operated in a low signal-to-noise-ratio regime to minimize the cost per km^2, and the limits of state-of-the-art processing methods are reached, hence the need for algorithmic development to achieve both a higher detection rate and improved robustness. We will present the latest results from the first stage of the project: a large DEM of the bare-ground topography for the entire study area, including an error estimate for each point, the major novelty being the spatial variability of uncertainty.

  18. Effects of 3D Earth structure on W-phase CMT parameters

    NASA Astrophysics Data System (ADS)

    Morales, Catalina; Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo

    2017-04-01

    The source inversion of the W-phase has demonstrated great potential to provide fast and reliable estimates of the centroid moment tensor (CMT) for moderate to large earthquakes. It has since been implemented in different operational environments (NEIC-USGS, PTWC, etc.) with the aim of providing rapid CMT solutions. These solutions are particularly useful for tsunami warning purposes. Computationally, W-phase waveforms are usually synthesized by summation of normal modes at long period (100 - 1000 s) for a spherical Earth model (e.g., PREM). Although the energy of these modes mainly stays in the mantle, where lateral structural variations are relatively small, the impact of 3D heterogeneities on W-phase solutions has not yet been quantified. In this study, we investigate possible bias in W-phase source parameters due to unmodeled lateral structural heterogeneities. We generate a simulated dataset consisting of synthetic seismograms of large past earthquakes that account for the Earth's 3D structure. The W-phase algorithm is then used to invert the synthetic dataset for earthquake CMT parameters with and without added noise. Results show that the impact of 3D heterogeneities is generally larger for surface waves than for W-phase waveforms. However, some discrepancies are noted between inverted W-phase parameters and target values. Particular attention is paid to the possible bias induced by the unmodeled 3D structure on the location of the W-phase centroid. Preliminary results indicate that the parameter most susceptible to 3D Earth structure seems to be the centroid depth.

  19. 3-D acoustic waveform simulation and inversion at Yasur Volcano, Vanuatu

    NASA Astrophysics Data System (ADS)

    Iezzi, A. M.; Fee, D.; Matoza, R. S.; Austin, A.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Kennedy, B.; Fitzgerald, R.; Key, N.

    2016-12-01

    Acoustic waveform inversion shows promise for improved eruption characterization that may inform volcano monitoring. Well-constrained inversion can provide robust estimates of volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real-time). Previous studies have made assumptions about the multipole source mechanism, which can be thought of as the combination of pressure fluctuations from a volume change, directionality, and turbulence. This infrasound source could not be well constrained up to this time due to infrasound sensors only being deployed on Earth's surface, so the assumption of no vertical dipole component has been made. In this study we deploy a high-density seismo-acoustic network, including multiple acoustic sensors along a tethered balloon around Yasur Volcano, Vanuatu. Yasur has frequent strombolian eruptions from any one of its three active vents within a 400 m diameter crater. The third dimension (vertical) of pressure sensor coverage allows us to begin to constrain the acoustic source components in a profound way, primarily the horizontal and vertical components and their previously uncharted contributions to volcano infrasound. The deployment also has a geochemical and visual component, including FLIR, FTIR, two scanning FLYSPECs, and a variety of visual imagery. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a digital elevation model created using structure-from-motion techniques. We then invert for the source location and source-time function, constraining the contribution of the vertical sound radiation to the source. The final outcome of this inversion is an infrasound-derived volume flux as a function of time, which we then compare to those derived independently from geochemical techniques as well as the inversion of seismic data. Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249

  20. One dimensional models of temperature and composition in the transition zone from a bayesian inversion of surface waves

    NASA Astrophysics Data System (ADS)

    Drilleau, M.; Beucler, E.; Mocquet, A.; Verhoeven, O.; Burgos, G.; Capdeville, Y.; Montagner, J.

    2011-12-01

    The transition zone plays a key role in the dynamics of the Earth's mantle, especially for exchanges between the upper and lower mantle. Phase transitions, convective motions, and hot upwelling and/or cold downwelling materials may make the 400 to 1000 km depth range very anisotropic and heterogeneous, both thermally and chemically. A classical procedure to infer the thermal state and composition is to interpret 3D velocity perturbation models in terms of temperature and mineralogical composition with respect to a global 1D model. However, the strength of heterogeneity and anisotropy can be so high that the concept of a one-dimensional reference seismic model may be questionable for this depth range. Some recent studies prefer to directly invert seismic travel times and normal-mode catalogues in terms of temperature and composition. A Bayesian approach makes it possible to go beyond the classical computation of the best-fit model by providing a quantitative measure of model uncertainty. We implement a nonlinear inverse approach (Markov chain Monte Carlo) to interpret seismic data in terms of temperature, anisotropy and composition. Two different data sets are used and compared: surface-wave waveforms and phase velocities (fundamental mode and the first overtones). A guideline of this method is to let the resolution power of the data govern the spatial resolution of the model. Up to now, the model parameters are the temperature field and the mineralogical composition; other important effects, such as macroscopic anisotropy, will be taken into account in the near future. In order to reduce the computing time of the Monte Carlo procedure, polynomial Bézier curves are used for the parameterization. This choice allows for smoothly varying models and first-order discontinuities. Our Bayesian algorithm is tested with standard circular synthetic experiments and with more realistic simulations including 3D wave propagation effects (SEM). The test results demonstrate the ability of this approach to match three-component waveforms and address the question of the mean radial interpretation of a 3D model. The method is also tested using real datasets, such as along the Vanuatu-California path.
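
    The core of any Markov chain Monte Carlo inversion is a random-walk sampler with an acceptance rule. The sketch below shows a minimal Metropolis-Hastings loop on a two-parameter toy problem; in the actual inversion the likelihood would evaluate surface-wave misfits for Bézier-parameterized temperature and composition profiles, and all settings here are illustrative assumptions.

        # Minimal Metropolis-Hastings sampler (random-walk MCMC) on a toy problem.
        import numpy as np

        rng = np.random.default_rng(5)
        m_true = np.array([1.0, -0.5])
        data = m_true + rng.normal(scale=0.1, size=2)      # noisy synthetic "observations"

        def log_likelihood(m, sigma=0.1):
            return -0.5 * np.sum((m - data) ** 2) / sigma ** 2

        n_samples, step = 20000, 0.15
        chain = np.empty((n_samples, 2))
        m = np.zeros(2)
        logL = log_likelihood(m)
        accepted = 0
        for i in range(n_samples):
            proposal = m + rng.normal(scale=step, size=2)  # random-walk proposal
            logL_prop = log_likelihood(proposal)
            if np.log(rng.uniform()) < logL_prop - logL:   # Metropolis acceptance rule
                m, logL = proposal, logL_prop
                accepted += 1
            chain[i] = m

        burn = n_samples // 4
        print("posterior mean:", chain[burn:].mean(axis=0))
        print("acceptance rate:", accepted / n_samples)

    The ensemble of retained samples, rather than a single best-fit model, is what provides the quantitative measure of model uncertainty mentioned above.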

  1. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...

  2. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it originates from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt (LM) algorithm, is an approximation to the Newton method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present the application of the Levenberg-Marquardt algorithm and a particle swarm algorithm to solving the inverse problem of a fault. Most importantly, the parameters for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement was found between the predicted model anomaly and the observed gravity anomaly with the PSO method than with the LM method.
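
    The PSO update itself is compact enough to show in full. The sketch below fits three parameters of a generic anomaly to synthetic observations; the forward model is a placeholder (not the fault gravity formula of the paper), and the inertia and acceleration constants are common textbook values rather than the values used in the study.

        # Particle swarm optimization (PSO) sketch for fitting model parameters
        # to an observed anomaly. Hypothetical forward model and settings.
        import numpy as np

        rng = np.random.default_rng(6)
        profile = np.linspace(-10, 10, 41)

        def forward(m):                       # placeholder anomaly: amplitude, center, width
            return np.array([m[0] * np.exp(-((x - m[1]) / m[2]) ** 2) for x in profile])

        observed = forward([5.0, 1.0, 3.0]) + rng.normal(scale=0.05, size=41)

        def misfit(m):
            return np.sum((forward(m) - observed) ** 2)

        n_particles, n_iter = 30, 100
        lo, hi = np.array([0.1, -10, 0.5]), np.array([10, 10, 10])
        pos = rng.uniform(lo, hi, size=(n_particles, 3))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([misfit(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()

        w, c1, c2 = 0.72, 1.49, 1.49          # inertia and acceleration coefficients
        for _ in range(n_iter):
            r1, r2 = rng.uniform(size=(2, n_particles, 3))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([misfit(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("recovered parameters:", gbest)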

  3. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward and inversion algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data from the entire region (near, transition, and far field) and deal with the effects of artificial sources. First, a regularization factor is introduced into the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, mutual influence between the two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than single-method inversion. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
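
    The cross-gradient quantity that couples the two models is simply the cross product of their spatial gradients; driving it toward zero encourages structural similarity without forcing the two properties to be correlated in value. A minimal computation on a 2D grid, assuming uniform grid spacing, is sketched below.

        # Cross-gradient of two 2D model grids: t = grad(m1) x grad(m2).
        import numpy as np

        def cross_gradient(m1, m2, dy=1.0, dx=1.0):
            g1y, g1x = np.gradient(m1, dy, dx)
            g2y, g2x = np.gradient(m2, dy, dx)
            # Out-of-plane component of the cross product of the two gradient fields.
            return g1x * g2y - g1y * g2x

        # Identical structure (one model a scaled copy of the other) -> zero cross-gradient.
        y, x = np.mgrid[0:50, 0:60]
        resistivity = np.where((x - 30) ** 2 + (y - 25) ** 2 < 100, 10.0, 100.0)
        susceptibility = 0.02 * resistivity
        print(np.abs(cross_gradient(resistivity, susceptibility)).max())   # ~0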

  4. A novel PMT test system based on waveform sampling

    NASA Astrophysics Data System (ADS)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, TDC and scaler, a test system based on waveform sampling is constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states, from single to multiple photoelectrons. In order to achieve high throughput and reduce the dead time in data processing, data acquisition software based on LabVIEW is developed and runs with a parallel mechanism. The analysis algorithm is implemented in LabVIEW, and the spectra of charge, amplitude, signal width and rise time are analyzed offline. The results from the charge-to-digital converter, time-to-digital converter and waveform sampling are compared in detail.

  5. Pickless event detection and location: The waveform correlation event detection system (WCEDS) revisited

    DOE PAGES

    Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; ...

    2016-01-01

    The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.

  6. Rupture history of the 1997 Cariaco, Venezuela, earthquake from teleseismic P waves

    USGS Publications Warehouse

    Mendoza, C.

    2000-01-01

    A two-step finite-fault waveform inversion scheme is applied to the broadband teleseismic P waves recorded for the strike-slip Cariaco, Venezuela, earthquake of 9 July 1997 to recover the distribution of mainshock slip. The earthquake is first analyzed using a long narrow fault with a maximum rise time of 20 sec. This line-source analysis indicates that slip propagated to the west with a constant rupture velocity and a relatively short rise time. The results are then used to constrain a second inversion of the P waveforms using a 60-km by 20-km two-dimensional fault. The rupture shows a zone of large slip (1.3-m peak) near the hypocenter and a second, broader source extending updip and to the west at depths shallower than 5 km. The second source has a peak slip of 2.1 meters and accounts for most of the moment of 1.1 × 10^26 dyne-cm (Mw 6.6) estimated from the P waves. The inferred rupture pattern is consistent with macroseismic effects observed in the epicentral area.

  7. Accumulated energy norm for full waveform inversion of marine data

    NASA Astrophysics Data System (ADS)

    Shin, Changsoo; Ha, Wansoo

    2017-12-01

    Macro-velocity models are important for imaging the subsurface structure. However, the conventional objective functions of full waveform inversion in the time and the frequency domain have a limited ability to recover the macro-velocity model because of the absence of low-frequency information. In this study, we propose new objective functions that can recover the macro-velocity model by minimizing the difference between the zero-frequency components of the square of seismic traces. Instead of the seismic trace itself, we use the square of the trace, which contains low-frequency information. We apply several time windows to the trace and obtain zero-frequency information of the squared trace for each time window. The shape of the new objective functions shows that they are suitable for local optimization methods. Since we use the acoustic wave equation in this study, this method can be used for deep-sea marine data, in which elastic effects can be ignored. We show that the zero-frequency components of the square of the seismic traces can be used to recover macro-velocities from synthetic and field data.
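
    The key quantity in the objective described above is the zero-frequency (DC) component of the squared trace over a set of time windows, i.e. the accumulated energy up to each window end. The sketch below evaluates such a misfit between an observed and a modeled trace; the window lengths and the simple L2 misfit form are illustrative assumptions.

        # Accumulated-energy style objective: compare the zero-frequency component
        # (the sum) of the squared observed and modeled traces over several windows.
        import numpy as np

        def accumulated_energy(trace, window_ends):
            """Zero-frequency component of trace**2 over [0, t_end] for each window."""
            energy = np.cumsum(trace ** 2)
            return energy[window_ends]

        def objective(observed, modeled, window_ends):
            e_obs = accumulated_energy(observed, window_ends)
            e_mod = accumulated_energy(modeled, window_ends)
            return 0.5 * np.sum((e_mod - e_obs) ** 2)

        t = np.linspace(0, 4, 2001)
        observed = np.sin(2 * np.pi * 8 * (t - 1.0)) * np.exp(-((t - 1.0) / 0.2) ** 2)
        modeled = np.sin(2 * np.pi * 8 * (t - 1.2)) * np.exp(-((t - 1.2) / 0.2) ** 2)
        windows = np.arange(200, 2001, 200)          # several nested time windows
        print("misfit:", objective(observed, modeled, windows))

    Because the squared trace carries energy near zero frequency even when the data themselves lack low frequencies, this misfit remains sensitive to macro-velocity (kinematic) errors such as the time shift in the example.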

  8. Noise suppression in surface microseismic data

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Batzle, Mike; Behura, Jyoti; Willis, Mark; Haines, Seth S.; Davidson, Michael

    2012-01-01

    We introduce a passive noise suppression technique based on the τ − p transform. In the τ − p domain, one can separate microseismic events from surface noise based on distinct characteristics that are not visible in the time-offset domain. By applying the inverse τ − p transform to the separated microseismic event, we suppress the surface noise in the data. Our technique significantly improves the signal-to-noise ratios of the microseismic events and is superior to existing techniques for passive noise suppression in the sense that it preserves the waveform.

  9. The tsunami source area of the 2003 Tokachi-oki earthquake estimated from tsunami travel times and its relationship to the 1952 Tokachi-oki earthquake

    USGS Publications Warehouse

    Hirata, K.; Tanioka, Y.; Satake, K.; Yamaki, S.; Geist, E.L.

    2004-01-01

    We estimate the tsunami source area of the 2003 Tokachi-oki earthquake (Mw 8.0) from observed tsunami travel times at 17 Japanese tide gauge stations. The estimated tsunami source area (∼1.4 × 10^4 km^2) coincides with the western half of the ocean-bottom deformation area (∼2.52 × 10^4 km^2) of the 1952 Tokachi-oki earthquake (Mw 8.1), previously inferred from tsunami waveform inversion. This suggests that the 2003 event ruptured only the western half of the 1952 rupture extent. Geographical distribution of the maximum tsunami heights in 2003 differs significantly from that of the 1952 tsunami, supporting this hypothesis. Analysis of first-peak tsunami travel times indicates that a major uplift of the ocean bottom occurred approximately 30 km to the NNW of the mainshock epicenter, just above a major asperity inferred from seismic waveform inversion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.

  10. Time-domain wavefield reconstruction inversion

    NASA Astrophysics Data System (ADS)

    Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan

    2017-12-01

    Wavefield reconstruction inversion (WRI) is an improvement on full waveform inversion theory that has been proposed in recent years. The WRI method expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update the model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high computational memory demand and its requirement of a time-frequency transformation with additional computational cost. In this paper, wavefield reconstruction inversion theory is extended to the time domain, the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to numerical tests with anomalies. Examples with synthetic data illustrate the accuracy of time-domain WRI and its low dependency on low-frequency information.
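
    As a point of reference (not taken from the paper, which derives the time-domain analogue), the frequency-domain WRI objective is commonly written in penalty form as

        \min_{\mathbf{m},\,\mathbf{u}} \;
        \tfrac{1}{2}\,\lVert \mathbf{P}\mathbf{u} - \mathbf{d} \rVert_2^2
        \;+\; \tfrac{\lambda^{2}}{2}\,\lVert \mathbf{A}(\mathbf{m})\mathbf{u} - \mathbf{q} \rVert_2^2

    where, under the assumed notation, P samples the reconstructed wavefield u at the receivers, d is the observed data, A(m) is the discretized wave operator for model m, q is the source term, and λ weights the wave-equation penalty. Relaxing the wave equation in this way is what enlarges the search space relative to conventional full waveform inversion.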

  11. A Report Of The December 6, 2016 Mw 6.5 Pidie Jaya, Aceh Earthquake

    NASA Astrophysics Data System (ADS)

    Muzli, M.; Daniarsyad, G.; Nugraha, A. D.; Muksin, U.; Widiyantoro, S.; Bradley, K.; Wang, T.; Jousset, P. G.; Erbas, K.; Nurdin, I.; Wei, S.

    2017-12-01

    The December 6, 2016 Mw 6.5 earthquake in Pidie Jaya, Aceh was one of the most devastating inland earthquakes in Sumatra, claiming more than 100 lives. Here we present our seismological analysis of the earthquake sequence. Focal mechanism inversions using regional BMKG broadband data and teleseismic waveform data consistently indicate a strike-slip mechanism with a centroid depth of 15 km. A preliminary finite fault inversion using teleseismic body waves prefers the fault plane with a strike of 45 degrees and a dip of 50 degrees, in agreement with the surface geology and the USGS aftershock distribution. Nine broadband seismic stations were installed in the source region along the coast one week after the earthquake and collected data for one month. These data have been used to locate aftershocks with grid-search and double-difference algorithms, revealing an alignment of seismicity in the NE-SW direction that agrees with the fault inversion and geology results. Using an M4.0 calibration earthquake recorded by the temporary network, we relocated the mainshock epicenter, which is also consistent with the fault geometry defined by the well-located aftershocks. In addition, a portion of the seismicity shows a lineation in the E-W direction, indicating a secondary fault that had not been identified before. Aftershock focal mechanisms determined from first motions are similar to that of the mainshock. The observed macroseismic intensity data show that most of the damaged buildings are distributed along the coast, approximately perpendicular to the preferred fault strike rather than parallel to it. The distribution of damage appears to be strongly controlled by site conditions, since the regions of strong shaking and damage are mainly located on coastal sedimentary soils.

  12. Earthquake source tensor inversion with the gCAP method and 3D Green's functions

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.

    2013-12-01

    We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCap) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a grid of 1 km³ using the 3-D community velocity model CVM-4 (Kohler et al., 2003). A bootstrap technique is adopted to establish the robustness of the gCap inversion results (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate the source properties of the March 11, 2013, Mw 4.7 earthquake on the San Jacinto fault using recordings of ~45 stations at frequencies up to ~0.2 Hz. Both the best-fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is a non-negligible positive value that may have significant implications for the physics of the failure process. Work on using higher-frequency data for this and other earthquakes is in progress.
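
    The bootstrap assessment can be sketched as follows; one common variant resamples stations with replacement and repeats the inversion, and `station_data` and `invert_fn` below are hypothetical placeholders rather than the gCap interface.

```python
import numpy as np

def bootstrap_source_solutions(station_data, invert_fn, n_boot=200, seed=0):
    """Station-resampling bootstrap for a source-tensor inversion.

    `station_data` is a list of per-station records and `invert_fn` maps a
    list of stations to a solution vector (e.g. six tensor components); both
    are placeholders for whatever inversion routine is actually used.
    """
    rng = np.random.default_rng(seed)
    n = len(station_data)
    solutions = []
    for _ in range(n_boot):
        resampled = [station_data[i] for i in rng.integers(0, n, size=n)]
        solutions.append(invert_fn(resampled))
    solutions = np.asarray(solutions)
    # The spread across resampled solutions indicates the robustness of each component.
    return solutions.mean(axis=0), solutions.std(axis=0)
```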

  13. Point-source inversion techniques

    NASA Astrophysics Data System (ADS)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
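
    For a point source with known Green's functions, the linearized step these techniques rely on reduces to an overdetermined linear system d = G m for the six independent moment-tensor components. The sketch below uses random placeholders for G and d (not the study's Green's functions or data) simply to show the shape of the problem.

```python
import numpy as np

# Linearized point-source inversion: waveform samples d are a linear
# combination of Green's-function kernels G, one column per independent
# moment-tensor component. G, m_true and the noise are synthetic placeholders.
rng = np.random.default_rng(1)
n_samples, n_components = 500, 6
G = rng.normal(size=(n_samples, n_components))
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])    # hypothetical tensor
d = G @ m_true + 0.05 * rng.normal(size=n_samples)

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)           # least-squares solution
print("recovered moment-tensor components:", np.round(m_est, 2))
```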

  14. Rapid finite-fault inversions in Southern California using Cybershake Green's functions

    NASA Astrophysics Data System (ADS)

    Thio, H. K.; Polet, J.

    2017-12-01

    We have developed a system for rapid finite fault inversion for intermediate and large Southern California earthquakes using local, regional and teleseismic seismic waveforms as well as geodetic data. For modeling the local seismic data, we use 3D Green's functions from the Cybershake project, which were made available to us courtesy of the Southern California Earthquake Center (SCEC). The use of 3D Green's functions allows us to extend the inversion to higher-frequency waveform data and smaller-magnitude earthquakes, in addition to achieving improved solutions in general. The ultimate aim of this work is to develop the ability to provide high-quality finite fault models within a few hours after any damaging earthquake in Southern California, so that they may be used as input to various post-earthquake assessment tools such as ShakeMap, as well as by the scientific community and other interested parties. Additionally, a systematic determination of finite fault models has value as a resource for scientific studies on detailed earthquake processes, such as rupture dynamics and scaling relations. We are using an established least-squares finite fault inversion method that has been applied extensively to both large and smaller regional earthquakes, in conjunction with the 3D Green's functions, where available, as well as 1D Green's functions for areas for which the Cybershake library has not yet been developed. We are carrying out validation and calibration of this system using significant earthquakes that have occurred in the region over the last two decades, spanning a range of locations and magnitudes (5.4 and higher).
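
    A least-squares finite-fault inversion of this kind can be sketched as fitting d ≈ G s for subfault slips s with a positivity constraint and mild damping; the kernel matrix, slips and data below are random placeholders, not Cybershake Green's functions.

```python
import numpy as np
from scipy.optimize import nnls

# d ~= G s: each column of G is the waveform produced by unit slip on one
# subfault (random placeholders here); s >= 0 enforces a consistent rake.
rng = np.random.default_rng(0)
n_data, n_subfaults = 800, 40
G = rng.normal(size=(n_data, n_subfaults))
s_true = np.maximum(rng.normal(1.0, 0.5, size=n_subfaults), 0.0)
d = G @ s_true + 0.1 * rng.normal(size=n_data)

# Mild Tikhonov damping, appended as extra rows, discourages rough slip models.
lam = 0.5
G_aug = np.vstack([G, lam * np.eye(n_subfaults)])
d_aug = np.concatenate([d, np.zeros(n_subfaults)])
s_est, rnorm = nnls(G_aug, d_aug)                  # non-negative least squares
```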

  15. Crustal velocity structure and earthquake processes of Garhwal-Kumaun Himalaya: Constraints from regional waveform inversion and array beam modeling

    NASA Astrophysics Data System (ADS)

    Negi, Sanjay S.; Paul, Ajay; Cesca, Simone; Kamal; Kriegerowski, Marius; Mahesh, P.; Gupta, Sandeep

    2017-08-01

    To understand present-day earthquake kinematics at the Indian plate boundary, we analyse broadband seismic data recorded between 2007 and 2015 by the regional network in the Garhwal-Kumaun region, northwest Himalaya. We first estimate a local 1-D velocity model for the computation of reliable Green's functions, based on 2837 P-wave and 2680 S-wave arrivals from 251 well-located earthquakes. The resulting 1-D crustal structure is a 4-layer velocity model down to a depth of 20 km; a fifth homogeneous layer extends down to 46 km, with the Moho constrained using the travel-time versus distance curve method. We then employ a multistep moment tensor (MT) inversion algorithm to infer seismic moment tensors of 11 moderate earthquakes with Mw in the range 4.0-5.0. Because the Green's function database has been prepared, the method provides fast MT inversion for future monitoring of local seismicity. To further support the moment tensor solutions, we additionally model P-phase beams at seismic arrays at teleseismic distances. The MT inversion results reveal dominant thrust-fault kinematics along the Himalayan belt: shallow low- and high-angle thrust faulting is the dominant mechanism in the Garhwal-Kumaun Himalaya. The centroid depths of these moderate earthquakes are shallow, between 1 and 12 km, and the beam modeling confirms hypocentral depth estimates between 1 and 7 km. The updated seismicity, constrained source mechanisms, and depth results indicate a typical setting of duplexes above the mid-crustal ramp, where slip occurs along out-of-sequence thrusts. The involvement of the Tons thrust sheet in out-of-sequence thrusting indicates that the Tons thrust is the principal active thrust at shallow depth in this part of the Himalaya. Our results thus support critical taper wedge theory, and we interpret the microseismicity cluster as the result of intense activity within the Lesser Himalayan Duplex (LHD) system.

  16. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward-model input; forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find that PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. The Jacobian matrix of derivatives of the observations with respect to the model parameters is then computed using a finite-difference method, and an iterative process of updating the model parameters is carried out to minimize the objective function. A second measure of the goodness of the final model is the correlation coefficient, calculated following the method of Cooley and Naff; an accepted final model satisfies both criteria. Models to date show that the physical properties of simple isolated targets in homogeneous backgrounds can be obtained from multiple traces of common-offset surface surveys. Ongoing work examines the inversion capabilities for more complex target geometries and heterogeneous soils.
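
    The workflow described above (an initial model, an objective function of weighted data differences, a finite-difference Jacobian, and iterative Gauss-Marquardt-Levenberg updates) can be sketched generically. The toy `forward_model` below stands in for a PEST-driven GPRMax run; its two parameters (target depth and relative permittivity) and the pulse shape are illustrative assumptions, not either package's interface.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, t):
    """Toy GPR forward model: a reflected pulse whose two-way delay and
    amplitude depend on a hypothetical target depth and relative permittivity."""
    depth, eps_r = params
    delay = 2.0 * depth * np.sqrt(eps_r) / 0.3     # two-way time in ns, c ~ 0.3 m/ns
    return np.exp(-((t - delay) / 4.0) ** 2) / eps_r

t = np.linspace(0.0, 60.0, 600)                    # time axis in ns
rng = np.random.default_rng(2)
observed = forward_model([1.2, 9.0], t) + 0.005 * rng.normal(size=t.size)

def residuals(params):
    # Objective function: differences between model-generated and "observed" data.
    return forward_model(params, t) - observed

# Levenberg-Marquardt iterations with a finite-difference Jacobian, starting
# from an initial model that the updates refine to minimize the objective.
result = least_squares(residuals, x0=[1.0, 8.0], method="lm", diff_step=1e-3)
print("estimated depth (m) and relative permittivity:", result.x)
```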

  17. Variable Grid Traveltime Tomography for Near-surface Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Cai, A.; Zhang, J.

    2017-12-01

    We present a new traveltime tomography algorithm that images the subsurface with variable grids adapted automatically to geological structures. Nonlinear traveltime tomography with Tikhonov regularization, solved with the conjugate gradient method, is a conventional approach for near-surface imaging; however, regularization on regular, even grids assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be resolved reliably, whereas details along geological boundaries are difficult to resolve. We therefore solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within them for inversion, as sketched below. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated with tests on both synthetic and field data. One synthetic model is a buried-basalt model with one horizontal layer; using the variable grid traveltime tomography, the resulting model is more accurate for the top-layer velocity and the basalt blocks while using fewer grid cells. The field data were collected in an oil field in China, in an area where the subsurface structures are predominantly layered. The data set includes 476 shots with a 10 m spacing and 1735 receivers with a 10 m spacing, and first-arrival traveltimes were picked for tomography. The reciprocal errors of most shots are between 2 ms and 6 ms. Conventional tomography produces fluctuations in the layers and artifacts in the velocity model; in comparison, the new method with a proper threshold yields a blocky model with a resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids within small structures. Variable grid traveltime tomography thus provides an alternative imaging solution for blocky subsurface structures and builds a good starting model for waveform inversion and statics.
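
    The cell-aggregation idea can be sketched with an aggregation operator B that maps a small number of block slownesses to the fine grid, so that the damped least-squares problem is solved for far fewer unknowns. The ray-path matrix, the one-step "structure" and the block layout below are toy placeholders, and a dense least-squares call stands in for the conjugate-gradient solver.

```python
import numpy as np

# Toy variable-grid traveltime tomography: A maps cell slownesses to ray
# traveltimes (random segment lengths as placeholders), and B aggregates
# fine cells into blocks so the regularized inversion has fewer unknowns.
rng = np.random.default_rng(3)
n_rays, n_cells = 300, 100
A = rng.uniform(0.0, 1.0, size=(n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.2)
s_true = 0.5 + 0.1 * (np.arange(n_cells) >= 60)        # slowness step as "structure"
t_obs = A @ s_true + 0.01 * rng.normal(size=n_rays)

# Aggregation operator B: fine 10-cell blocks where detail is wanted,
# one large block where the structure is identified as uniform.
blocks = [list(range(i, i + 10)) for i in range(0, 60, 10)] + [list(range(60, 100))]
B = np.zeros((n_cells, len(blocks)))
for j, cells in enumerate(blocks):
    B[cells, j] = 1.0

# Tikhonov-damped least squares on the reduced set of block slownesses.
lam = 0.1
G = A @ B
c, *_ = np.linalg.lstsq(np.vstack([G, lam * np.eye(len(blocks))]),
                        np.concatenate([t_obs, np.zeros(len(blocks))]),
                        rcond=None)
s_est = B @ c                                           # map back to the fine cells
```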

  18. Spectral-element global waveform tomography: A second-generation upper-mantle model

    NASA Astrophysics Data System (ADS)

    French, S. W.; Lekic, V.; Romanowicz, B. A.

    2012-12-01

    The SEMum model of Lekic and Romanowicz (2011a) was the first global upper-mantle VS model obtained using whole-waveform inversion with spectral element (SEM: Komatitsch and Vilotte, 1998) forward modeling of time domain three component waveforms. SEMum exhibits stronger amplitudes of heterogeneity in the upper 200 km of the mantle compared to previous global models - particularly with respect to low-velocity anomalies. To make SEM-based waveform inversion tractable at global scales, SEMum was developed using: (1) a version of SEM coupled to 1D mode computation in the earth's core (C-SEM, Capdeville et al., 2003); (2) asymptotic normal-mode sensitivity kernels, incorporating multiple forward scattering and finite-frequency effects in the great-circle plane (NACT: Li and Romanowicz, 1995); and (3) a smooth anisotropic crustal layer of uniform 60 km thickness, designed to match global surface-wave dispersion while reducing the cost of time integration in the SEM. The use of asymptotic kernels reduced the number of SEM computations considerably (≥ 3x) relative to purely numerical approaches (e.g. Tarantola, 1984), while remaining sufficiently accurate at the periods of interest (down to 60 s). However, while the choice of a 60 km crustal-layer thickness is justifiable in the continents, it can complicate interpretation of shallow oceanic upper-mantle structure. We here present an update to the SEMum model, designed primarily to address these concerns. The resulting model, SEMum2, was derived using a crustal layer that again fits global surface-wave dispersion, but with a more geologically consistent laterally varying thickness: approximately honoring Crust2.0 (Bassin et al., 2000) Moho depth in the continents, while saturating at 30 km in the oceans. We demonstrate that this approach does not bias our upper mantle model, which is constrained not only by fundamental mode surface waves, but also by overtone waveforms. We have also improved our data-selection and assimilation scheme, more readily allowing for additional and higher-quality data to be incorporated into our inversion as the model improves. Further, we have been able to refine the parameterization of the isotropic component of our model, previously limited by our ability to solve the large dense linear system that governs model updates (Tarantola and Valette, 1982). The construction of SEMum2 involved 3 additional inversion iterations away from SEMum. Overall, the combined effect of these improvements confirms and validates the general structure of the original SEMum. Model amplitudes remain an impressive feature in SEMum2, wherein peak-to-peak variation in VS can exceed 15% in close lateral juxtaposition. Further, many intriguing structures present in SEMum are now imaged with improved resolution in the updated model. In particular, the geographic extents of the anomalous oceanic cluster identified by Lekic and Romanowicz (2011b) are consistent with our findings and now allow us to further identify alternating bands of lower and higher velocities in the 200-300 km depth range beneath the Pacific basin, with a characteristic spacing of ~2000 km normal to absolute plate motion. Possible dynamic interpretation of these and other features in the ocean basins is explored in a companion presentation (Romanowicz et al., this meeting).

  19. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.
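
    For reference, the Gauss-Newton step referred to here has the generic form below (our notation): F is the preconditioned forward map, J_k its Jacobian at the current iterate m_k, d the quantity being fit (in this setting, reduced-model parameters derived from the measured data), and α_k an optional step length.

```latex
% Generic (optionally damped) Gauss-Newton update for fitting F(m) to d.
m_{k+1} = m_k + \alpha_k \left( J_k^{\mathsf{T}} J_k \right)^{-1}
          J_k^{\mathsf{T}} \bigl( d - F(m_k) \bigr),
\qquad J_k = \left.\frac{\partial F}{\partial m}\right|_{m_k}
```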

  20. Simplified moment tensor analysis and unified decomposition of acoustic emission source: Application to in situ hydrofracturing test

    NASA Astrophysics Data System (ADS)

    Ohtsu, Masayasu

    1991-04-01

    An application of a moment tensor analysis to acoustic emission (AE) is studied to elucidate crack types and orientations of AE sources. In the analysis, simplified treatment is desirable, because hundreds of AE records are obtained from just one experiment and thus sophisticated treatment is realistically cumbersome. Consequently, a moment tensor inversion based on P-wave amplitudes is employed to determine the six independent tensor components. Selecting only the P-wave portion of the full-space Green's function for a homogeneous, isotropic material, a computer code named SiGMA (simplified Green's functions for the moment tensor analysis) is developed for the AE inversion analysis. To classify crack type and to determine crack orientation from the moment tensor components, a unified decomposition of eigenvalues into a double-couple (DC) part, a compensated linear vector dipole (CLVD) part, and an isotropic part is proposed. The aim of the decomposition is to determine the proportions of the shear contribution (DC) and the tensile contribution (CLVD + isotropic) of AE sources and to classify each crack according to its dominant motion. Crack orientations determined from eigenvectors are presented as crack-opening vectors for tensile cracks and fault motion vectors for shear cracks, instead of stereonets. The SiGMA inversion and the unified decomposition are applied to synthetic data and AE waveforms detected during an in situ hydrofracturing test. To check the accuracy of the procedure, numerical experiments are performed on the synthetic waveforms, including cases with 10% random noise added. Results show reasonable agreement with the assumed crack configurations. Although the maximum error is approximately 10% with respect to the ratios, the differences in crack orientations are less than 7°. AE waveforms detected by eight accelerometers deployed during the hydrofracturing test are analyzed. The crack types and orientations determined are in reasonable agreement with the failure plane predicted from borehole TV observations. The results suggest that tensile cracks are generated first at weak seams and then shear cracks follow on the opened joints.
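
    A generic eigenvalue-based split of a moment tensor into isotropic, DC, and CLVD contributions can be sketched as below; this follows the common trace/deviatoric recipe, not the SiGMA unified decomposition itself, and conventions for the reported percentages differ across the literature.

```python
import numpy as np

def decompose_moment_tensor(M):
    """Split a symmetric 3x3 moment tensor into isotropic, DC and CLVD parts.

    Generic illustration: isotropic part = trace/3; epsilon is the ratio of the
    smallest-magnitude to largest-magnitude deviatoric eigenvalue (0 for a pure
    DC, +/-0.5 for a pure CLVD).
    """
    M = np.asarray(M, dtype=float)
    m_iso = np.trace(M) / 3.0
    dev_eigs = np.linalg.eigvalsh(M - m_iso * np.eye(3))
    dev_eigs = dev_eigs[np.argsort(np.abs(dev_eigs))]   # sort by absolute value
    eps = -dev_eigs[0] / abs(dev_eigs[-1])
    dc_fraction = 1.0 - 2.0 * abs(eps)                  # share of the deviatoric part
    clvd_fraction = 2.0 * abs(eps)
    return m_iso, dc_fraction, clvd_fraction

# Example: a pure double couple plus a small isotropic (tensile) contribution.
M = np.diag([1.05, 0.05, -0.95])
print(decompose_moment_tensor(M))
```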
