Sample records for seismic source models

  1. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    USGS Publications Warehouse

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  2. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic monitoring is the task of estimating the locations of micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Conventional micro-seismic source-location methods, on the other hand, often require manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic, picking-free process that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces strong nonlinearity due to the unknown source locations (space) and source functions (time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, the source function, and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source-image, source-function, and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model, and angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet, and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time, and background velocity for the synthetic examples used here, such as those derived from the Marmousi model and the SEG/EAGE overthrust model.
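
    A minimal numerical sketch of the convolution idea described above: an unknown but common source wavelet cancels when each observed trace is convolved with a modeled reference trace and each modeled trace with the observed reference trace. The arrays, reference-trace index, and the adjoint-state gradient machinery of the actual method are not reproduced here; this only illustrates the objective function.

```python
import numpy as np

def conv_objective(d_obs, d_syn, ref_idx=0):
    """Source-independent residual (sketch): convolve the observed data with a
    modeled reference trace and the modeled data with the observed reference
    trace; a common but unknown source wavelet cancels in the difference."""
    nt, ntr = d_obs.shape
    r_obs = d_obs[:, ref_idx]            # observed reference trace
    r_syn = d_syn[:, ref_idx]            # modeled reference trace
    resid = np.empty((2 * nt - 1, ntr))
    for i in range(ntr):
        resid[:, i] = (np.convolve(d_obs[:, i], r_syn)
                       - np.convolve(d_syn[:, i], r_obs))
    return 0.5 * np.sum(resid ** 2), resid

# toy usage with random stand-in gathers (nt samples x ntr traces)
rng = np.random.default_rng(0)
d_obs = rng.standard_normal((500, 32))
d_syn = rng.standard_normal((500, 32))
misfit, _ = conv_objective(d_obs, d_syn)
```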

  3. An alternative approach to probabilistic seismic hazard analysis in the Aegean region using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Burton, Paul W.

    2010-09-01

    The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach bring a degree of objectivity and reproducibility to the otherwise subjective approach of delineating seismic sources using expert judgment. A similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models, and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s⁻² using European/NGA attenuation models, and 400-500 cm s⁻² using Greek attenuation models.
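
    The Monte Carlo recipe sketched above (simulate many synthetic catalogs, attenuate each event to the site, and count exceedances) can be illustrated as below. All numbers, the uniform area source, and the toy attenuation relation are placeholders for illustration only, not the models used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative area source: 100 x 100 km zone centred on the site
a_rate = 3.0                 # annual rate of M >= m_min events in the zone
b_val = 1.0                  # Gutenberg-Richter b-value
m_min, m_max = 4.5, 7.5
years, n_sims = 50, 2000     # exposure time and number of simulated catalogs

def sample_magnitudes(n):
    """Truncated Gutenberg-Richter magnitudes via inverse-transform sampling."""
    beta = b_val * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - rng.random(n) * c) / beta

def toy_gmpe(m, r_km):
    """Hypothetical PGA (g) attenuation with lognormal scatter (0.6 ln units)."""
    ln_pga = -3.5 + 0.9 * m - 1.2 * np.log(r_km + 10.0)
    return np.exp(ln_pga + 0.6 * rng.standard_normal(m.size))

max_pga = np.zeros(n_sims)
for k in range(n_sims):
    n_ev = rng.poisson(a_rate * years)
    if n_ev == 0:
        continue
    mags = sample_magnitudes(n_ev)
    xy = rng.uniform(-50.0, 50.0, size=(n_ev, 2))   # epicentres in the zone (km)
    r = np.hypot(xy[:, 0], xy[:, 1])                # distance to the site at the origin
    max_pga[k] = toy_gmpe(mags, r).max()

# fraction of 50-year catalogs in which each PGA level is exceeded
for level in (0.05, 0.1, 0.2, 0.4):
    print(f"P(PGA > {level} g in {years} yr) ~ {(max_pga > level).mean():.3f}")
```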

  4. A Seismic Source Model for Central Europe and Italy

    NASA Astrophysics Data System (ADS)

    Nyst, M.; Williams, C.; Onur, T.

    2006-12-01

    We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available, we adopt regional consensus models and adjust these to fit our format; otherwise, we develop our own models. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-)vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources; and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historical seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs involves the deletion of duplicate, spurious, and very old events and the application of a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of -0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for return periods of 200, 475, 1000 and 2500 years.

  5. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred kilometres for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography only weakly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  6. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear (fault) sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous studies and reports are reviewed to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
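
    The contrast between the time-independent (Poisson) treatment of the area sources and the time-dependent (renewal) treatment of the fault sources reduces to a conditional-probability calculation. A minimal sketch, assuming a lognormal renewal distribution and illustrative numbers (not values from the study):

```python
import numpy as np
from scipy import stats

mean_ri, elapsed, window = 300.0, 150.0, 50.0   # years (illustrative)
cov = 0.5                                       # aperiodicity / coefficient of variation

# time-independent (Poisson) probability of at least one event in the window
p_poisson = 1.0 - np.exp(-window / mean_ri)

# time-dependent (renewal) probability, conditioned on the time already elapsed,
# using a lognormal recurrence distribution as one common choice
sigma = np.sqrt(np.log(1.0 + cov**2))
mu = np.log(mean_ri) - 0.5 * sigma**2
F = stats.lognorm(s=sigma, scale=np.exp(mu)).cdf
p_renewal = (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

print(f"Poisson: {p_poisson:.3f}   renewal (lognormal): {p_renewal:.3f}")
```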

  7. Analysis and Simulation of Far-Field Seismic Data from the Source Physics Experiment

    DTIC Science & Technology

    2012-09-01

    ANALYSIS AND SIMULATION OF FAR-FIELD SEISMIC DATA FROM THE SOURCE PHYSICS EXPERIMENT Arben Pitarka, Robert J. Mellors, Arthur J. Rodgers, Sean...Security Site (NNSS) provides new data for investigating the excitation and propagation of seismic waves generated by buried explosions. A particular... seismic model. The 3D seismic model includes surface topography. It is based on regional geological data, with material properties constrained by shallow

  8. Updating the USGS seismic hazard maps for Alaska

    USGS Publications Warehouse

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.

  9. Seismic source models for very-long period seismic signals on White Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Jiwani-Brown, Elliot; Neuberg, Jurgen; Jolly, Art

    2015-04-01

    Very-long-period seismic signals (VLP) from White Island have a duration of only a few tens of seconds and a waveform that indicates an elastic (or viscoelastic) interaction of a source region with the surrounding medium; unlike VLP signals on some other volcanoes that indicate a step function recorded in the near field of the seismic source, White Island VLPs exhibit a Ricker waveform. We explore a set of isotropic, seismic source models based on the interaction between magma and water/brine in direct contact. Seismic amplitude measurements are taken into account to estimate the volume changes at depth that can produce the observed displacement at the surface. Furthermore, the influence of different fluid types are explored.

  10. InSAR Surface Deformation and Source Modelling at Semisopochnoi Island During the 2014 and 2015 Seismic Swarms with Constraints from Geochemical and Seismic Analysis

    NASA Astrophysics Data System (ADS)

    DeGrandpre, K.; Pesicek, J. D.; Lu, Z.

    2017-12-01

    During the summer of 2014 and the early spring of 2015, two notable increases in seismic activity at Semisopochnoi Island in the western Aleutian Islands were recorded on AVO seismometers on Semisopochnoi and neighboring islands. These seismic swarms did not lead to an eruption. This study combines interferometric synthetic aperture radar (InSAR) techniques using TerraSAR-X images with more accurate relocation of the recorded seismic events through simultaneous inversion of event travel times and a three-dimensional velocity model using tomoDD. The InSAR images exhibit surprising coherence and an island-wide spatial distribution of inflation, which is then used in Mogi, Okada, spheroid, and ellipsoid source models in order to define the three-dimensional location and volume change required for a source at the volcano to produce the observed surface deformation. The tomoDD relocations provide a more accurate and realistic three-dimensional velocity model as well as a tighter clustering of events for both swarms that clearly outlines a linear seismic void within the larger group of shallow (<10 km) seismicity. The source models are fit to this void, and pressure estimates from geochemical analysis are used to verify the storage depth of magmas at Semisopochnoi. Comparisons of calculated source-cavity, magma-injection, and surface-deformation volumes are made in order to assess how realistic the various modelling estimates are. Incorporating geochemical and seismic data to constrain surface deformation source inversions provides an interdisciplinary approach that can be used to make more accurate interpretations of dynamic observations.
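
    Of the analytical sources listed above, the Mogi point source is the simplest to sketch. One common closed form gives the vertical surface displacement due to a small spherical pressure source with volume change dV at depth d in an elastic half-space; the grid and source parameters below are placeholders, not values inferred for Semisopochnoi.

```python
import numpy as np

def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
    """Vertical surface displacement (m) of a Mogi point source:
        uz = (1 - nu) * dV / pi * d / (r^2 + d^2)^(3/2)
    with dV in m^3 and all lengths in metres (one common form of the solution)."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    return (1.0 - nu) * dV / np.pi * depth / (r2 + depth ** 2) ** 1.5

# illustrative 20 x 20 km grid with a source at 4 km depth
x, y = np.meshgrid(np.linspace(-10e3, 10e3, 201), np.linspace(-10e3, 10e3, 201))
uz = mogi_uz(x, y, xs=0.0, ys=0.0, depth=4e3, dV=5e6)
print("maximum uplift (m):", uz.max())
```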

  11. Seismic Moment, Seismic Energy, and Source Duration of Slow Earthquakes: Application of Brownian slow earthquake model to three major subduction zones

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Maury, Julie

    2018-04-01

    Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at the seismological frequencies, the seismic moment rates are significantly different in the geodetic observation. This difference is ascribed to the difference in the characteristic times of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explains recent findings of a near-constant source duration for low-frequency earthquake families.

  12. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
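
    A minimal sketch of the adaptive-kernel idea described above: each epicenter is assigned its own Gaussian bandwidth equal to the distance to its n-th nearest neighbor, so smoothing distances shrink where seismicity is dense and grow where it is sparse. Coordinates are assumed to be already projected to kilometres; the function and parameter names are illustrative, not the implementation used for the ASHMs.

```python
import numpy as np

def adaptive_smoothed_rates(epicenters, grid, n_neighbor=5, min_d_km=5.0):
    """Unnormalized smoothed-seismicity rates on grid nodes using a per-event
    Gaussian bandwidth set to the distance to the n-th nearest neighbor."""
    d_ee = np.linalg.norm(epicenters[:, None, :] - epicenters[None, :, :], axis=-1)
    d_ee.sort(axis=1)                                  # column 0 is the event itself
    bw = np.maximum(d_ee[:, n_neighbor], min_d_km)     # per-event bandwidth (km)

    d_ge = np.linalg.norm(grid[:, None, :] - epicenters[None, :, :], axis=-1)
    kern = np.exp(-0.5 * (d_ge / bw) ** 2) / (2.0 * np.pi * bw ** 2)
    return kern.sum(axis=1)

# toy usage: a tight cluster plus scattered background events
rng = np.random.default_rng(1)
eq = np.vstack([rng.normal(0.0, 10.0, (200, 2)), rng.uniform(-200, 200, (50, 2))])
nodes = np.stack(np.meshgrid(np.linspace(-200, 200, 41),
                             np.linspace(-200, 200, 41)), -1).reshape(-1, 2)
rates = adaptive_smoothed_rates(eq, nodes)
```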

  13. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.

  14. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basic model parameters have to be given, assumed, or inferred in a previous, separate, and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the precise constraint on source location from geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with very few a priori assumptions about the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for the geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic, and joint source models have already been reported, mostly without model parameter uncertainty estimates. We show that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g. even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
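
    The weighting scheme described above (full variance-covariance matrices for the geodetic data, signal-to-noise-based weights for the seismic traces) amounts to a combined objective of roughly the following form. This is a schematic misfit only, with randomly generated residuals and an assumed exponential covariance model standing in for the empirically estimated data errors.

```python
import numpy as np

def joint_misfit(r_geod, C_geod, r_seis, sigma_seis, w_geod=1.0, w_seis=1.0):
    """Combined objective for a joint geodetic/seismic source optimization:
    geodetic residuals weighted by the inverse of a full error covariance
    matrix (correlated noise), seismic residuals by per-trace noise levels."""
    chi2_geod = r_geod @ np.linalg.solve(C_geod, r_geod)
    chi2_seis = np.sum((r_seis / sigma_seis) ** 2)
    return w_geod * chi2_geod + w_seis * chi2_seis

# toy usage: exponential covariance (variance 4 mm^2, 10 km correlation length)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 50.0, 100)                       # pseudo InSAR point positions (km)
C = 4.0 * np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
misfit = joint_misfit(rng.normal(0, 2, 100), C,
                      rng.normal(0, 1, 20), np.full(20, 1.0))
```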

  15. Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; Huang, Lianjie

    2015-01-28

    Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
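
    The compressive-sensing ingredient can be illustrated, in a very reduced form, by a sparsity-promoting update on a generic linear(ized) subproblem. The iterative soft-thresholding (ISTA) loop below is a stand-in for that idea, not the authors' alternating-minimization algorithm, and the random matrix plays the role of a linearized modeling operator.

```python
import numpy as np

def ista(A, d, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||A m - d||^2 + lam*||m||_1,
    a basic sparsity-promoting (compressive-sensing style) solver."""
    m = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    for _ in range(n_iter):
        g = A.T @ (A @ m - d)                         # gradient of the data misfit
        z = m - step * g
        m = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return m

# toy usage: recover a sparse model from an underdetermined linear system
rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))
m_true = np.zeros(200)
m_true[rng.choice(200, 8, replace=False)] = rng.normal(0.0, 3.0, 8)
d = A @ m_true + 0.01 * rng.standard_normal(80)
m_est = ista(A, d, lam=0.05)
```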

  16. Inverse and Forward Modeling of The 2014 Iquique Earthquake with Run-up Data

    NASA Astrophysics Data System (ADS)

    Fuentes, M.

    2015-12-01

    The April 1, 2014 Mw 8.2 Iquique earthquake excited a moderate tsunami which triggered the national tsunami alert. The earthquake was located in the well-known seismic gap in northern Chile, which had a high seismic potential (~Mw 9.0) following the two large historical events of 1868 and 1877. However, studies of the seismic source based on inversions of seismic data suggest that the event exhibited a main slip patch located around 19.8° S at 40 km depth, with a seismic moment equivalent to Mw = 8.2. Thus, a large seismic deficit remains in the gap, capable of releasing an event of Mw = 8.8-8.9. To understand the importance of the tsunami threat in this zone, a seismic source modeling of the Iquique earthquake is performed. A new approach based on stochastic k² seismic sources is presented. A set of such sources is generated and, for each one, a full numerical tsunami model is run to obtain the run-up heights along the coastline. The results are compared with the available field run-up measurements and with the tide gauges that registered the signal. The comparison is not uniform; it penalizes discrepancies more strongly close to the peak run-up location. This criterion allows us to identify, from the set of scenarios, the seismic source that best explains the observations from a statistical point of view. On the other hand, an L2-norm minimization is used to invert the seismic source by comparing the peak nearshore tsunami amplitude (PNTA) with the run-up observations. This method searches a space of solutions for the best seismic configuration by retrieving the Green's function coefficients that explain the field measurements. The results confirm that a concentrated down-dip slip patch adequately models the run-up data.

  17. Seismic source inversion using Green's reciprocity and a 3-D structural model for the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Simutė, S.; Fichtner, A.

    2015-12-01

    We present a feasibility study for seismic source inversions using a 3-D velocity model for the Japanese Islands. The approach involves numerically calculating 3-D Green's tensors, which is made efficient by exploiting Green's reciprocity. The rationale for 3-D seismic source inversion has several aspects. For structurally complex regions, such as the Japan area, it is necessary to account for 3-D Earth heterogeneities to prevent unknown structure from polluting source solutions. In addition, earthquake source characterisation can serve as a means to delineate existing faults. Source parameters obtained for more realistic Earth models can then facilitate improvements in seismic tomography and early warning systems, which are particularly important for seismically active areas such as Japan. We have created a database of numerically computed 3-D Green's tensors, obtained via reciprocity, for a 40° × 40° × 600 km volume around the Japanese Archipelago and >150 broadband stations. For this we used a regional 3-D velocity model recently obtained from full waveform inversion. The model includes attenuation and radial anisotropy and generally explains seismic waveform data well for periods between 10 and 80 s. The aim is to perform source inversions using the database of 3-D Green's tensors. As preliminary steps, we present initial concepts to address issues that are at the basis of our approach. We first investigate to what extent Green's reciprocity works in a discrete domain. Given the substantial volume of computed Green's tensors, we address storage requirements and file formats. We discuss the importance of the initial source model, as an intelligent choice can substantially reduce the search volume. Possibilities to perform a Bayesian inversion and ways to move to finite source inversion are also explored.
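
    Once Green's tensors are available (however they are computed and stored), a point-source moment-tensor inversion for a fixed location and origin time reduces to linear least squares. The sketch below uses random stand-ins for the Green's functions and data; the reciprocity bookkeeping, source-location search, and Bayesian extensions discussed above are not reproduced.

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Least-squares moment tensor from precomputed Green's-function kernels.
    G maps the six independent moment-tensor components to the stacked
    waveform samples d (all stations/components concatenated)."""
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m    # (Mxx, Myy, Mzz, Mxy, Mxz, Myz)

# toy usage with synthetic kernels and noisy data
rng = np.random.default_rng(7)
G = rng.standard_normal((3000, 6))
m_true = np.array([1.0, 1.0, 1.0, 0.2, -0.1, 0.05])   # mostly isotropic source
d = G @ m_true + 0.05 * rng.standard_normal(3000)
print(np.round(invert_moment_tensor(G, d), 2))
```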

  18. Recent Impacts on Mars: Cluster Properties and Seismic Signal Predictions

    NASA Astrophysics Data System (ADS)

    Justine Daubar, Ingrid; Schmerr, Nicholas; Banks, Maria; Marusiak, Angela; Golombek, Matthew P.

    2016-10-01

    Impacts are a key source of seismic waves that are a primary constraint on the formation, evolution, and dynamics of planetary objects. Geophysical missions such as InSight (Banerdt et al., 2013) will monitor seismic signals from internal and external sources. New martian craters have been identified in orbital images (Malin et al., 2006; Daubar et al., 2013). Seismically detecting such impacts and subsequently imaging the resulting craters will provide extremely accurate epicenters and source crater sizes, enabling calibration of seismic velocities, the efficiency of impact-seismic coupling, and retrieval of detailed regional and local internal structure. To investigate recent impact-induced seismicity on Mars, we have assessed ~100 new, dated impact sites. In approximately half of new impacts, the bolide partially disintegrates in the atmosphere, forming multiple craters in a cluster. We incorporate the resulting, more complex, seismic effects in our model. To characterize the variation between sites, we focus on clustered impacts. We report statistics of craters within clusters: diameters, morphometry indicating subsurface layering, strewn-field azimuths indicating impact direction, and dispersion within clusters indicating combined effects of bolide strength and elevation of breakup. Measured parameters are converted to seismic predictions for impact sources using a scaling law relating crater diameter to the momentum and source duration, calibrated for impacts recorded by Apollo (Lognonne et al., 2009). We use plausible ranges for target properties, bolide densities, and impact velocities to bound the seismic moment. The expected seismic sources are modeled in the near field using a 3-D wave propagation code (Petersson et al., 2010) and in the far field using a 1-D wave propagation code (Friederich et al., 1995), for a martian seismic model. Thus we calculate the amplitudes of seismic phases at varying distances, which can be used to evaluate the detectability of body and surface wave phases created by different sizes and types of impacts all over Mars.

  19. Interpreting intraplate tectonics for seismic hazard: a UK historical perspective

    NASA Astrophysics Data System (ADS)

    Musson, R. M. W.

    2012-04-01

    It is notoriously difficult to construct seismic source models for probabilistic seismic hazard assessment in intraplate areas on the basis of geological information, and many practitioners have given up the task in favour of purely seismicity-based models. This risks losing potentially valuable information in regions where the earthquake catalogue is short compared to the seismic cycle. It is interesting to survey how attitudes to this issue have evolved over the past 30 years. This paper takes the UK as an example, and traces the evolution of seismic source models through generations of hazard studies. It is found that in the UK, while the earliest studies did not consider regional tectonics in any way, there has been a gradual evolution towards more tectonically based models. Experience in other countries, of course, may differ.

  20. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial in exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way to estimate the source signature. However, when encountering multimode surface waves, which are commonly seen in shallow seismic surveys, strong spurious events appear in seismic interferometric results. These spurious events introduce errors into the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode-separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded from the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillations occur in the estimated source signature if mode separation is not applied first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating the seismic source signature from shallow seismic shot gathers containing multimode surface waves.
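
    The elementary operation underlying virtual-source processing of this kind is an interferometric deconvolution of one recording by another, in which a shared (unknown) source wavelet cancels. The sketch below shows only that water-level deconvolution building block on synthetic single-mode traces; the mode separation and the VRS bookkeeping proposed in the abstract are not reproduced.

```python
import numpy as np

def deconvolve(trace_a, trace_b, water_level=1e-2):
    """Frequency-domain deconvolution of trace_a by trace_b with a water level."""
    A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
    eps = water_level * np.max(np.abs(B) ** 2)
    return np.fft.irfft(A * np.conj(B) / (np.abs(B) ** 2 + eps), n=len(trace_a))

# toy usage: two receivers share the same wavelet, which cancels in the result
rng = np.random.default_rng(5)
wavelet = np.convolve(rng.standard_normal(30), np.hanning(15), mode="same")
g_a, g_b = np.zeros(600), np.zeros(600)
g_a[160], g_b[100] = 1.0, 0.7                       # impulse responses (arrival times)
trace_a = np.convolve(g_a, wavelet, mode="full")[:600]
trace_b = np.convolve(g_b, wavelet, mode="full")[:600]
virtual = deconvolve(trace_a, trace_b)              # peaks near the 60-sample differential lag
```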

  1. Effect of time dependence on probabilistic seismic-hazard maps and deaggregation for the central Apennines, Italy

    USGS Publications Warehouse

    Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.

    2009-01-01

    We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized by a Brownian passage time recurrence model. Using aperiodicity parameters (α) of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum-magnitude model for the fault sources, together with a uniform probability of rupture along the fault (floating fault model), to model fictitious faults that account for earthquakes that cannot be correlated with known geologic structural segmentation.
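
    The Brownian passage time distribution with mean recurrence T and aperiodicity α is an inverse Gaussian with mean T and shape T/α², so the conditional event probabilities behind such time-dependent models can be sketched with scipy. The recurrence interval, elapsed time, and forecast window below are illustrative only; only the three aperiodicity values come from the abstract.

```python
from scipy import stats

def bpt_conditional_prob(mean_ri, alpha, elapsed, window):
    """P(event in [elapsed, elapsed+window] | no event since the last one) for a
    Brownian passage time renewal model. scipy's invgauss(mu, scale) has mean
    mu*scale, so BPT(mean=T, shape=T/alpha^2) maps to mu=alpha^2, scale=T/alpha^2."""
    F = stats.invgauss(mu=alpha ** 2, scale=mean_ri / alpha ** 2).cdf
    return (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

for a in (0.3, 0.5, 0.7):
    p = bpt_conditional_prob(mean_ri=1000.0, alpha=a, elapsed=700.0, window=50.0)
    print(f"alpha = {a}: P(next 50 yr) = {p:.4f}")
```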

  2. Exploring the Differences Between the European (SHARE) and the Reference Italian Seismic Hazard Models

    NASA Astrophysics Data System (ADS)

    Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.

    2014-12-01

    The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is to identify the main input elements that produce the differences between the two models. It is worth remarking that each PSHA is built with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from previous ones, that does not mean the older models are wrong, but rather that the current state of knowledge has changed and improved. Looking at the hazard maps with 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all output parameters of both models, we observe a different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy proposes higher values than the SHARE model over many large areas. This suggests that the behaviour is not due to different definitions of the seismic sources and their seismicity rates; it mainly seems to result from the adoption of recent ground-motion prediction equations (GMPEs), which, relative to older GMPEs, estimate higher values for PGA and for spectral accelerations at periods below 0.3 s and lower values at longer periods. Another important set of tests consisted of separately analysing the PSHA results obtained with the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only uses area sources. Results seem to confirm the strong impact of the new-generation GMPEs on the seismic hazard estimates. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard Assessment (2003-2009) for the Italian Building Code. Bull. Seismol. Soc. Am. 101, 1885-1911.

  3. Velocity Model Using the Large-N Seismic Array from the Source Physics Experiment (SPE)

    NASA Astrophysics Data System (ADS)

    Chen, T.; Snelson, C. M.

    2016-12-01

    The Source Physics Experiment (SPE) is a multi-institutional, multi-disciplinary project that consists of a series of chemical explosions conducted at the Nevada National Security Site (NNSS). The goal of SPE is to understand the complicated effect of geological structures on seismic wave propagation and source energy partitioning, develop and validate physics-based modeling, and ultimately better monitor low-yield nuclear explosions. A Large-N seismic array was deployed at the SPE site to image the full 3D wavefield from the most recent SPE-5 explosion on April 26, 2016. The Large-N seismic array consists of 996 geophones (half three-component and half vertical-component sensors), and operated for one month, recording the SPE-5 shot, ambient noise, and additional controlled-sources (a large hammer). This study uses Large-N array recordings of the SPE-5 chemical explosion to develop high resolution images of local geologic structures. We analyze different phases of recorded seismic data and construct a velocity model based on arrival times. The results of this study will be incorporated into the large modeling and simulation efforts as ground-truth further validating the models.

  4. Impact from Magnitude-Rupture Length Uncertainty on Seismic Hazard and Risk

    NASA Astrophysics Data System (ADS)

    Apel, E. V.; Nyst, M.; Kane, D. L.

    2015-12-01

    In probabilistic seismic hazard and risk assessments, seismic sources are typically divided into two groups: fault sources (to model known faults) and background sources (to model unknown faults). In areas like the Central and Eastern United States and Hawaii, the hazard and risk are driven primarily by background sources. Background sources can be modeled as areas, points, or pseudo-faults. When background sources are modeled as pseudo-faults, magnitude-length or magnitude-area scaling relationships are required to construct these pseudo-faults. However, the uncertainty associated with these relationships is often ignored or discarded in hazard and risk models, particularly when fault sources are the dominant contributor. Conversely, in areas modeled only with background sources, these uncertainties are much more significant. In this study we test the impact of using various relationships, and the resulting epistemic uncertainties, on the seismic hazard and risk in the Central and Eastern United States and Hawaii. It is common to use only one magnitude-length relationship when calculating hazard. However, Stirling et al. (2013) showed that for a given suite of magnitude-rupture length relationships the variability can be quite large. The 2014 US National Seismic Hazard Maps (Petersen et al., 2014) used one magnitude-rupture length relationship (Somerville et al., 2001) in the Central and Eastern United States, and did not consider variability in the seismogenic rupture plane width. Here we use a suite of metrics to compare the USGS approach with these variable-uncertainty models to assess 1) the impact on hazard and risk and 2) the epistemic uncertainty associated with the choice of relationship. In areas where the seismic hazard is dominated by larger crustal faults (e.g. New Madrid), the choice of magnitude-rupture length relationship has little impact on the hazard or risk. However, away from these regions, the choice of relationship is more significant and may approach the size of the uncertainty associated with the ground motion prediction equation suite.
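
    The effect of keeping (or discarding) the scatter about a magnitude-length relationship can be illustrated by sampling rupture lengths from log10(L) = a + b*M + ε with ε ~ N(0, σ). The coefficients and σ below are placeholders of a plausible order of magnitude, not any specific published relationship; setting σ = 0 reproduces the "ignore the uncertainty" case discussed above.

```python
import numpy as np

def sample_rupture_length(mag, a=-2.4, b=0.6, sigma=0.2, n=10000, rng=None):
    """Sample rupture lengths (km) from log10(L) = a + b*M + eps, eps ~ N(0, sigma).
    Coefficients are illustrative placeholders, not a published relationship."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.normal(0.0, sigma, n)
    return 10.0 ** (a + b * mag + eps)

lengths = sample_rupture_length(7.0)
print("median L:", np.median(lengths), "km;  16th-84th percentiles:",
      np.percentile(lengths, [16, 84]))
```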

  5. Time-Independent Annual Seismic Rates, Based on Faults and Smoothed Seismicity, Computed for Seismic Hazard Assessment in Italy

    NASA Astrophysics Data System (ADS)

    Murru, M.; Falcone, G.; Taroni, M.; Console, R.

    2017-12-01

    In 2015 the Italian Department of Civil Protection started a project to upgrade the official Italian seismic hazard map (MPS04), inviting the Italian scientific community to participate in a joint effort for its realization. We participated by providing spatially variable, time-independent (Poisson) long-term annual occurrence rates of seismic events over the entire Italian territory, using cells of 0.1° x 0.1°, from M4.5 up to M8.1, in magnitude bins of 0.1 units. Our final model is an ensemble of two models, each with the same weight: the first was built with a smoothed seismicity approach, the second using the seismogenic faults. The spatially smoothed seismicity was obtained using the smoothing method introduced by Frankel (1995), applied to the historical and instrumental seismicity. In this approach we adopted a tapered Gutenberg-Richter relation with a b-value fixed to 1 and a corner magnitude estimated from the largest events in the catalogs. For each seismogenic fault provided by the Database of Individual Seismogenic Sources (DISS), we computed the annual rate (for each 0.1° x 0.1° cell) in magnitude bins of 0.1 units, assuming that the seismic moments of the earthquakes generated by each fault are distributed according to the same tapered Gutenberg-Richter relation used for the smoothed seismicity model. The annual rate for the final model was determined as follows: if a cell falls within one of the seismic sources, we merge, with equal weights, the rate determined from the seismic moments of the earthquakes generated by the fault and the rate from the smoothed seismicity model; if instead the cell falls outside any seismic source, we take the rate obtained from the spatially smoothed seismicity. Here we present the final results of our study, to be used for the new Italian seismic hazard map.
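
    The smoothing half of the model can be sketched with the fixed Gaussian kernel of Frankel (1995), in which each cell's earthquake count is replaced by a distance-weighted average of the counts in surrounding cells. Grid spacing, correlation distance, and the toy count grid below are illustrative, not the values used for the Italian model.

```python
import numpy as np

def frankel_smooth(counts, cell_km=10.0, corr_km=50.0):
    """Fixed-kernel smoothing of gridded counts in the spirit of Frankel (1995):
    n_i_smooth = sum_j n_j exp(-d_ij^2 / c^2) / sum_j exp(-d_ij^2 / c^2)."""
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    pos = np.column_stack([yy.ravel(), xx.ravel()]) * cell_km
    flat = counts.ravel().astype(float)
    out = np.empty_like(flat)
    for i, p in enumerate(pos):
        w = np.exp(-np.sum((pos - p) ** 2, axis=1) / corr_km ** 2)
        out[i] = np.sum(w * flat) / np.sum(w)
    return out.reshape(ny, nx)

# toy usage: a small cluster of events on an otherwise quiet 400 x 400 km grid
counts = np.zeros((40, 40))
counts[18:22, 18:22] = 5
smoothed = frankel_smooth(counts)
```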

  6. Seismic hazard in the Nation's breadbasket

    USGS Publications Warehouse

    Boyd, Oliver; Haller, Kathleen; Luco, Nicolas; Moschetti, Morgan P.; Mueller, Charles; Petersen, Mark D.; Rezaeian, Sanaz; Rubinstein, Justin L.

    2015-01-01

    The USGS National Seismic Hazard Maps were updated in 2014 and included several important changes for the central United States (CUS). Background seismicity sources were improved using a new moment-magnitude-based catalog; a new adaptive, nearest-neighbor smoothing kernel was implemented; and maximum magnitudes for background sources were updated. Areal source zones developed by the Central and Eastern United States Seismic Source Characterization for Nuclear Facilities project were simplified and adopted. The weighting scheme for ground motion models was updated, giving more weight to models with a faster attenuation with distance compared to the previous maps. Overall, hazard changes (2% probability of exceedance in 50 years, across a range of ground-motion frequencies) were smaller than 10% in most of the CUS relative to the 2008 USGS maps despite new ground motion models and their assigned logic tree weights that reduced the probabilistic ground motions by 5–20%.

  7. Planar seismic source characterization models developed for probabilistic seismic hazard assessment of Istanbul

    NASA Astrophysics Data System (ADS)

    Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin

    2017-12-01

    This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.

  8. Modeling the Excitation of Seismic Waves by the Joplin Tornado

    NASA Astrophysics Data System (ADS)

    Valovcin, Anne; Tanimoto, Toshiro

    2017-10-01

    Tornadoes generate seismic signals when they contact the ground. Here we examine the signals excited by the Joplin tornado, which passed within 2 km of a station in the Earthscope Transportable Array. We model the tornado-generated vertical seismic signal at low frequencies (0.01-0.03 Hz) and solve for the strength of the seismic source. The resulting source amplitude is largest when the tornado was reported to be strongest (EF 4-5), and the amplitude is smallest when the tornado was weak (EF 0-2). A further understanding of the relationship between source amplitude and tornado intensity could open up new ways to study tornadoes from the ground.

  9. Using Seismic and Infrasonic Data to Identify Persistent Sources

    NASA Astrophysics Data System (ADS)

    Nava, S.; Brogan, R.

    2014-12-01

    Data from seismic and infrasound sensors were combined to aid in the identification of persistent sources such as mining-related explosions. It is of interest to operators of seismic networks to identify these signals in their event catalogs. Acoustic signals below the threshold of human hearing, in the frequency range of ~0.01 to 20 Hz are classified as infrasound. Persistent signal sources are useful as ground truth data for the study of atmospheric infrasound signal propagation, identification of manmade versus naturally occurring seismic sources, and other studies. By using signals emanating from the same location, propagation studies, for example, can be conducted using a variety of atmospheric conditions, leading to improvements to the modeling process for eventual use where the source is not known. We present results from several studies to identify ground truth sources using both seismic and infrasound data.

  10. Determining the seismic source mechanism and location for an explosive eruption with limited observational data: Augustine Volcano, Alaska

    NASA Astrophysics Data System (ADS)

    Dawson, Phillip B.; Chouet, Bernard A.; Power, John

    2011-02-01

    Waveform inversions of the very-long-period components of the seismic wavefield produced by an explosive eruption that occurred on 11 January, 2006 at Augustine Volcano, Alaska constrain the seismic source location to near sea level beneath the summit of the volcano. The calculated moment tensors indicate the presence of a volumetric source mechanism. Systematic reconstruction of the source mechanism shows the source consists of a sill intersected by either a sub-vertical east-west trending dike or a sub-vertical pipe and a weak single force. The trend of the dike may be controlled by the east-west trending Augustine-Seldovia arch. The data from the network of broadband sensors is limited to fourteen seismic traces, and synthetic modeling confirms the ability of the network to recover the source mechanism. The synthetic modeling also provides a guide to the expected capability of a broadband network to resolve very-long-period source mechanisms, particularly when confronted with limited observational data.

  11. Large Subduction Earthquake Simulations using Finite Source Modeling and the Offshore-Onshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2016-12-01

    Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, which are operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point-source method, which is appropriate for moderate events, to finite-source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used together with existing physics-based simulations to improve seismic hazard assessment.
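
    The finite-source summation described above (one interpolated Green's function per subfault, scaled by the subfault slip and delayed by the rupture time from the hypocentre) can be sketched as follows. The Green's functions here are random stand-ins, and the geometry, slip, and rupture velocity are illustrative, not values from the study.

```python
import numpy as np

def sum_subfaults(green, slips, distances, rupture_velocity, dt):
    """Sum per-subfault Green's functions (shape (n_sub, nt)), each scaled by its
    slip and delayed by the rupture time distance/rupture_velocity, into one trace."""
    n_sub, nt = green.shape
    delays = np.round(distances / rupture_velocity / dt).astype(int)
    out = np.zeros(nt + delays.max())
    for g, s, k in zip(green, slips, delays):
        out[k:k + nt] += s * g
    return out

# toy usage: 4 x 5 subfault grid (10 km spacing), rupture expanding from one corner
rng = np.random.default_rng(2)
n_sub, nt, dt = 20, 2000, 0.1
green = rng.standard_normal((n_sub, nt))
xy = np.stack(np.meshgrid(np.arange(4) * 10.0, np.arange(5) * 10.0), -1).reshape(-1, 2)
dist = np.linalg.norm(xy - xy[0], axis=1)             # km from the hypocentral subfault
trace = sum_subfaults(green, slips=np.ones(n_sub), distances=dist,
                      rupture_velocity=2.5, dt=dt)    # rupture velocity in km/s
```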

  12. A unified approach to fluid-flow, geomechanical, and seismic modelling

    NASA Astrophysics Data System (ADS)

    Yarushina, Viktoriya; Minakov, Alexander

    2016-04-01

    The perturbation of pore pressure can generate seismicity. This is supported by observations from human activities that involve fluid injection into rocks at high pressure (hydraulic fracturing, CO2 storage, geothermal energy production) and by natural examples such as volcanic earthquakes, although the seismic signals that emerge during geotechnical operations are small in both amplitude and duration compared to their natural counterparts. A possible explanation for the earthquake source mechanism is based on a number of in situ stress measurements suggesting that crustal rocks are close to their plastic yield limit. Hence, a rapid increase of the pore pressure decreases the effective normal stress and thus can trigger seismic shear deformation. At the same time, little attention has been paid to the fact that the perturbation of fluid pressure itself represents an acoustic source. Moreover, non-double-couple source mechanisms are frequently reported from the analysis of microseismicity. A consistent formulation of the source mechanism describing microseismic events should therefore include both a shear and an isotropic component, and an improved understanding of the interaction between fluid flow and seismic deformation is needed. With this study we aim to increase the competence in integrating real-time microseismic monitoring with geomechanical modelling such that there is a feedback loop between monitored deformation and stress field modelling. We propose fully integrated seismic, geomechanical and reservoir modelling. Our mathematical formulation is based on a fundamental set of force balance, mass balance, and constitutive poro-elastoplastic equations for two-phase media consisting of a deformable solid rock frame and a viscous fluid. We consider a simplified 1D modelling setup for a consistent acoustic source and wave propagation in poro-elastoplastic media. In this formulation the seismic wave is generated by local changes of the stress field and pore pressure induced by, e.g., fault generation or strain localization. This approach gives a unified framework to characterize microseismicity of both class-I (pressure-induced) and class-II (stress-triggered) events. We consider two modelling setups. In the first setup the event is located within the reservoir and is associated with a pressure/stress drop due to fracture initiation. In the second setup we assume that a seismic wave from a distant source hits the reservoir. The unified formulation of poro-elastoplastic deformation allows us to link the macroscopic stresses to local seismic instability.
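    The triggering argument, that higher pore pressure lowers the effective normal stress and can push near-critically stressed rock past failure, can be illustrated with a minimal Mohr-Coulomb check. All numbers below are assumed for illustration; this is only the effective-stress idea behind the study, not its poro-elastoplastic formulation.

```python
# Minimal sketch of the triggering argument (assumed, illustrative numbers):
# Mohr-Coulomb criterion with effective normal stress sigma_n' = sigma_n - p.
cohesion = 10.0e6       # Pa
friction_coeff = 0.6    # tan(phi)
sigma_n = 60.0e6        # total normal stress on the plane, Pa
tau = 30.0e6            # resolved shear stress, Pa

for pore_pressure in (20.0e6, 30.0e6, 40.0e6):
    sigma_eff = sigma_n - pore_pressure
    strength = cohesion + friction_coeff * sigma_eff
    state = "fails (seismic shear possible)" if tau >= strength else "stable"
    print(f"p = {pore_pressure/1e6:4.0f} MPa -> strength = {strength/1e6:5.1f} MPa, {state}")
```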

  13. Shallow seismicity in volcanic system: what role does the edifice play?

    NASA Astrophysics Data System (ADS)

    Bean, Chris; Lokmer, Ivan

    2017-04-01

    Seismicity in the upper two kilometres of volcanic systems is complex and very diverse in nature. The origins lie in the multi-physics nature of the source processes and in the often extreme heterogeneity of the near-surface structure, which introduces strong seismic wave propagation path effects that often 'hide' the source itself. A further complicating factor is that we are often in the seismic near-field, so waveforms can be intrinsically more complex than in far-field earthquake seismology. The traditional explanation for the diverse nature of shallow seismic signals calls on the direct action of fluids in the system. Fits to model data are then used to elucidate properties of the plumbing system. Here we show that solutions based on these conceptual models are not unique and that models based on a diverse range of quasi-brittle failure of low-stiffness near-surface structures are equally valid from a data-fit perspective. These earthquake-like sources also explain aspects of edifice deformation that are as yet poorly quantified.

  14. The trigger mechanism of low-frequency earthquakes on Montserrat

    NASA Astrophysics Data System (ADS)

    Neuberg, J. W.; Tuffen, H.; Collier, L.; Green, D.; Powell, T.; Dingwell, D.

    2006-05-01

    A careful analysis of low-frequency seismic events on Soufrière Hills volcano, Montserrat, points to a source mechanism that is non-destructive, repetitive, and has a stationary source location. By combining these seismological clues with new field evidence and numerical magma flow modelling, we propose a seismic trigger model which is based on brittle failure of magma in the glass transition. Loss of heat and gas from the magma results in a strong viscosity gradient across a dyke or conduit. This leads to a build-up of shear stress near the conduit wall where magma can rupture in a brittle manner, as field evidence from a rhyolitic dyke demonstrates. This brittle failure provides seismic energy, the majority of which is trapped in the conduit or dyke, forming the low-frequency coda of the observed seismic signal. The trigger source location marks the transition from ductile conduit flow to friction-controlled magma ascent. As the trigger mechanism is governed by the depth-dependent magma parameters, the source location remains fixed at a depth where the conditions allow brittle failure. This is reflected in the fixed seismic source locations.

  15. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back projection method to teleseismic P waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for this set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
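    The delay-and-stack idea behind back projection can be sketched as follows: each station trace is shifted by the travel time predicted from a trial source point and the shifted traces are stacked; the grid point with the largest stack power images the radiator. The sketch below uses a uniform velocity and straight rays on a 2-D grid, a strong simplification of the teleseismic P-wave implementation used in the study; all geometry and traces are synthetic.

```python
import numpy as np

def back_project(traces, dt, station_xy, grid_xy, velocity):
    """Delay-and-stack back projection on a 2-D grid, assuming straight rays
    and a uniform velocity (a strong simplification of the real method)."""
    nsta, nsamp = traces.shape
    stack_power = np.zeros(len(grid_xy))
    for k, gp in enumerate(grid_xy):
        stack = np.zeros(nsamp)
        for i in range(nsta):
            tt = np.linalg.norm(station_xy[i] - gp) / velocity
            shift = int(round(tt / dt))
            stack[:nsamp - shift] += traces[i, shift:]
        stack_power[k] = np.max(stack ** 2)
    return stack_power

# Synthetic test: one impulsive source at (50, 50) km recorded by 4 stations.
dt, velocity = 0.05, 6.0
src = np.array([50.0, 50.0])
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
nsamp = 2000
traces = np.zeros((4, nsamp))
for i, sta in enumerate(stations):
    onset = int(round(np.linalg.norm(sta - src) / velocity / dt))
    traces[i, onset] = 1.0

grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)], float)
power = back_project(traces, dt, stations, grid, velocity)
print("best grid point:", grid[np.argmax(power)])
```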

  16. Seismic hazard in the eastern United States

    USGS Publications Warehouse

    Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison

    2015-01-01

    The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.

  17. A spatio-temporal model for probabilistic seismic hazard zonation of Tehran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2013-08-01

    A precondition for all disaster management steps, building damage prediction, and construction code development is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporate small and medium events, the latter take into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
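    One simple way to aggregate a Poissonian (time-independent) area source and a renewal-type (time-dependent) fault source at a site is to combine their exceedance probabilities over the exposure window under an independence assumption. The sketch below is illustrative only; the rates, probabilities and the independence assumption are placeholders, and the paper's actual aggregation scheme may differ.

```python
import numpy as np

t = 50.0  # exposure time, years

# Time-independent (Poissonian) area source: annual rate of exceeding the
# target PGA at the site (illustrative value).
lam_area = 0.0015
p_area = 1.0 - np.exp(-lam_area * t)

# Time-dependent fault source: probability of the characteristic earthquake
# in the next t years from a renewal model (here just an assumed number),
# times the conditional probability that it exceeds the target PGA.
p_char_event = 0.12
p_exceed_given_event = 0.4
p_fault = p_char_event * p_exceed_given_event

# Assuming independence, combine the two contributions.
p_total = 1.0 - (1.0 - p_area) * (1.0 - p_fault)
print(f"area source: {p_area:.3f}, fault source: {p_fault:.3f}, combined: {p_total:.3f}")
```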

  18. Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations

    NASA Astrophysics Data System (ADS)

    Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang

    2017-09-01

    Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's function approaches) to the engineering application of simulated ground motion for the analysis of the seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice to assess seismic hazard in terms of nuclear safety in low-seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.

  19. Seismic velocity uncertainties and their effect on geothermal predictions: A case study

    NASA Astrophysics Data System (ADS)

    Rabbel, Wolfgang; Köhn, Daniel; Bahadur Motra, Hem; Niederau, Jan; Thorwart, Martin; Wuttke, Frank; Descramble Working Group

    2017-04-01

    Geothermal exploration relies in large part on geophysical subsurface models derived from seismic reflection profiling. These models are the framework of hydro-geothermal modeling, which further requires estimating thermal and hydraulic parameters to be attributed to the seismic strata. All petrophysical and structural properties involved in this process can be determined only with limited accuracy and thus impose uncertainties onto the resulting model predictions of temperature-depth profiles and hydraulic flow. In the present study we analyze sources and effects of uncertainties in the seismic velocity field, which translate directly into depth uncertainties of the hydraulically and thermally relevant horizons. Geological sources of these uncertainties are subsurface heterogeneity and seismic anisotropy; methodical sources are limitations in spread length and physical resolution. We demonstrate these effects using data of the EU Horizon 2020 project DESCRAMBLE, which investigates a shallow super-critical geothermal reservoir in the Larderello area. The study is based on 2D and 3D seismic reflection data and laboratory measurements on representative rock samples under simulated in-situ conditions. The rock samples consistently show P-wave anisotropy values of the order of 10-20%. However, the uncertainty of layer depths induced by anisotropy is likely to be lower, depending on the accuracy with which the spatial orientation of bedding planes can be determined from the seismic reflection images.
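    The mapping from a relative velocity error (for example, unaccounted-for anisotropy) into a depth error of a reflector is essentially linear for vertical-incidence imaging, since z = v t/2 implies dz/z ≈ dv/v. A one-line arithmetic sketch with assumed, illustrative numbers (not DESCRAMBLE values):

```python
# Depth error from a relative velocity error: z = v * t/2, so dz/z ~ dv/v.
z = 2500.0                              # nominal reflector depth, m (illustrative)
for dv_over_v in (0.05, 0.10, 0.20):    # e.g. unresolved anisotropy or picking error
    dz = z * dv_over_v
    print(f"velocity error {dv_over_v:4.0%} -> depth uncertainty ~{dz:5.0f} m")
```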

  20. A Program for Calculating and Plotting Synthetic Common-Source Seismic-Reflection Traces for Multilayered Earth Models.

    ERIC Educational Resources Information Center

    Ramananantoandro, Ramanantsoa

    1988-01-01

    Presented is a description of a BASIC program to be used on an IBM microcomputer for calculating and plotting synthetic seismic-reflection traces for multilayered earth models. Discusses finding raypaths for given source-receiver offsets using the "shooting method" and calculating the corresponding travel times. (Author/CW)
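    The shooting method referred to above can be sketched in a few lines: vary the ray parameter until the ray emerges at the desired source-receiver offset, then sum the reflection travel time through the layer stack. The sketch below is in Python rather than the original BASIC, with an illustrative three-layer model; it is a schematic reimplementation of the idea, not the published program.

```python
import numpy as np

def offset_and_time(p, v, h):
    """Horizontal offset and travel time of a ray with ray parameter p
    reflected at the base of the deepest layer (down and up legs)."""
    x = t = 0.0
    for vi, hi in zip(v, h):
        sin_i = p * vi
        if sin_i >= 1.0:                 # post-critical, no turning ray
            return np.inf, np.inf
        cos_i = np.sqrt(1.0 - sin_i ** 2)
        x += 2.0 * hi * sin_i / cos_i
        t += 2.0 * hi / (vi * cos_i)
    return x, t

def shoot(target_offset, v, h, tol=1e-3):
    """Bisection on the ray parameter until the ray emerges at the offset."""
    p_lo, p_hi = 0.0, 0.999 / max(v)
    for _ in range(100):
        p = 0.5 * (p_lo + p_hi)
        x, t = offset_and_time(p, v, h)
        if abs(x - target_offset) < tol:
            return p, t
        if x < target_offset:
            p_lo = p
        else:
            p_hi = p
    return p, t

# Illustrative 3-layer model: velocities (m/s) and thicknesses (m).
v = [1500.0, 2200.0, 3000.0]
h = [300.0, 500.0, 700.0]
for offset in (100.0, 500.0, 1000.0):
    p, t = shoot(offset, v, h)
    print(f"offset {offset:6.1f} m -> reflection time {t:.3f} s")
```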

  1. The effect of Earth's oblateness on the seismic moment estimation from satellite gravimetry

    NASA Astrophysics Data System (ADS)

    Dai, Chunli; Guo, Junyi; Shang, Kun; Shum, C. K.; Wang, Rongjiang

    2018-05-01

    Over the last decade, satellite gravimetry, as a new class of geodetic sensors, has been increasingly studied for its use in improving source model inversion for large undersea earthquakes. When these satellite-observed gravity change data are used to estimate source parameters such as seismic moment, the forward modelling of earthquake seismic deformation is crucial, because imperfect modelling could lead to errors in the resolved source parameters. Here, we discuss several modelling issues and focus on one modelling deficiency resulting from the upward continuation of gravity change considering the Earth's oblateness, which is ignored in contemporary studies. For the low-degree (degree 60) time-variable gravity solutions from Gravity Recovery and Climate Experiment mission data, the model-predicted gravity change would be overestimated by 9 per cent for the 2011 Tohoku earthquake, and by about 6 per cent for the 2010 Maule earthquake. For high-degree gravity solutions, the model-predicted gravity change at degree 240 would be overestimated by 30 per cent for the 2011 Tohoku earthquake, resulting in the seismic moment being systematically underestimated by 30 per cent.

  2. Classifying elephant behaviour through seismic vibrations.

    PubMed

    Mortimer, Beth; Rees, William Lake; Koelemeijer, Paula; Nissen-Meyer, Tarje

    2018-05-07

    Seismic waves - vibrations within and along the Earth's surface - are ubiquitous sources of information. During propagation, physical factors can obscure information transfer via vibrations and influence propagation range [1]. Here, we explore how terrain type and background seismic noise influence the propagation of seismic vibrations generated by African elephants. In Kenya, we recorded the ground-based vibrations of different wild elephant behaviours, such as locomotion and infrasonic vocalisations [2], as well as natural and anthropogenic seismic noise. We employed techniques from seismology to transform the geophone recordings into source functions - the time-varying seismic signature generated at the source. We used computer modelling to constrain the propagation ranges of elephant seismic vibrations for different terrains and noise levels. Behaviours that generate a high force on a sandy terrain with low noise propagate the furthest, over the kilometre scale. Our modelling also predicts that specific elephant behaviours can be distinguished and monitored over a range of propagation distances and noise levels. We conclude that seismic cues have considerable potential for both behavioural classification and remote monitoring of wildlife. In particular, classifying the seismic signatures of specific behaviours of large mammals remotely in real time, such as elephant running, could inform on poaching threats. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Updated Colombian Seismic Hazard Map

    NASA Astrophysics Data System (ADS)

    Eraso, J.; Arcila, M.; Romero, J.; Dimate, C.; Bermúdez, M. L.; Alvarado, C.

    2013-05-01

    The Colombian seismic hazard map used by the National Building Code (NSR-98) in effect until 2009 was developed in 1996. Since then, the National Seismological Network of Colombia has improved in both coverage and technology, providing fifteen years of additional seismic records. These improvements have allowed a better understanding of the regional geology and tectonics, which, together with seismic activity in Colombia with destructive effects, has motivated the interest in and the need for a new seismic hazard assessment in this country. Taking advantage of new instrumental information sources such as new broadband stations of the National Seismological Network, new historical seismicity data, the availability of standardized global databases, and, in general, advances in models and techniques, a new Colombian seismic hazard map was developed. A PSHA model was applied because it incorporates the effects of all seismic sources that may affect a particular site and handles the uncertainties caused by the parameters and assumptions adopted in this kind of study. First, the seismic source geometries and a complete and homogeneous seismic catalog were defined; the occurrence rate parameters of each seismic source were then calculated, establishing a national seismotectonic model. Several attenuation-distance relationships were selected depending on the type of seismicity considered. The seismic hazard was estimated using the CRISIS2007 software created by the Engineering Institute of the Universidad Nacional Autónoma de México (UNAM, National Autonomous University of Mexico). A uniform grid with 0.1° spacing was used to calculate the peak ground acceleration (PGA) and response spectral values at 0.1, 0.2, 0.3, 0.5, 0.75, 1, 1.5, 2, 2.5 and 3.0 seconds with return periods of 75, 225, 475, 975 and 2475 years. For each site, a uniform hazard spectrum and exceedance rate curves were calculated. With these results it is possible to determine environments and scenarios where the seismic hazard is a function of distance and magnitude, and also the principal seismic sources that contribute to the seismic hazard at each site (disaggregation). This project was conducted by the Servicio Geológico Colombiano (Colombian Geological Survey) and the Universidad Nacional de Colombia (National University of Colombia), with the collaboration of national and foreign experts and the National System of Prevention and Attention of Disasters (SNPAD). It is worth noting that this new seismic hazard map was used in the updated national building code (NSR-10). A new process is ongoing to improve and present the seismic hazard map in terms of intensity. This requires new knowledge of site effects at both local and regional scales, checking the existing acceleration-to-intensity relationships and developing new ones, in order to obtain results that are more understandable and useful for a wider range of users, not only in the engineering field but also for risk assessment and management institutions, researchers and the general community.
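    The return periods quoted above are the Poissonian counterparts of fixed exceedance probabilities in a 50-year exposure window (for example, 10% in 50 years corresponds to roughly 475 years). The standard conversion is sketched below; this is generic PSHA arithmetic, not an output of the Colombian model.

```python
import numpy as np

exposure = 50.0  # years
for prob in (0.10, 0.05, 0.02):
    # Poisson assumption: P = 1 - exp(-exposure / T)  =>  T = -exposure / ln(1 - P)
    T = -exposure / np.log(1.0 - prob)
    print(f"{prob:4.0%} in {exposure:.0f} yr  ->  return period of about {T:6.0f} yr")
```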

  4. Application of a Visco-Plastic Continuum Model to the Modeling of Near-Source Phenomenology and its Implications on Close-In Seismic Observables

    NASA Astrophysics Data System (ADS)

    Rougier, E.; Knight, E. E.

    2015-12-01

    The Source Physics Experiments (SPE) is a project funded by the U.S. Department of Energy at the National Nuclear Security Site. The project consists of a series of underground explosive tests designed to gain more insight into the generation and propagation of seismic energy from underground explosions in a hard rock medium, granite. Until now, four tests (SPE-1, SPE-2, SPE-3 and SPE-4Prime) with yields ranging from 87 kg to 1000 kg have been conducted in the same borehole. The generation and propagation of seismic waves is heavily influenced by the different damage mechanisms occurring at different ranges from the explosive source. These damage mechanisms include pore crushing, compressive (shear) damage, joint damage, spallation, and fracture and fragmentation. Understanding these mechanisms and how they interact with each other is essential to the interpretation of the characteristics of close-in seismic observables. Recent observations demonstrate that, for relatively small and shallow chemical explosions in granite, such as SPE-1, -2 and -3, the formation of a cavity around the working point is not the main mechanism responsible for the release of seismic moment. Shear dilatancy (bulking occurring as a consequence of compressive damage) of the medium around the source has been proposed as an alternative damage mechanism that explains the seismic moment release observed in the experiments. In this work, the interaction between cavity formation and bulking is investigated via a series of computer simulations for the SPE-2 event. The simulations are conducted using a newly developed material model, called AZ_Frac. AZ_Frac is a continuum-based, visco-plastic, strain-rate-dependent material model. One of its key features is its ability to describe continuum fracture processes while properly handling anisotropic material characteristics. The implications of the near-source numerical results for close-in seismic quantities, such as reduced displacement potentials and source spectra, are presented.

  5. Numerical modeling of landslides and generated seismic waves: The Bingham Canyon Mine landslides

    NASA Astrophysics Data System (ADS)

    Miallot, H.; Mangeney, A.; Capdeville, Y.; Hibert, C.

    2016-12-01

    Landslides are important natural hazards and key erosion processes. They create long-period surface waves that can be recorded by regional and global seismic networks. The seismic signals are generated by the acceleration/deceleration of the mass sliding over the topography. They constitute a unique and powerful tool to detect, characterize and quantify landslide dynamics. We investigate here the processes at work during the two massive landslides that struck the Bingham Canyon Mine on 10 April 2013. We carry out a combined analysis of the generated seismic signals and of the landslide processes computed with 3D modeling on a complex topography. Forces computed by broadband seismic waveform inversion are used to constrain the study, in particular the source force and the bulk dynamics. The source time functions are obtained with a 3D model (Shaltop) in which rheological parameters can be adjusted. We first investigate the influence of the initial shape of the sliding mass, which strongly affects the whole landslide dynamics. We also find that the initial shape of the source mass of the first landslide constrains rather well the second landslide source mass. We then investigate the effect of a rheological parameter, the friction angle, which strongly influences the computed seismic source function. We test numerous friction laws, such as the Coulomb friction law and a velocity-weakening friction law. Our results show that the force waveform fitting the observed data is highly variable depending on these different choices.

  6. Borehole seismic monitoring of seismic stimulation at Occidental Permian Ltd's -- South Wasson Clear Fork Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daley, Tom; Majer, Ernie

    2007-04-30

    Seismic stimulation is a proposed enhanced oil recovery (EOR) technique which uses seismic energy to increase oil production. As part of an integrated research effort (theory, lab and field studies), LBNL has been measuring the seismic amplitude of various stimulation sources in various oil fields (Majer et al., 2006; Roberts et al., 2001; Daley et al., 1999). The amplitude of the seismic waves generated by a stimulation source is an important parameter for increased oil mobility in both theoretical models and laboratory core studies. The seismic amplitude, typically in units of seismic strain, can be measured in-situ by use of a borehole seismometer (geophone). Measuring the distribution of amplitudes within a reservoir could allow improved design of stimulation source deployment. In March 2007, we provided in-field monitoring of two stimulation sources operating in Occidental (Oxy) Permian Ltd's South Wasson Clear Fork (SWCU) unit, located near Denver City, TX. The stimulation source is a downhole fluid pulsation device developed by Applied Seismic Research Corp. (ASR). Our monitoring used a borehole wall-locking 3-component geophone operating in two nearby wells.

  7. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevin J. Coppersmith; Lawrence A. Salomone; Chris W. Fuller

    2012-01-31

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings: The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives: The regional CEUS SSC model will be of value to readers who are involved in PSHA work and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. The model will be used to assess the present-day composite distribution of seismic sources in the CEUS, along with their characterization and uncertainty. In addition, this model is in a form suitable for use in PSHA evaluations for regulatory activities, such as Early Site Permits (ESPs) and Combined Operating License Applications (COLAs). Applications, Values, and Use: Development of a regional CEUS seismic source model will provide value to those who (1) have submitted an ESP or COLA for Nuclear Regulatory Commission (NRC) review before 2011; (2) will submit an ESP or COLA for NRC review after 2011; (3) must respond to safety issues resulting from NRC Generic Issue 199 (GI-199) for existing plants; and (4) will prepare PSHAs to meet design and periodic review requirements for current and future nuclear facilities. This work replaces a previous study performed approximately 25 years ago. Since that study was completed, substantial work has been done to improve the understanding of seismic sources and their characterization in the CEUS. Thus, a new regional SSC model provides a consistent, stable basis for computing PSHA for a future time span. Use of a new SSC model reduces the risk of delays in new plant licensing due to more conservative interpretations in the existing and future literature. Perspective: The purpose of this study, jointly sponsored by EPRI, the U.S. Department of Energy (DOE), and the NRC, was to develop a new CEUS SSC model. The team assembled to accomplish this purpose was composed of distinguished subject matter experts from industry, government, and academia. The resulting model is unique, and because this project has solicited input from the present-day larger technical community, it is not likely that there will be a need for significant revision for a number of years. See also the Sponsors' Perspective for more details. The goal of this project was to implement the CEUS SSC work plan for developing a regional CEUS SSC model. The work plan, formulated by the project manager and a technical integration team, consists of a series of tasks designed to meet the project objectives. This report was reviewed by a participatory peer review panel (PPRP), sponsor reviewers, the NRC, the U.S. Geological Survey, and other stakeholders. Comments from the PPRP and other reviewers were considered when preparing the report. The SSC model was completed at the end of 2011.

  8. Rethinking moment tensor inversion methods to retrieve the source mechanisms of low-frequency seismic events

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J.

    2011-12-01

    Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of volcanic systems. These LP seismic events have been observed at many volcanoes around the world and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upward movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Applying a point source model to synthetic seismograms representing an extended source process does not yield the real source mechanism; it can, however, still lead to apparent moment tensor elements which can then be compared to previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double-couple source. Furthermore, the best inversion results yield a solution composed of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique in which the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that can be temporally and spatially extended.
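    The geometric intuition behind the reduced amplitudes and the non-double-couple apparent solutions can be sketched by summing the moment tensors of double-couple segments arranged around a ring. The sketch below uses the standard strike/dip/rake-to-moment-tensor formulas (north-east-down convention); the segment parameters are illustrative, and the calculation is a bare point-source summation, not the waveform inversion performed in the study. A closed ring of vertical dip-slip segments cancels almost completely, while an inward-dipping ring sums to a CLVD-like tensor.

```python
import numpy as np

def dc_moment_tensor(strike, dip, rake, m0=1.0):
    """Double-couple moment tensor (north-east-down axes, standard
    Aki & Richards formulas) from strike/dip/rake in degrees."""
    phi, dlt, lam = np.radians([strike, dip, rake])
    mxx = -m0 * (np.sin(dlt) * np.cos(lam) * np.sin(2 * phi)
                 + np.sin(2 * dlt) * np.sin(lam) * np.sin(phi) ** 2)
    mxy = m0 * (np.sin(dlt) * np.cos(lam) * np.cos(2 * phi)
                + 0.5 * np.sin(2 * dlt) * np.sin(lam) * np.sin(2 * phi))
    mxz = -m0 * (np.cos(dlt) * np.cos(lam) * np.cos(phi)
                 + np.cos(2 * dlt) * np.sin(lam) * np.sin(phi))
    myy = m0 * (np.sin(dlt) * np.cos(lam) * np.sin(2 * phi)
                - np.sin(2 * dlt) * np.sin(lam) * np.cos(phi) ** 2)
    myz = -m0 * (np.cos(dlt) * np.cos(lam) * np.sin(phi)
                 - np.cos(2 * dlt) * np.sin(lam) * np.cos(phi))
    mzz = m0 * np.sin(2 * dlt) * np.sin(lam)
    return np.array([[mxx, mxy, mxz], [mxy, myy, myz], [mxz, myz, mzz]])

# Octagonal approximation of a ring fault: 8 dip-slip segments whose strikes
# are tangent to a circle (segment parameters are illustrative).
strikes = np.arange(0.0, 360.0, 45.0)
ring_vertical = sum(dc_moment_tensor(s, dip=90.0, rake=90.0) for s in strikes)
ring_dipping = sum(dc_moment_tensor(s, dip=80.0, rake=90.0) for s in strikes)

print("vertical ring, max |Mij| :", np.abs(ring_vertical).max())      # near-total cancellation
print("dipping ring eigenvalues :", np.linalg.eigvalsh(ring_dipping)) # ~ -1:-1:2, i.e. CLVD-like
```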

  9. Seismic generated infrasounds on Telluric Planets: Modeling and comparisons between Earth, Venus and Mars

    NASA Astrophysics Data System (ADS)

    Lognonne, P. H.; Rolland, L.; Karakostas, F. G.; Garcia, R.; Mimoun, D.; Banerdt, W. B.; Smrekar, S. E.

    2015-12-01

    Earth, Venus and Mars are all planets in which infrasounds can propagate and interact with the solid surface. This leads to infrasound generation by internal sources (e.g. quakes) and to seismic wave generation by atmospheric sources (e.g. meteors, impactor explosions, boundary layer turbulence). The atmospheric profiles, surface densities, atmospheric winds and viscous/attenuation processes are however greatly different, including major differences between Mars/Venus and Earth due to CO2 molecular relaxation. We present modeling results and compare the seismic/acoustic coupling strength for Earth, Mars and Venus. This modeling is performed through normal-mode computations for models integrating the interior and the atmosphere, both with realistic attenuation (intrinsic Q for the solid part, viscosity and molecular relaxation for the atmosphere). We complement this modeling, made for spherical structures, by the integration of wind, assuming the latter to be homogeneous at the scale of the infrasound wavelength. This allows us to compute either the seismic normal modes (e.g. Rayleigh surface waves), the acoustic modes, or the atmospheric gravity modes. Comparisons are made, for either a seismic source or an atmospheric source, of the amplitude of expected signals as a function of distance and frequency. Effects of local time are integrated in the modeling. We illustrate the Rayleigh-wave modelling with Earth data (for large quakes and volcanic eruptions). For Venus, very large coupling can occur at resonance frequencies between the solid part and the atmospheric part of the planet through infrasound/Rayleigh-wave coupling. While the atmosphere reduces the Q (quality coefficient) of Rayleigh waves in general, the atmosphere at these resonances offers better propagation than the Venus crust and increases their Q. For Mars, Rayleigh-wave excitation by atmospheric bursts is shown and discussed for the typical yield of impacts. The new data of the NASA InSight mission, which carries both seismic and infrasound sensors, will offer a unique confirmation in 2016-2017. We conclude with the seismic/infrasound coupling on Venus, which makes the detection of seismic waves from space possible through the perturbation of the infrared airglow by infrasounds. Detection thresholds as low as magnitude 5.5 can be reached with existing technologies.

  10. Studying physical properties of deformed intact and fractured rocks by micro-scale hydro-mechanical-seismicity model

    NASA Astrophysics Data System (ADS)

    Raziperchikolaee, Samin

    The pore pressure variation in an underground formation during hydraulic stimulation of low-permeability formations or CO2 sequestration into saline aquifers can induce microseismicity due to fracture generation or pre-existing fracture activation. While the analysis of microseismic data mainly focuses on mapping the location of fractures, the seismic waves generated by the microseismic events also contain information for understanding fracture mechanisms through microseismic source analysis. We developed a micro-scale geomechanics, fluid-flow and seismic model that can predict transport and seismic source behavior during rock failure. This model features the incorporation of microseismic source analysis into the transport properties of fractured and intact rock during possible rock damage and failure. The modeling method considers comprehensive grain and cement interactions through a bonded-particle model. As a result of grain deformation and microcrack development in the rock sample, the forces and displacements of the grains involved in bond breakage are measured to determine the seismic moment tensor. In addition, a geometric description of the complex pore structure is regenerated to predict the fluid flow behavior of fractured samples. Numerical experiments are conducted for different intact and fractured digital rock samples, representing various mechanical behaviors of rocks and fracture surface properties, to consider their roles in the seismic and transport properties of rocks during deformation. Studying rock deformation in detail provides an opportunity to understand the relationship between the source mechanisms of microseismic events and the transport properties of damaged rocks, and thus to better characterize fluid flow behavior in subsurface formations.
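    In bonded-particle models, the moment tensor of a micro-event is commonly assembled from the contact forces and contact positions of the bonds that break, via a symmetrized dyadic sum. The sketch below illustrates that bookkeeping with made-up force and position values; it is a generic illustration, not the specific formulation used in this work.

```python
import numpy as np

# Moment tensor of one micro-event from the contact forces and arm vectors of
# the broken bonds (symmetrized dyadic sum; units N*m). Values are made up.
forces = np.array([[ 1.2e3, -0.4e3, 0.0e3],
                   [-0.8e3,  0.6e3, 0.1e3]])     # contact forces, N
arms   = np.array([[ 2.0e-3, 0.0,    0.0],
                   [ 0.0,    1.5e-3, 0.5e-3]])   # contact positions relative to the event centroid, m

M = np.zeros((3, 3))
for f, d in zip(forces, arms):
    M += 0.5 * (np.outer(f, d) + np.outer(d, f))

iso = np.trace(M) / 3.0   # non-zero isotropic part indicates a volumetric component
print("moment tensor (N*m):\n", M)
print("isotropic part:", iso)
```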

  11. Lithospheric Models of the Middle East to Improve Seismic Source Parameter Determination/Event Location Accuracy

    DTIC Science & Technology

    2012-09-01

    State Award Nos. DE-AC52-07NA27344/24.2.3.2 and DOS_SIAA-11-AVC/NMA-1 ABSTRACT The Middle East is a tectonically complex and seismically active region. The ability to accurately locate earthquakes and other seismic events in this region is complicated by tectonics, the uneven... and seismic source parameters show that this activity comes from tectonic events. This work is informed by continuous or event-based regional

  12. A Study of Regional Waveform Calibration in the Eastern Mediterranean Region.

    NASA Astrophysics Data System (ADS)

    di Luccio, F.; Pino, A.; Thio, H.

    2002-12-01

    We modeled Pnl phases from several moderate-magnitude events in the eastern Mediterranean to test methods and to develop path calibrations for source determination. The study region, spanning from the eastern part of the Hellenic arc to the eastern Anatolian fault, is mostly affected by moderate earthquakes that can produce significant damage. The selected area consists of several tectonic environments, which increases the level of difficulty in waveform modeling. The results of this study are useful for the analysis of regional seismicity and for seismic hazard as well, in particular because very few broadband seismic stations are available in the selected area. The obtained velocity model gives a 30 km crustal thickness and low upper mantle velocities. The inversion procedure applied to determine the source mechanism has been successful, also in terms of discrimination of depth, for the entire range of selected paths. We conclude that, using a true calibration of the seismic structure and high-quality broadband data, it is possible to determine the seismic source in terms of mechanism, even with a single station.

  13. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    NASA Astrophysics Data System (ADS)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect laboratory for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various types of geophysical monitoring and particularly the rapid geodynamics, which clearly reveals some seismotectonic processes. We present here the model components and the procedures adopted for defining the seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper (Peruzza et al., 2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps. These can be relevant for the retrofitting of the existing building stock and for driving risk reduction interventions. These analyses do not account for regional M > 6 seismogenic sources, which dominate the hazard over long return times (≥ 500 years).
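    The chain from fault geometry and slip rate to a characteristic magnitude and a mean recurrence time can be sketched with generic scaling arithmetic. The sketch below does not use the Etna-specific scaling relationship derived in the paper nor the FiSH tool; the fault dimensions, slip rate, shear modulus and characteristic slip are assumed, illustrative values.

```python
import numpy as np

# Illustrative fault parameters (not the Etna values).
length_km, width_km = 12.0, 3.0
slip_rate_mm_yr = 2.0
mu = 3.0e10                # shear modulus, Pa (generic crustal value; likely lower in a volcano)
mean_slip_m = 0.4          # slip of the characteristic event (assumed)

area_m2 = length_km * 1e3 * width_km * 1e3
m0_char = mu * area_m2 * mean_slip_m                  # characteristic seismic moment, N*m
mw_char = (np.log10(m0_char) - 9.1) / 1.5             # moment magnitude (IASPEI convention)

moment_rate = mu * area_m2 * slip_rate_mm_yr * 1e-3   # N*m per year, from the slip rate
t_mean = m0_char / moment_rate                        # mean recurrence time if all moment
                                                      # is released in characteristic events
print(f"Mw_char ~ {mw_char:.2f}, mean recurrence ~ {t_mean:.0f} yr")
```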

  14. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    USGS Publications Warehouse

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  15. Code for Calculating Regional Seismic Travel Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
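    The location workflow described above can be sketched as a grid search that minimizes travel-time residuals. The sketch below does not call the RSTT library; a uniform-velocity, straight-ray predictor stands in for the real 2.5-D forward calculator, and all station geometry and observations are synthetic.

```python
import numpy as np

def predict_tt(source_xy, station_xy, velocity=6.0):
    """Stand-in forward calculator (uniform velocity, straight rays);
    a real application would call a 2.5-D calculator such as RSTT here."""
    return np.linalg.norm(station_xy - source_xy, axis=1) / velocity

# Synthetic observations from a "true" event at (40, 70) km with origin time 3 s.
stations = np.array([[0, 0], [120, 0], [0, 120], [120, 120], [60, -20]], float)
true_src, true_t0 = np.array([40.0, 70.0]), 3.0
observed = true_t0 + predict_tt(true_src, stations)

best = (np.inf, None, None)
for x in np.arange(0, 121, 2.0):
    for y in np.arange(0, 121, 2.0):
        trial = np.array([x, y])
        tt = predict_tt(trial, stations)
        t0 = np.mean(observed - tt)         # best origin time for this trial point
        rms = np.sqrt(np.mean((observed - t0 - tt) ** 2))
        if rms < best[0]:
            best = (rms, trial, t0)

rms, loc, t0 = best
print(f"located at {loc} km, origin time {t0:.2f} s, RMS residual {rms:.3f} s")
```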

  16. Tectonic evolution of the Mexico flat slab and patterns of intraslab seismicity.

    NASA Astrophysics Data System (ADS)

    Moresi, L. N.; Sandiford, D.

    2017-12-01

    The Cocos plate slab is horizontal for about 250 km beneath the Guerrero region of southern Mexico. Analogous morphologies can spontaneously develop in subduction models through the presence of a low-viscosity mantle wedge. The Mw 7.1 Puebla earthquake appears to have ruptured the inboard corner of the Mexican flat slab, likely in close proximity to the mantle wedge corner. In addition to the historical seismic record, the Puebla earthquake provides a valuable constraint through which to assess geodynamic models for flat slab evolution. Slab deformation predicted by the "weak wedge" model is consistent with past seismicity in both the upper plate and the slab. Below the flat section, the slab is anomalously warm relative to its depth; the lack of seismicity in the deeper part of the slab fits the global pattern of temperature-controlled slab seismicity. This has implications for understanding the deeper structure of the slab, including the seismic hazard from source regions downdip of the Puebla rupture (epicenters closer to Mexico City). While historical seismicity provides a deformation pattern consistent with the weak wedge model, the Puebla earthquake is somewhat anomalous. The earthquake source mechanism is consistent with stress orientations in our models; however, it maps to a region of relatively low deviatoric stress.

  17. Rigorous Approach in Investigation of Seismic Structure and Source Characteristicsin Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
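    The sampling machinery behind a hierarchical Bayesian inversion can be illustrated with a toy Metropolis-Hastings sampler in which the data noise level is itself an unknown (the hierarchical part). This is a drastic simplification of the hierarchical, trans-dimensional inversions described above (one model parameter, no Voronoi partition, no dimension changes); all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy observations of a single "velocity" with unknown noise level.
true_v, true_sigma = 3.5, 0.2
data = true_v + true_sigma * rng.standard_normal(30)

def log_likelihood(v, sigma):
    # Gaussian likelihood with the noise standard deviation as a free parameter.
    return -0.5 * np.sum(((data - v) / sigma) ** 2) - len(data) * np.log(sigma)

# Metropolis-Hastings over (v, sigma); sigma is the hierarchical noise parameter.
v, sigma = 3.0, 0.5
samples = []
for _ in range(20000):
    v_new = v + 0.05 * rng.standard_normal()
    s_new = abs(sigma + 0.02 * rng.standard_normal())   # keep sigma positive
    log_a = log_likelihood(v_new, s_new) - log_likelihood(v, sigma)
    if np.log(rng.random()) < log_a:
        v, sigma = v_new, s_new
    samples.append((v, sigma))

samples = np.array(samples[5000:])   # discard burn-in
print("posterior mean v     :", samples[:, 0].mean())
print("posterior mean sigma :", samples[:, 1].mean())
```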

  18. The New Italian Seismic Hazard Model

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.

    2017-12-01

    In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica, CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly aimed at the update of the seismic code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with the experts in earthquake engineering who will then participate in the revision of the building code. The activities were organized in 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has selected the most updated information about seismicity (historical and instrumental), seismogenic faults, and deformation (both from seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources and gridded seismicity based on different approaches. The GMPE task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures to test, with the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme for the different components of the PSHA model that has been built through three independent steps: a formal experts' elicitation, the outcomes of the testing phase, and the correlation between the outcomes. Finally, we explore through different techniques the influence of the declustering procedure on seismic hazard.

  19. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    NASA Astrophysics Data System (ADS)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media, based on the finite difference method at local to regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documentation in a public repository.

  20. Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.

    2016-06-01

    We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to the statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (the latter treated as background seismicity). An event tree approach and ensemble modelling are used instead of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating subduction and background (crustal) earthquakes separately allows for optimal use of available information and avoids significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with proportions that depend on source-target position and tsunami intensity; and (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
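    Step 4, ensemble modelling, can be sketched as treating the alternative model implementations as a weighted ensemble of hazard curves from which a mean curve and percentile curves are extracted. The curves, weights and the simple weighted-percentile rule below are synthetic placeholders, not the actual SPTHA ensemble.

```python
import numpy as np

# Synthetic ensemble: tsunami hazard curves (probability of exceeding each
# intensity level) from 4 alternative model implementations, with weights.
intensities = np.linspace(0.5, 5.0, 10)           # e.g. maximum inundation height, m
curves = np.array([np.exp(-intensities / s) for s in (0.8, 1.0, 1.2, 1.5)])
weights = np.array([0.2, 0.4, 0.3, 0.1])

ensemble_mean = weights @ curves

def weighted_percentile(values, weights, q):
    """Smallest value whose cumulative weight reaches the quantile q."""
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, q)]

p16 = [weighted_percentile(curves[:, i], weights, 0.16) for i in range(len(intensities))]
p84 = [weighted_percentile(curves[:, i], weights, 0.84) for i in range(len(intensities))]

for x, m, q16, q84 in zip(intensities, ensemble_mean, p16, p84):
    print(f"h > {x:3.1f} m : mean {m:.3f}  (16-84%: {q16:.3f} - {q84:.3f})")
```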

  1. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale application. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  2. Active fault databases and seismic hazard calculations: a compromise between science and practice. Review of case studies from Spain.

    NASA Astrophysics Data System (ADS)

    Garcia-Mayordomo, Julian; Martin-Banda, Raquel; Insua-Arevalo, Juan Miguel; Alvarez-Gomez, Jose Antonio; Martinez-Diaz, Jose Jesus

    2017-04-01

    Since the Quaternary Active Faults Database of Iberia (QAFI) was released in February 2012, a number of studies aimed at producing seismic hazard assessments have made use of it. We will present a summary of the shortcomings and advantages that were encountered when QAFI was considered in different seismic hazard studies. These include the production of the new official seismic hazard map of Spain, performed in view of the foreseen adoption of Eurocode-8 during 2017. The QAFI database was considered a complementary source of information for designing the seismogenic source-zone models used in the calculations, particularly for estimating the maximum magnitude distribution in each zone and for assigning the predominant rupture mechanism based on style of faulting. We will also review the different results obtained by other studies that considered QAFI faults as independent seismogenic sources rather than source zones, revealing, on the one hand, the crucial importance of data reliability and, on the other, the strong influence that ground-motion attenuation models have on the actual impact of fault sources on hazard results. Finally, we will briefly present the updated version of the database (QAFI v.3, 2015), which includes an original scheme for evaluating the reliability of fault seismic parameters, specifically devised to facilitate decision-making by seismic hazard practitioners.
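    One of the ways a fault database such as QAFI feeds a source-zone model is by constraining the maximum magnitude from mapped fault dimensions. A common choice (not specific to QAFI or to this study) is the Wells and Coppersmith (1994) regression of moment magnitude on surface rupture length; the sketch below applies that published all-slip-type relation to hypothetical fault lengths.

```python
import math

def mmax_wells_coppersmith(srl_km: float) -> float:
    """Wells & Coppersmith (1994), all slip types:
    M = 5.08 + 1.16 * log10(surface rupture length in km)."""
    return 5.08 + 1.16 * math.log10(srl_km)

for length in (20.0, 50.0, 100.0):      # hypothetical mapped fault lengths (km)
    print(f"L = {length:5.1f} km  ->  Mmax ~ {mmax_wells_coppersmith(length):.1f}")
```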

  3. Constitutive law for seismicity rate based on rate and state friction: Dieterich 1994 revisited.

    NASA Astrophysics Data System (ADS)

    Heimisson, E. R.; Segall, P.

    2017-12-01

    Dieterich [1994] derived a constitutive law for seismicity rate based on rate and state friction, which has been applied widely to aftershocks, earthquake triggering, and induced seismicity in various geological settings. Here, this influential work is revisited and re-derived in a more straightforward manner. By virtue of this new derivation the model is generalized to include changes in effective normal stress associated with background seismicity. Furthermore, the general case when seismicity rate is not constant under constant stressing rate is formulated. The new derivation directly provides practical integral expressions for the cumulative number of events and the rate of seismicity for arbitrary stressing history. Arguably, the most prominent limitation of Dieterich's 1994 theory is the assumption that seismic sources do not interact. Here we derive a constitutive relationship that considers source interactions between sub-volumes of the crust, where the stress in each sub-volume is assumed constant. Interactions are considered both under constant stressing rate conditions and for arbitrary stressing history. This theory can be used to model seismicity rate due to stress changes or to estimate stress changes using observed seismicity from triggered earthquake swarms where earthquake interactions and magnitudes are taken into account. We identify special conditions under which the influence of interactions cancels and the predictions reduce to those of Dieterich 1994. This remarkable result may explain the apparent success of the model when applied to observations of triggered seismicity. This approach has application to understanding and modeling induced and triggered seismicity, and to the quantitative interpretation of geodetic and seismic data. It enables simultaneous modeling of geodetic and seismic data in a self-consistent framework. To date, physics-based modeling of seismicity, with or without geodetic data, has been found to give insight into various processes related to aftershocks, volcano-tectonic (VT) and injection-induced seismicity. However, the role of various processes such as earthquake interactions, magnitudes and effective normal stress has been unclear. The new theory presented resolves some of the pertinent issues raised in the literature regarding application of the Dieterich 1994 model.
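    For readers unfamiliar with the original formulation, the best-known special case of Dieterich's (1994) law gives the seismicity rate R relative to the background rate r after a sudden Coulomb stress step ΔS applied under a constant background stressing rate: R/r = [1 + (exp(−ΔS/Aσ) − 1) exp(−t/t_a)]⁻¹, with aftershock relaxation time t_a = Aσ / (stressing rate). The sketch below evaluates that closed form with placeholder parameter values; it illustrates the non-interacting theory that this abstract generalizes, not the new interacting-source formulation.

```python
import numpy as np

def dieterich_rate(t, dS, A_sigma, stressing_rate, r=1.0):
    """Seismicity rate after a stress step dS (Dieterich, 1994, non-interacting case).
    t in the same time unit as A_sigma/stressing_rate; r is the background rate."""
    t_a = A_sigma / stressing_rate                # aftershock relaxation time
    return r / (1.0 + (np.exp(-dS / A_sigma) - 1.0) * np.exp(-t / t_a))

# Illustrative numbers only: A*sigma = 0.05 MPa, tectonic stressing rate 0.01 MPa/yr,
# positive stress step of 0.2 MPa.
t = np.logspace(-3, 2, 6)                         # years after the step
print(dieterich_rate(t, dS=0.2, A_sigma=0.05, stressing_rate=0.01))
```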

  4. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    NASA Astrophysics Data System (ADS)

    Lundgren, Paul; Nikkhoo, Mehdi; Samsonov, Sergey V.; Milillo, Pietro; Gil-Cruz, Fernando; Lazo, Jonathan

    2017-07-01

    Copahue volcano straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 106 m3/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  5. The Seismotectonic Model of Southern Africa

    NASA Astrophysics Data System (ADS)

    Midzi, Vunganai; Mulabisana, Thifelimbulu; Manzunzu, Brassnavy

    2013-04-01

    Presented in this report is a summary of the major structures and seismotectonic zones in Southern Africa (Botswana, Lesotho, Namibia, South Africa and Swaziland), which includes available information on fault plane solutions and stress data. Reports published by several experts contributed substantially to the preparation of the zones. The work was prepared as part of the requirements for the SIDA/IGCP Project 601 titled "Seismotectonics and Seismic Hazards in Africa" as well as part of the seismic source characterisation of the GEM-Africa seismic hazard study. The seismic data used are part of the earthquake catalogue being prepared for the GEM-Africa project, which includes historical and instrumental records as collected from various agencies. Seventeen seismic zones/sources were identified and demarcated using all the available information. Two of the identified sources are faults with reliable evidence of their activity. Though more faults have been identified in unpublished material as being active, more work is being carried out to obtain information that can be used to characterise them before they are included in the seismotectonic model. Explanations for the selected boundaries of the zones are also given in the report. It should be noted that this information is the first draft of the seismic source zones of the region. Further interpretation of the data is envisaged, which might result in more than one version of the zones.

  6. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that was used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS) Petersen et al., 1996; Frankel et al., 1996]. On average the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall earthquake rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the standard CDMG-USGS model by less than 10% across most of California but is higher (generally about 10% to 30%) within 20 km from some faults.
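    The kind of rate comparison described here ultimately rests on moment balance: the geologic moment rate of a fault, μ·A·(slip rate), divided by the moment of the characteristic earthquake gives the implied recurrence rate, so lowering the seismogenic moment rate or raising the characteristic magnitude lowers that rate. A toy calculation (hypothetical fault dimensions and slip rate, not values from the consensus model) makes this sensitivity explicit:

```python
def moment_magnitude_to_m0(mw: float) -> float:
    """Hanks & Kanamori (1979): seismic moment M0 in N*m."""
    return 10 ** (1.5 * mw + 9.05)

def implied_rate(mu, length_km, width_km, slip_rate_mm_yr, m_char):
    """Events per year implied by releasing all seismogenic moment in characteristic earthquakes."""
    area = (length_km * 1e3) * (width_km * 1e3)           # m^2
    moment_rate = mu * area * (slip_rate_mm_yr * 1e-3)    # N*m per year
    return moment_rate / moment_magnitude_to_m0(m_char)

mu = 3e10   # shear modulus, Pa
# Hypothetical fault: 60 km long, 12 km wide, 5 mm/yr slip rate.
for m_char in (6.5, 6.8, 7.1):
    r = implied_rate(mu, 60.0, 12.0, 5.0, m_char)
    print(f"Mchar = {m_char}: {r:.4f} events/yr  (recurrence ~ {1/r:,.0f} yr)")
```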

  7. Amplitude and Frequency Experimental Field Measurements of a Rotating-Imbalance Seismic Source Associated with Changes in Lithology Surrounding a Borehole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephen R. Novascone; Michael J. Anderson; David M. Weinberg

    2003-10-01

    Field measurements of the vibration amplitude of a rotating-imbalance seismic source in a liquid-filled borehole are described. The borehole was a cased oil well that had been characterized by gamma-ray cement bond and compensated neutron litho-density/gamma-ray logs. The well logs indicated an abrupt transition from shale to limestone at a depth of 2638 ft. The vibration amplitude and frequency of a rotating-imbalance seismic source were measured versus applied voltage as the source was raised from 2654 to 2618 ft through the shale–limestone transition. It was observed that the vibration amplitude changed by approximately 10% in magnitude and the frequency changed by approximately 15% as the source passed the shale–limestone transition. The measurements were compared to predictions provided by a two-dimensional analytical model of a rotating-imbalance source located in a liquid-filled borehole. It was observed that the sensitivity of the experimentally measured vibration amplitude of the seismic source to the properties of the surrounding geologic media was an order of magnitude greater than that predicted by the two-dimensional analytical model.

  8. Use of raster-based data layers to model spatial variation of seismotectonic data in probabilistic seismic hazard assessment

    NASA Astrophysics Data System (ADS)

    Zolfaghari, Mohammad R.

    2009-07-01

    Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples for such applications are risk mitigation, disaster management, post disaster recovery planning and catastrophe loss estimation and risk management. Due to the lack of proper knowledge with regard to factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain with high uncertainties that contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides practical facility for better capture of spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used in order to model geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples for such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology provides improvements in several aspects of the standard analytical tools currently being used for assessment and mapping of regional seismic hazard. The proposed methodology makes the best use of the recent advancements in computer technology in both software and hardware. The proposed approach is well structured to be implemented using conventional GIS tools.
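    The cell-based formulation described here is, at heart, the standard PSHA sum carried out over raster cells: each cell contributes its own activity rate, magnitude distribution and distance to the site, and a ground-motion model supplies the conditional exceedance probability. A stripped-down sketch is given below; the cell rates, the lognormal ground-motion relation and the truncated Gutenberg-Richter parameters are all placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def gr_pmf(m, b, mmin, mmax):
    """Discretized truncated Gutenberg-Richter probability mass function."""
    pdf = b * np.log(10) * 10 ** (-b * (m - mmin))
    return pdf / pdf.sum()

def gmpe_ln_mean(m, r_km):
    """Hypothetical ground-motion model: ln PGA(g) = -3.5 + 1.0*M - 1.2*ln(R + 10)."""
    return -3.5 + 1.0 * m - 1.2 * np.log(r_km + 10.0)

def hazard(cells, site, a_target, b=1.0, mmin=5.0, mmax=7.5, sigma_ln=0.6):
    """Annual rate of PGA > a_target at `site` from raster cells [(lon, lat, rate), ...]."""
    m = np.arange(mmin, mmax + 0.05, 0.1)
    pm = gr_pmf(m, b, mmin, mmax)
    lam = 0.0
    for lon, lat, rate in cells:
        r_km = 111.0 * np.hypot(lon - site[0], lat - site[1])   # crude flat-earth distance
        p_exc = norm.sf(np.log(a_target), loc=gmpe_ln_mean(m, r_km), scale=sigma_ln)
        lam += rate * np.sum(pm * p_exc)
    return lam

cells = [(-0.1, 0.0, 0.02), (0.3, 0.2, 0.01), (0.0, 0.5, 0.005)]   # made-up cells, M>=5 rates/yr
print("lambda(PGA > 0.2 g) =", hazard(cells, site=(0.0, 0.0), a_target=0.2))
```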

  9. On the use of a laser ablation as a laboratory seismic source

    NASA Astrophysics Data System (ADS)

    Shen, Chengyi; Brito, Daniel; Diaz, Julien; Zhang, Deyuan; Poydenot, Valier; Bordes, Clarisse; Garambois, Stéphane

    2017-04-01

    Mimicking near-surface seismic imaging under well-controlled laboratory conditions is potentially a powerful tool to study large-scale wave propagation in geological media by means of upscaling. Laboratory measurements are indeed particularly suited for testing theoretical modelling and for comparisons with numerical approaches. We have developed an automated Laser Doppler Vibrometer (LDV) platform, which is able to detect and register broadband nano-scale displacements on the surface of various materials. This laboratory equipment has already been validated in experiments where piezoelectric transducers were used as seismic sources. We are currently exploring a new seismic source in our experiments, laser ablation, in order to compensate for some drawbacks encountered with piezoelectric sources. The laser ablation source has been considered an interesting ultrasound wave generator since the 1960s. It was believed to have numerous potential applications, such as Non-Destructive Testing (NDT) and the measurement of velocities and attenuations in solid samples. We aim at adapting and developing this technique for geophysical experimental investigations in order to produce and explore complete micro-seismic data sets in the laboratory. We will first present the laser characteristics, including its mechanism, stability and reproducibility, and will evaluate in particular the directivity patterns of such a seismic source. We have started by applying the laser ablation source to the surfaces of multi-scale homogeneous aluminum samples and are now testing it on heterogeneous and fractured limestone cores. Some other results of data processing will also be shown, especially the 2D-slice VP and VS tomographic images obtained in limestone samples. Apart from the experimental records, numerical simulations will be carried out for both the laser source modelling and the wave propagation in different media. First attempts will be made to compare quantitatively the experimental data with simulations. Meanwhile, CT-scan X-ray images of these limestone cores will be used to check the relative pertinence of the velocity tomography images produced by this newly developed laser ablation seismic source.

  10. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated the relationships among source parameters of the underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the event locations are tiny on a regional scale. These tiny location differences validate a linear model assumption. We estimated source spectral ratios by excluding path effects, based on spectral ratios of the observed seismograms. We then estimated empirical relationships among depths of burial and yields based on theoretical source models.

  11. A study of infrasonic anisotropy and multipathing in the atmosphere using seismic networks.

    PubMed

    Hedlin, Michael A H; Walker, Kristoffer T

    2013-02-13

    We discuss the use of reverse time migration (RTM) with dense seismic networks for the detection and location of sources of atmospheric infrasound. Seismometers measure the response of the Earth's surface to infrasound through acoustic-to-seismic coupling. RTM has recently been applied to data from the USArray network to create a catalogue of infrasonic sources in the western US. Specifically, several hundred sources were detected in 2007-2008, many of which were not observed by regional infrasonic arrays. The influence of the east-west stratospheric zonal winds is clearly seen in the seismic data with most detections made downwind of the source. We study this large-scale anisotropy of infrasonic propagation, using a winter and summer source in Idaho. The bandpass-filtered (1-5 Hz) seismic waveforms reveal in detail the two-dimensional spread of the infrasonic wavefield across the Earth's surface within approximately 800 km of the source. Using three-dimensional ray tracing, we find that the stratospheric winds above 30 km altitude in the ground-to-space (G2S) atmospheric model explain well the observed anisotropy pattern. We also analyse infrasound from well-constrained explosions in northern Utah with a denser IRIS PASSCAL seismic network. The standard G2S model correctly predicts the anisotropy of the stratospheric duct, but it incorrectly predicts the dimensions of the shadow zones in the downwind direction. We show that the inclusion of finer-scale structure owing to internal gravity waves infills the shadow zones and predicts the observed time durations of the signals. From the success of this method in predicting the observations, we propose that multipathing owing to fine scale, layer-cake structure is the primary mechanism governing propagation for frequencies above approximately 1 Hz and infer that stochastic approaches incorporating internal gravity waves are a useful improvement to the standard G2S model for infrasonic propagation modelling.

  12. Surface Deformation and Source Model at Semisopochnoi Volcano from InSAR and Seismic Analysis During the 2014 and 2015 Seismic Swarms

    NASA Astrophysics Data System (ADS)

    DeGrandpre, K.; Pesicek, J. D.; Lu, Z.

    2016-12-01

    During the summer of 2014 and the early spring of 2015, two notable increases in seismic activity at Semisopochnoi volcano in the western Aleutian Islands were recorded on AVO seismometers on Semisopochnoi and neighboring islands. These seismic swarms did not lead to an eruption. This study employs differential SAR techniques using TerraSAR-X images in conjunction with more accurate relocation of the recorded seismic events through simultaneous inversion of event travel times and a three-dimensional velocity model using tomoDD. The interferograms created from the SAR images exhibit surprising coherence and an island-wide spatial distribution of inflation, which is then used in a Mogi model to define the three-dimensional location and volume change required for a source at Semisopochnoi to produce the observed surface deformation. The tomoDD relocations provide a more accurate and realistic three-dimensional velocity model as well as a tighter clustering of events for both swarms that clearly outlines a linear seismic void within the larger group of shallow (<10 km) seismicity. While no direct conclusions as to the relationship between these seismic events and the observed surface deformation can be made at this time, these techniques are both complementary and efficient forms of remotely monitoring volcanic activity that provide much deeper insight into the processes involved without having to risk hazardous or costly fieldwork.
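    The Mogi point-pressure source referred to here has a simple closed form for surface displacements, which is what makes it convenient for fitting island-wide InSAR inflation patterns. A generic sketch follows; it is not the study's actual inversion, and the Poisson ratio, volume change and geometry are placeholder values.

```python
import numpy as np

def mogi_surface_displacement(x, y, xs, ys, depth, dV, nu=0.25):
    """Surface displacements of a Mogi point source.
    dV: volume change (m^3); depth: source depth (m); returns (ux, uy, uz) in m."""
    dx, dy = x - xs, y - ys
    R3 = (dx**2 + dy**2 + depth**2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    return c * dx / R3, c * dy / R3, c * depth / R3

# Illustrative: 5e6 m^3 of inflation at 4 km depth, observed 3 km from the source.
ux, uy, uz = mogi_surface_displacement(3000.0, 0.0, 0.0, 0.0, 4000.0, 5e6)
print(f"ur = {np.hypot(ux, uy) * 100:.2f} cm, uz = {uz * 100:.2f} cm")
```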

  13. Seismic Source Scaling and Characteristics of Six North Korean Underground Nuclear Explosions

    NASA Astrophysics Data System (ADS)

    Park, J.; Stump, B. W.; Che, I. Y.; Hayward, C.

    2017-12-01

    We estimate the range of yields and source depths for the six North Korean underground nuclear explosions in 2006, 2009, 2013, 2016 (January and September), and 2017, based on regional seismic observations in South Korea and China. Seismic data used in this study are from three seismo-acoustic stations, BRDAR, CHNAR, and KSGAR, cooperatively operated by SMU and KIGAM, the KSRS seismic array operated by the Comprehensive Nuclear-Test-Ban Treaty Organization, and MDJ, a station in the Global Seismographic Network. We calculate spectral ratios for event pairs using seismograms from the six explosions observed along the same paths and at the same receivers. These relative seismic source scaling spectra for Pn, Pg, Sn, and surface wave windows provide the basis for a grid-search source solution that estimates source yield and depth for each event based on both the modified Mueller and Murphy (1971; MM71) and Denny and Johnson (1991; DJ91) source models. The grid search is used to identify the best-fit empirical spectral ratios subject to the source models by minimizing a goodness-of-fit (GOF) measure in the frequency range of 0.5-15 Hz. For all cases, the DJ91 model produces higher ratios of depth and yield than MM71. These initial results include significant trade-offs between depth and yield in all cases. In order to better take the effect of source depth into account, a modified grid search was implemented that includes the propagation effects for different source depths by including reflectivity Green's functions in the grid-search procedure. This revision reduces the trade-offs between depth and yield, results in better model fits at frequencies as high as 15 Hz, and yields GOF values smaller than those obtained when the depth effects on the Green's functions were ignored. The depth and yield estimates for all six explosions using this new procedure will be presented.
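    The core of the grid-search procedure described above can be illustrated generically: pick a parametric source-spectrum model, form the theoretical spectral ratio between a candidate (yield, depth) pair and a reference event, and scan the grid for the pair that best matches the observed ratio. The sketch below uses a simple Brune-style placeholder spectrum rather than the actual MM71/DJ91 explosion models, so it conveys only the structure of the search, not the published results; all numbers are invented.

```python
import numpy as np

def placeholder_source_spectrum(f, yield_kt, depth_m):
    """Stand-in omega-squared spectrum: moment grows with yield,
    corner frequency decreases with yield and (weakly) with depth. Illustrative only."""
    m0 = 1e13 * yield_kt                                   # arbitrary scaling
    fc = 2.0 * yield_kt ** (-1.0 / 3.0) * (depth_m / 500.0) ** (-0.1)
    return m0 / (1.0 + (f / fc) ** 2)

def best_fit_pair(f, observed_ratio, yields, depths, ref=(10.0, 500.0)):
    """Grid search for (yield, depth) minimizing log-spectral-ratio misfit vs a fixed reference event."""
    ref_spec = placeholder_source_spectrum(f, *ref)
    best, best_misfit = None, np.inf
    for w in yields:
        for d in depths:
            ratio = placeholder_source_spectrum(f, w, d) / ref_spec
            misfit = np.mean((np.log(ratio) - np.log(observed_ratio)) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (w, d), misfit
    return best, best_misfit

f = np.linspace(0.5, 15.0, 60)                             # Hz, the band used in the study
synthetic_obs = placeholder_source_spectrum(f, 50.0, 700.0) / placeholder_source_spectrum(f, 10.0, 500.0)
print(best_fit_pair(f, synthetic_obs, yields=np.arange(5, 205, 5), depths=np.arange(100, 1100, 100)))
```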

  14. Seismic hazard assessment of the Province of Murcia (SE Spain): analysis of source contribution to hazard

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, J.; Gaspar-Escribano, J. M.; Benito, B.

    2007-10-01

    A probabilistic seismic hazard assessment of the Province of Murcia in terms of peak ground acceleration (PGA) and spectral accelerations [SA(T)] is presented in this paper. In contrast to most of the previous studies in the region, which were performed for PGA making use of intensity-to-PGA relationships, hazard is here calculated in terms of magnitude and using European spectral ground-motion models. Moreover, we have considered the most important faults in the region as specific seismic sources, and have also comprehensively reviewed the earthquake catalogue. Hazard calculations are performed following the Probabilistic Seismic Hazard Assessment (PSHA) methodology using a logic tree, which accounts for three different seismic source zonings and three different ground-motion models. Hazard maps in terms of PGA and SA(0.1, 0.2, 0.5, 1.0 and 2.0 s) and coefficient of variation (COV) for the 475-year return period are shown. Subsequent analysis is focused on three sites of the province, namely the cities of Murcia, Lorca and Cartagena, which are important industrial and tourism centres. Results at these sites have been analysed to evaluate the influence of the different input options. The most important factor affecting the results is the choice of the attenuation relationship, whereas the influence of the selected seismic source zonings appears strongly site dependent. Finally, we have performed an analysis of source contribution to hazard at each of these cities to provide preliminary guidance in devising specific risk scenarios. We have found that local source zones control the hazard for PGA and SA(T ≤ 1.0 s), although the contribution from specific fault sources and long-distance north Algerian sources becomes significant from SA(0.5 s) onwards.

  15. Exploring the Dynamics of the August 2010 Mount Meager Rock Slide-Debris Flow Jointly with Seismic Source Inversion and Numerical Landslide Modeling

    NASA Astrophysics Data System (ADS)

    Allstadt, K.; Moretti, L.; Mangeney, A.; Stutzmann, E.; Capdeville, Y.

    2014-12-01

    The time series of forces exerted on the earth by a large and rapid landslide derived remotely from the inversion of seismic records can be used to tie post-slide evidence to what actually occurred during the event and can be used to tune numerical models and test theoretical methods. This strategy is applied to the 48.5 Mm3 August 2010 Mount Meager rockslide-debris flow in British Columbia, Canada. By inverting data from just five broadband seismic stations less than 300 km from the source, we reconstruct the time series of forces that the landslide exerted on the Earth as it occurred. The result illuminates a complex retrogressive initiation sequence and features attributable to flow over a complicated path including several curves and runup against a valley wall. The seismically derived force history also allows for the estimation of the horizontal acceleration (0.39 m/s^2) and average apparent coefficient of basal friction (0.38) of the rockslide, and the speed of the center of mass of the debris flow (peak of 92 m/s). To extend beyond these simple calculations and to test the interpretation, we also use the seismically derived force history to guide numerical modeling of the event - seeking to simulate the landslide in a way that best fits both the seismic and field constraints. This allows for a finer reconstruction of the volume, timing, and sequence of events, estimates of friction, and spatiotemporal variations in speed and flow thickness. The modeling allowed us to analyze the sensitivity of the force to the different parameters involved in the landslide modeling to better understand what can and cannot be constrained from seismic source inversions of landslide signals.

  16. A New Seismic Hazard Model for Mainland China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.

    2017-12-01

    We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
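    The tapered Gutenberg-Richter (TGR) distribution mentioned here is usually written in terms of seismic moment: the probability that an event exceeds moment M is (Mt/M)^β · exp((Mt − M)/Mc), where Mt is the threshold moment, β ≈ 2b/3, and the corner moment Mc is what the geodetic moment rate constrains. The short sketch below evaluates that survival function with placeholder zone parameters (not values from the China model).

```python
import numpy as np

def m0_from_mw(mw):
    """Hanks & Kanamori (1979): seismic moment in N*m."""
    return 10 ** (1.5 * mw + 9.05)

def tgr_survival(mw, mw_t, beta, mw_corner):
    """P(moment > M(mw)) for the tapered Gutenberg-Richter (Kagan) distribution."""
    m, mt, mc = m0_from_mw(mw), m0_from_mw(mw_t), m0_from_mw(mw_corner)
    return (mt / m) ** beta * np.exp((mt - m) / mc)

# Placeholder zone parameters: b = 1.0 (beta = 2/3), corner magnitude 8.0, threshold Mw 5.0.
mw = np.array([5.5, 6.5, 7.5, 8.5])
print(tgr_survival(mw, mw_t=5.0, beta=2.0 / 3.0, mw_corner=8.0))
```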

  17. Investigating source processes of isotropic events

    NASA Astrophysics Data System (ADS)

    Chiang, Andrea

    This dissertation demonstrates the utility of the complete waveform regional moment tensor inversion for nuclear event discrimination. I explore the source processes and associated uncertainties for explosions and earthquakes under the effects of limited station coverage, compound seismic sources, assumptions in velocity models and the corresponding Green's functions, and the effects of shallow source depth and free-surface conditions. The motivation to develop better techniques to obtain reliable source mechanism and assess uncertainties is not limited to nuclear monitoring, but they also provide quantitative information about the characteristics of seismic hazards, local and regional tectonics and in-situ stress fields of the region . This dissertation begins with the analysis of three sparsely recorded events: the 14 September 1988 US-Soviet Joint Verification Experiment (JVE) nuclear test at the Semipalatinsk test site in Eastern Kazakhstan, and two nuclear explosions at the Chinese Lop Nor test site. We utilize a regional distance seismic waveform method fitting long-period, complete, three-component waveforms jointly with first-motion observations from regional stations and teleseismic arrays. The combination of long period waveforms and first motion observations provides unique discrimination of these sparsely recorded events in the context of the Hudson et al. (1989) source-type diagram. We examine the effects of the free surface on the moment tensor via synthetic testing, and apply the moment tensor based discrimination method to well-recorded chemical explosions. These shallow chemical explosions represent rather severe source-station geometry in terms of the vanishing traction issues. We show that the combined waveform and first motion method enables the unique discrimination of these events, even though the data include unmodeled single force components resulting from the collapse and blowout of the quarry face immediately following the initial explosion. In contrast, recovering the announced explosive yield using seismic moment estimates from moment tensor inversion remains challenging but we can begin to put error bounds on our moment estimates using the NSS technique. The estimation of seismic source parameters is dependent upon having a well-calibrated velocity model to compute the Green's functions for the inverse problem. Ideally, seismic velocity models are calibrated through broadband waveform modeling, however in regions of low seismicity velocity models derived from body or surface wave tomography may be employed. Whether a velocity model is 1D or 3D, or based on broadband seismic waveform modeling or the various tomographic techniques, the uncertainty in the velocity model can be the greatest source of error in moment tensor inversion. These errors have not been fully investigated for the nuclear discrimination problem. To study the effects of unmodeled structures on the moment tensor inversion, we set up a synthetic experiment where we produce synthetic seismograms for a 3D model (Moschetti et al., 2010) and invert these data using Green's functions computed with a 1D velocity mode (Song et al., 1996) to evaluate the recoverability of input solutions, paying particular attention to biases in the isotropic component. The synthetic experiment results indicate that the 1D model assumption is valid for moment tensor inversions at periods as short as 10 seconds for the 1D western U.S. model (Song et al., 1996). 
The correct earthquake mechanisms and source depth are recovered with statistically insignificant isotropic components as determined by the F-test. Shallow explosions are biased by the theoretical ISO-CLVD tradeoff but the tectonic release component remains low, and the tradeoff can be eliminated with constraints from P wave first motion. Path-calibration to the 1D model can reduce non-double-couple components in earthquakes, non-isotropic components in explosions and composite sources and improve the fit to the data. When we apply the 3D model to real data, at long periods (20-50 seconds), we see good agreement in the solutions between the 1D and 3D models and slight improvement in waveform fits when using the 3D velocity model Green's functions.

  18. Annual Rates on Seismogenic Italian Sources with Models of Long-Term Predictability for the Time-Dependent Seismic Hazard Assessment In Italy

    NASA Astrophysics Data System (ADS)

    Murru, Maura; Falcone, Giuseppe; Console, Rodolfo

    2016-04-01

    The present study is carried out in the framework of the Center for Seismic Hazard (CPS) of INGV, under the agreement signed in 2015 with the Department of Civil Protection for developing a new seismic hazard model of the country that can update the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we participate with the Long-Term Stress Transfer (LTST) model to provide the annual occurrence rate of seismic events over the entire Italian territory, from a minimum magnitude of Mw 4.5, considering bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology is based on the fusion of a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that considers the permanent stress change that a seismogenic source undergoes as a result of earthquakes occurring on surrounding sources. For each considered catalogue (historical, instrumental and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 years. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the magnitude of the characteristic event. This rate value is divided by the number of grid cells that fall on the horizontal projection of the source. If instead a cell falls outside any seismogenic source, we considered the average rate value obtained from the historical and instrumental catalogues, using the method of Frankel (1995). The annual occurrence rate was computed for each of the three considered distributions (Poisson, BPT, and BPT with inclusion of stress transfer).
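    The BPT renewal model used here is the inverse-Gaussian distribution of Matthews et al. (2002), with mean recurrence μ and aperiodicity α; the time-dependent ingredient is the conditional probability of rupture in the next ΔT years given the time elapsed since the last event. A small numerical sketch follows, with placeholder values of μ, α, elapsed time and forecast window.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density with mean mu and aperiodicity alpha."""
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
           np.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

def conditional_probability(mu, alpha, elapsed, dt, n=20000):
    """P(rupture in (elapsed, elapsed+dt] | no rupture up to `elapsed`), by numerical integration."""
    t = np.linspace(1e-3, elapsed + dt, n)
    pdf = bpt_pdf(t, mu, alpha)
    cdf = np.cumsum(pdf) * (t[1] - t[0])
    F = lambda x: np.interp(x, t, cdf)
    return (F(elapsed + dt) - F(elapsed)) / (1.0 - F(elapsed))

# Placeholder fault: mean recurrence 400 yr, aperiodicity 0.5, 300 yr elapsed, 50-yr window.
print(conditional_probability(mu=400.0, alpha=0.5, elapsed=300.0, dt=50.0))
```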

  19. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motions and are induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the sultanate of Oman—determining the uncertainties in the source mechanism and in the location of that event.
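    Because the moment tensor enters the forward problem linearly (the data are Green's-function kernels times the six independent tensor components), a Gaussian-noise, Gaussian-prior version of the Bayesian inversion has a closed-form posterior, which is a convenient way to see how waveform noise maps into moment tensor uncertainty. The sketch below uses a random synthetic design matrix in place of real Green's functions and is only a conceptual stand-in for the study's full treatment, which also accounts for location and velocity-model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear forward problem: d = G m + noise, with 6 moment tensor components.
n_data, sigma = 200, 0.05
G = rng.normal(size=(n_data, 6))                        # stand-in for Green's function kernels
m_true = np.array([1.0, -0.3, -0.7, 0.2, 0.1, -0.4])    # Mxx, Myy, Mzz, Mxy, Mxz, Myz (arbitrary units)
d = G @ m_true + sigma * rng.normal(size=n_data)

# Gaussian prior N(0, tau^2 I) and Gaussian noise N(0, sigma^2 I) give a Gaussian posterior.
tau = 10.0
post_cov = np.linalg.inv(G.T @ G / sigma**2 + np.eye(6) / tau**2)
post_mean = post_cov @ (G.T @ d) / sigma**2

iso = post_mean[:3].mean()                              # isotropic part = trace / 3
print("posterior mean:", np.round(post_mean, 3))
print("1-sigma uncertainties:", np.round(np.sqrt(np.diag(post_cov)), 4))
print("isotropic component:", round(iso, 3))
```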

  20. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.

  1. The energy release in earthquakes, and subduction zone seismicity and stress in slabs. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Vassiliou, M. S.

    1983-01-01

    Energy release in earthquakes is discussed. Dynamic energy from source time function, a simplified procedure for modeling deep focus events, static energy estimates, near source energy studies, and energy and magnitude are addressed. Subduction zone seismicity and stress in slabs are also discussed.

  2. Analysis the Source model of the 2009 Mw 7.6 Padang Earthquake in Sumatra Region using continuous GPS data

    NASA Astrophysics Data System (ADS)

    Amertha Sanjiwani, I. D. M.; En, C. K.; Anjasmara, I. M.

    2017-12-01

    A seismic gap on the plate interface along the Sunda subduction zone has been proposed between the rupture areas of the 2000, 2004, 2005 and 2007 great earthquakes. This seismic gap therefore plays an important role in the earthquake risk on the Sunda trench. The Mw 7.6 Padang earthquake, an intraslab event, occurred on September 30, 2009, about 250 km east of the Sunda trench, close to the seismic gap on the interface. To understand the interaction between the seismic gap and the Padang earthquake, data from twelve continuous GPS stations of SUGAR are used in this study to estimate the source model of this event. The daily GPS coordinates one month before and after the earthquake were calculated with the GAMIT software. The coseismic displacements were evaluated based on the analysis of coordinate time series in the Padang region. This geodetic network provides rather good spatial coverage for examining the seismic source along the Padang region in detail. The general pattern of coseismic horizontal displacements is motion toward the epicenter and the trench, and the coseismic vertical displacements indicate uplift. The largest coseismic displacement, derived from the MSAI station, is 35.0 mm for the horizontal component (toward S32.1°W) and 21.7 mm for the vertical component. The second largest, derived from the LNNG station, is 26.6 mm for the horizontal component (toward N68.6°W) and 3.4 mm for the vertical component. Next, we will use a uniform stress drop inversion to invert the coseismic displacement field and estimate the source model. The relationship between the seismic gap on the interface and the intraslab Padang earthquake will then be discussed. Keywords: seismic gap, Padang earthquake, coseismic displacement.

  3. Detailed Velocity and Density models of the Cascadia Subduction Zone from Prestack Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Fortin, W.; Holbrook, W. S.; Mallick, S.; Everson, E. D.; Tobin, H. J.; Keranen, K. M.

    2014-12-01

    Understanding the geologic composition of the Cascadia Subduction Zone (CSZ) is critically important in assessing seismic hazards in the Pacific Northwest. Despite being a potential earthquake and tsunami threat to millions of people, the CSZ remains poorly understood in key details of its structure and fault mechanisms. In particular, the position and character of the subduction interface remain elusive due to its relative aseismicity and low seismic reflectivity, making imaging difficult for both passive and active source methods. Modern active-source reflection seismic data acquired as part of the COAST project in 2012 provide an opportunity to study the transition from the Cascadia basin, across the deformation front, and into the accretionary prism. Coupled with advances in seismic inversion methods, these new data allow us to produce detailed velocity models of the CSZ and accurate pre-stack depth migrations for studying geologic structure. Although still computationally expensive, seismic inversions can now be performed on current computing clusters at resolutions that match that of the seismic image itself. Here we present pre-stack full waveform inversions of the central seismic line of the COAST survey offshore Washington state. The resulting velocity model is produced by inversion at every CMP location, 6.25 m laterally, with vertical resolution of 0.2 times the dominant seismic frequency. We report a good average correlation value above 0.8 across the entire seismic line, determined by comparing synthetic gathers to the real pre-stack gathers. These detailed velocity models, both Vp and Vs, along with the density model, are a necessary step toward a detailed porosity cross section to be used to determine the role of fluids in the CSZ. Additionally, the P-velocity model is used to produce a pre-stack depth migration image of the CSZ.

  4. Improved earthquake monitoring in the central and eastern United States in support of seismic assessments for critical facilities

    USGS Publications Warehouse

    Leith, William S.; Benz, Harley M.; Herrmann, Robert B.

    2011-01-01

    Evaluation of seismic monitoring capabilities in the central and eastern United States for critical facilities - including nuclear powerplants - focused on specific improvements to understand better the seismic hazards in the region. The report is not an assessment of seismic safety at nuclear plants. To accomplish the evaluation and to provide suggestions for improvements using funding from the American Recovery and Reinvestment Act of 2009, the U.S. Geological Survey examined addition of new strong-motion seismic stations in areas of seismic activity and addition of new seismic stations near nuclear power-plant locations, along with integration of data from the Transportable Array of some 400 mobile seismic stations. Some 38 and 68 stations, respectively, were suggested for addition in active seismic zones and near-power-plant locations. Expansion of databases for strong-motion and other earthquake source-characterization data also was evaluated. Recognizing pragmatic limitations of station deployment, augmentation of existing deployments provides improvements in source characterization by quantification of near-source attenuation in regions where larger earthquakes are expected. That augmentation also supports systematic data collection from existing networks. The report further utilizes the application of modeling procedures and processing algorithms, with the additional stations and the improved seismic databases, to leverage the capabilities of existing and expanded seismic arrays.

  5. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, in the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada Test Site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general, estimates of B and K are based on the initial P-wave pulse, which various numerical analyses show to be least affected by variations in near-source path effects. The corner-frequency parameter K is 20% lower at NTS (Pahute) than at other sites, implying larger effective source radii. The overshoot parameter B appears to be low at NTS (although variable) relative to other sites, which is probably due to variations in source conditions. For a low B, the near-field data require a higher value of ψ∞ to match the long-period MS and short-period mb observations. This flexibility in modeling proves useful in comparing released FSU yields against predictions based on mb and MS.
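    For context, a classical parameterization of the explosion Reduced Displacement Potential referred to here is the von Seggern and Blandford (1972) form, ψ(t) = ψ∞[1 − e^{−Kt}(1 + Kt − B(Kt)²)], in which ψ∞ sets the long-period level, K scales the corner frequency, and B controls the overshoot; the modified Haskell description used in the paper is in the same spirit but differs in its polynomial terms. A minimal sketch of the classical form, with placeholder parameter values, is:

```python
import numpy as np

def rdp_von_seggern_blandford(t, psi_inf, K, B):
    """Classical explosion reduced displacement potential (von Seggern & Blandford, 1972).
    The modified Haskell form used in the paper differs in its polynomial terms."""
    kt = K * t
    return psi_inf * (1.0 - np.exp(-kt) * (1.0 + kt - B * kt**2))

t = np.linspace(0.0, 2.0, 201)                                  # seconds
psi = rdp_von_seggern_blandford(t, psi_inf=1.0, K=8.0, B=2.0)   # placeholder values
print("steady-state level:", psi[-1].round(3), " peak (overshoot):", psi.max().round(3))
```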

  6. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring. These aspects include: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green’s functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake and depth, and the seismic moment (or, equivalently, the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green’s functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
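    The key CAP ingredient highlighted above, letting each synthetic window shift in time relative to the data to absorb 1D-model path delays before measuring fit, can be illustrated in a few lines: for each window, scan a small range of lags, keep the lag that minimizes the misfit, and accumulate the misfit over windows. The sketch below uses synthetic arrays only and is not the CAP software; the signals, lag range and noise level are invented.

```python
import numpy as np

def best_shift_misfit(data, synth, max_shift):
    """Return (best lag in samples, normalized L2 misfit), letting synth slide +/- max_shift samples."""
    best = (0, np.inf)
    for lag in range(-max_shift, max_shift + 1):
        shifted = np.roll(synth, lag)
        misfit = np.sum((data - shifted) ** 2) / np.sum(data ** 2)
        if misfit < best[1]:
            best = (lag, misfit)
    return best

# Synthetic demo: the "synthetic" is the data advanced by 12 samples plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
data = np.exp(-((t - 4.0) ** 2)) * np.sin(8 * t)
synth = np.roll(data, -12) + 0.02 * rng.normal(size=t.size)

print(best_shift_misfit(data, synth, max_shift=30))   # should recover a lag near +12
```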

  7. Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Kubo, H.

    2013-12-01

    Constructing source models of huge subduction earthquakes is a critically important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large-slip areas of heterogeneous slip models and of total strong motion generation area (SMGA) sizes with seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA to large-slip area with seismic moment. They concluded that this tendency is probably caused by the difference in the period ranges of the source modeling analyses. In this paper, we try to construct a methodology for building source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype of the source model for huge subduction earthquakes and validate the source model by strong ground motion modeling.

  8. Deep-towed high resolution seismic imaging II: Determination of P-wave velocity distribution

    NASA Astrophysics Data System (ADS)

    Marsset, B.; Ker, S.; Thomas, Y.; Colin, F.

    2018-02-01

    The acquisition of high resolution seismic data in deep water requires the development of deep-towed seismic sources and receivers able to deal with the high hydrostatic pressure environment. The low frequency piezoelectric transducer of the SYSIF (SYstème Sismique Fond) deep-towed seismic device complies with the former requirement, taking advantage of the coupling of a mechanical resonance (Janus driver) and a fluid resonance (Helmholtz cavity) to produce a large frequency bandwidth acoustic signal (220-1050 Hz). The ability to perform deep-towed multichannel seismic imaging with SYSIF was demonstrated in 2014; however, the ability to determine the P-wave velocity distribution had not yet been achieved. P-wave velocity analysis relies on the ratio between the source-receiver offset range and the depth of the seismic reflectors, so towing the seismic source and receivers closer to the sea bed provides a better geometry for P-wave velocity determination. However, technical issues related to the acoustic source directivity arise for this approach in the particular framework of piezoelectric sources. A signal processing sequence is therefore added to the initial processing flow. Data acquisition took place during the GHASS (Gas Hydrates, fluid Activities and Sediment deformations in the western Black Sea) cruise in the Romanian waters of the Black Sea. The results of the imaging processing are presented for two seismic data sets acquired over gas hydrates and gas-bearing sediments. The improvement in the final seismic resolution demonstrates the validity of the velocity model.

  9. Source-independent full waveform inversion of seismic data

    DOEpatents

    Lee, Ki Ha

    2006-02-14

    A set of seismic trace data is collected in an input data set that is first Fourier transformed in its entirety into the frequency domain. A normalized wavefield is obtained for each trace of the input data set in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the set of seismic trace data. The normalized wavefield is source independent, complex, and dimensionless. The normalized wavefield is shown to be uniquely defined as the normalized impulse response, provided that a certain condition is met for the source. This property allows construction of the inversion algorithm disclosed herein, without any source or source coupling information. The algorithm minimizes the error between data normalized wavefield and the model normalized wavefield. The methodology is applicable to any 3-D seismic problem, and damping may be easily included in the process.
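    The normalization step described in the patent abstract is straightforward to express: Fourier transform every trace, pick a reference trace, and divide each trace's spectrum by the reference spectrum (with a small water level for stability), yielding a dimensionless, source-independent wavefield. The minimal sketch below shows only that normalization, not the patented inversion itself; the traces, water level and array sizes are illustrative.

```python
import numpy as np

def normalized_wavefield(traces, ref_index=0, water_level=1e-3):
    """Frequency-domain normalization of each trace by a reference trace.
    traces: (n_traces, n_samples) array. Returns a complex (n_traces, n_freq) array."""
    spectra = np.fft.rfft(traces, axis=1)
    ref = spectra[ref_index]
    stab = water_level * np.max(np.abs(ref))          # water level to avoid division by ~0
    return spectra * np.conj(ref) / (np.abs(ref) ** 2 + stab**2)

# Demo: two traces sharing the same (unknown) source wavelet but with different delays.
rng = np.random.default_rng(2)
wavelet = rng.normal(size=64)
trace0 = np.concatenate([wavelet, np.zeros(192)])
trace1 = np.concatenate([np.zeros(40), wavelet, np.zeros(152)])
nw = normalized_wavefield(np.vstack([trace0, trace1]))
print(nw.shape)    # the common source spectrum cancels; only relative propagation effects remain
```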

  10. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters by the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault-rupture to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed time since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models, to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.

  11. Dominant seismic sources for the cities in South Sumatra

    NASA Astrophysics Data System (ADS)

    Sunardi, Bambang; Sakya, Andi Eka; Masturyono, Murjaya, Jaya; Rohadi, Supriyanto; Sulastri, Putra, Ade Surya

    2017-07-01

    The subduction zone along the west of Sumatra and the Sumatran fault zone are active seismic sources. Seismotectonically, South Sumatra could be affected by earthquakes triggered by these seismic sources. This paper discusses the contribution of each seismic source to the earthquake hazards for the cities of Palembang, Prabumulih, Banyuasin, OganIlir, Ogan Komering Ilir, South Oku, Musi Rawas and Empat Lawang. These hazards are presented in the form of seismic hazard curves. The study was conducted using Probabilistic Seismic Hazard Analysis (PSHA) for a 2% probability of exceedance in 50 years. The seismic sources used in the analysis included the megathrust zone M2 of Sumatra and South Sumatra, background seismic sources, and shallow crustal seismic sources consisting of the Ketaun, Musi, Manna and Kumering faults. The results of the study showed that for cities relatively far from the seismic sources, the subduction/megathrust seismic source at depths ≤ 50 km contributed greatly to the seismic hazard, while in the other areas deep background seismic sources at depths of more than 100 km dominate the seismic hazard.
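    As a side note on the hazard level used here, the conversion from a Poissonian probability of exceedance to an equivalent return period is a standard relation (a generic sketch, not code from this study): a 2% probability of exceedance in 50 years corresponds to a return period of roughly 2,475 years.

```python
import math

def return_period(prob_exceedance, exposure_time):
    """Return period implied by a Poisson exceedance probability over a
    given exposure time: P = 1 - exp(-t / T)  =>  T = -t / ln(1 - P)."""
    return -exposure_time / math.log(1.0 - prob_exceedance)

print(return_period(0.02, 50.0))   # ~2474.9 years
```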

  12. A study of regional waveform calibration in the eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Di Luccio, F.; Pino, N. A.; Thio, H. K.

    2003-06-01

    We modeled Pnl phases from several moderate-magnitude earthquakes in the eastern Mediterranean to test methods and develop path calibrations for determining source parameters. The study region, which extends from the eastern part of the Hellenic arc to the eastern Anatolian fault, is dominated by moderate earthquakes that can produce significant damage. Our results are useful for analyzing regional seismicity as well as seismic hazard, because very few broadband seismic stations are available in the selected area. For the whole region we have obtained a single velocity model characterized by a 30 km thick crust, low upper-mantle velocities and a very thin lid overlying a distinct low-velocity layer. Our preferred model proved quite reliable for determining focal mechanisms and seismic moments across the entire range of selected paths. The source depth is also well constrained, especially for moderate earthquakes.

  13. Seismic source parameters of the induced seismicity at The Geysers geothermal area, California, by a generalized inversion approach

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia

    2017-04-01

    The accurate determination of stress drop, seismic efficiency and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). For most of the events we find non-self-similar behavior, empirical source spectra that require an ωγ source model with γ > 2 to be well fitted, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome the friction or to create new fracture surfaces change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
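    For illustration, a generic fit of an attenuation- and site-corrected source spectrum with a generalized ω^γ spectral model is sketched below; this is not the authors' sensitivity-driven inversion, and all values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_source_spectrum(f, log_omega0, fc, gamma):
    """log10 of a generalized Brune-type spectrum: flat level omega0 below the
    corner frequency fc, high-frequency fall-off proportional to f**-gamma."""
    return log_omega0 - np.log10(1.0 + (f / fc) ** gamma)

# Synthetic "observed" source spectrum with gamma > 2 plus mild noise
f = np.logspace(-1, 2, 200)
true = 10 ** log_source_spectrum(f, log_omega0=17.0, fc=2.0, gamma=2.6)
observed = true * np.exp(0.05 * np.random.default_rng(1).standard_normal(f.size))

# Fit in log space for numerical stability; starting values are guesses
popt, _ = curve_fit(log_source_spectrum, f, np.log10(observed), p0=(16.0, 1.0, 2.0))
print("omega0 = %.3g, fc = %.2f Hz, gamma = %.2f" % (10 ** popt[0], popt[1], popt[2]))
```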

  14. Probabilistic Seismic Hazard Maps for Ecuador

    NASA Astrophysics Data System (ADS)

    Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.

    2017-12-01

    A probabilistic seismic hazard study is conducted for Ecuador, a country facing a high seismic hazard, both from megathrust subduction earthquakes and from shallow crustal moderate to large earthquakes. Building on the knowledge produced in recent years in historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016). A slightly different segmentation is proposed for the subduction interface with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The hazard maps obtained highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetic slip rates. Consequently, a fault model is derived, including faults with an earthquake recurrence model inferred from geological and/or geodetic slip rate estimates. The geodetic slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014). Assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties on the hazard are highest.

  15. Key science issues in the central and eastern United States for the next version of the USGS National Seismic Hazard Maps

    USGS Publications Warehouse

    Peterson, M.D.; Mueller, C.S.

    2011-01-01

    The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.

  16. New comprehensive standard seismic noise models and 3D seismic noise variation for Morocco territory, North Africa, obtained using seismic broadband stations

    NASA Astrophysics Data System (ADS)

    El Fellah, Younes; El-Aal, Abd El-Aziz Khairy Abd; Harnafi, Mimoun; Villaseñor, Antonio

    2017-05-01

    In the current work, we constructed new comprehensive standard seismic noise models and 3D temporal-spatial seismic noise level cubes for Morocco in north-west Africa, to be used for seismological and engineering purposes. Indeed, the original global standard seismic noise models published by Peterson (1993) and their subsequent updates by Astiz and Creager (1995), Ekström (2001) and Berger et al. (2003) had no contributing seismic stations deployed in North Africa. Consequently, this preliminary study was conducted to shed light on seismic noise levels specific to north-west Africa. For this purpose, 23 broadband seismic stations recently installed in different structural domains throughout Morocco are used to study the nature and characteristics of seismic noise and to create seismic noise models for Morocco. Continuous data recorded during 2009, 2010 and 2011 were processed and analysed to construct these new noise models and 3D noise levels from all stations. We compared the Peterson new high-noise model (NHNM) and low-noise model (NLNM) with the Moroccan high-noise model (MHNM) and low-noise model (MLNM). These new noise models are comparable to the United States Geological Survey (USGS) models in the short-period band; however, in the period ranges 1.2 s to 1000 s for MLNM and 10 s to 1000 s for MHNM they display significant variations. This variation is attributed to differences in the nature of the seismic noise sources that dominate Morocco in these period bands. The results of this study provide a new view of permanent seismic noise levels for this region and can be considered a significant contribution because they supplement the Peterson models and can also be used to site future permanent seismic stations in Morocco.

  17. Duration of Tsunami Generation Longer than Duration of Seismic Wave Generation in the 2011 Mw 9.0 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Korenaga, M.; Kawaji, K.; Akiyama, S.

    2013-12-01

    We compare and evaluate the nature of tsunami generation and seismic wave generation during the 2011 Tohoku-Oki earthquake (hereafter TOH11) in terms of two types of moment rate functions, inferred from finite source imaging of tsunami waveforms and of seismic waveforms. Since the 1970s, the nature of "tsunami earthquakes" has been discussed in many studies (e.g. Kanamori, 1972; Kanamori and Kikuchi, 1993; Kikuchi and Kanamori, 1995; Ide et al., 1993; Satake, 1994), mostly based on analysis of seismic waveform data, in terms of the "slow" nature of tsunami earthquakes (e.g., the 1992 Nicaragua earthquake). Although TOH11 is not necessarily understood as a tsunami earthquake, it is one of the historical earthquakes that simultaneously generated large seismic waves and a large tsunami. TOH11 was also observed both by the seismic observation network and by the tsunami observation network around the Japanese islands. Therefore, for the purpose of analyzing the nature of tsunami generation, we utilize tsunami waveform data as much as possible. In our previous studies of TOH11 (Fujihara et al., 2012a; Fujihara et al., 2012b), we inverted tsunami waveforms at GPS wave gauges of NOWPHAS to image the spatio-temporal slip distribution. The "temporal" nature of our tsunami source model is generally consistent with other tsunami source models (e.g., Satake et al., 2013). For the seismic waveform inversion based on a 1-D structure, we inverted broadband seismograms at GSN stations using the teleseismic body-wave inversion scheme (Kikuchi and Kanamori, 2003). For the seismic waveform inversion considering the inhomogeneous internal structure, we inverted strong-motion seismograms at K-NET and KiK-net stations based on 3-D Green's functions (Fujihara et al., 2013a; Fujihara et al., 2013b). The gross "temporal" nature of our seismic source models is generally consistent with other seismic source models (e.g., Yoshida et al., 2011; Ide et al., 2011; Yagi and Fukahata, 2011; Suzuki et al., 2011). The comparison of the two types of moment rate functions suggests that there was a time period common to both seismic wave generation and tsunami generation, followed by a time period unique to tsunami generation. At this point, we think that comparison of the absolute values of the moment rates between the tsunami and seismic waveform inversions is not very meaningful, because of the general ambiguity of the rigidity values of each subfault in the fault region (assuming the rigidity value of 30 GPa of Yoshida et al. (2011)). Considering this, the normalized moment rate functions were also evaluated, and normalization does not change the general feature of the two moment rate functions in terms of their durations. Furthermore, the results suggest that the tsunami generation process apparently took more time than the seismic wave generation process did. Tsunami can be generated even by "extra" motions resulting from various proposed anomalous mechanisms. These extra motions may account for the larger-scale tsunami generation than expected from the magnitude level inferred from seismic ground motion, and for the longer duration of the tsunami generation process.
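    A minimal sketch of the normalization step mentioned above (array names, sampling and the duration metric are assumptions, not the authors' code): each moment rate function is scaled to unit integral, which removes the dependence on absolute moment and hence on the assumed rigidity, and a common duration measure is then extracted for comparison.

```python
import numpy as np

def normalize(moment_rate, dt):
    """Scale a moment rate function so it integrates to one, removing the
    dependence on absolute moment (and hence on the assumed rigidity)."""
    moment_rate = np.asarray(moment_rate, dtype=float)
    return moment_rate / (moment_rate.sum() * dt)

def duration(moment_rate, dt, fraction=0.95):
    """Time needed to release a given fraction of the total moment."""
    cumulative = np.cumsum(moment_rate) / np.sum(moment_rate)
    return np.argmax(cumulative >= fraction) * dt

# Hypothetical moment rate functions sampled at 1 s intervals
t = np.arange(0.0, 300.0, 1.0)
seismic_mrf = np.exp(-((t - 70.0) / 30.0) ** 2)   # shorter source process
tsunami_mrf = np.exp(-((t - 90.0) / 55.0) ** 2)   # longer source process
print(duration(normalize(seismic_mrf, 1.0), 1.0),
      duration(normalize(tsunami_mrf, 1.0), 1.0))
```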

  18. How much does geometry of seismic sources matter in tsunami modeling? A sensitivity analysis for the Calabrian subduction interface

    NASA Astrophysics Data System (ADS)

    Tonini, R.; Maesano, F. E.; Tiberti, M. M.; Romano, F.; Scala, A.; Lorito, S.; Volpe, M.; Basili, R.

    2017-12-01

    The geometry of seismogenic sources could be one of the most important factors controlling the generation and propagation of earthquake-generated tsunamis and their effects on the coasts. Since the majority of potentially tsunamigenic earthquakes occur offshore, the corresponding faults are generally poorly constrained and, consequently, their geometry is often oversimplified as a planar fault. The rupture area of megathrust earthquakes in subduction zones, where most of the greatest tsunamis have occurred, extends for tens to hundreds of kilometers both down dip and along strike, and generally deviates from a planar geometry. Therefore, the larger the earthquake, the weaker the planar fault assumption becomes. In this work, we present a sensitivity analysis aimed at exploring the effects on modeled tsunamis of seismic sources with different degrees of geometric complexity. We focus on the Calabrian subduction zone, located in the Mediterranean Sea, which is characterized by the convergence between the African and European plates at rates of up to 5 mm/yr. This subduction zone is considered to have generated some past large earthquakes and tsunamis, although it shows significant seismic activity only in-slab below 40 km depth and no relevant seismicity in the shallower portion of the interface. Our analysis is performed by defining and modeling an exhaustive set of tsunami scenarios located in the Calabrian subduction zone and using different models of the subduction interface with increasing geometrical complexity, from a planar surface to a highly detailed 3D surface. The latter was obtained from the interpretation of a dense network of seismic reflection profiles coupled with the analysis of the seismicity distribution. The most relevant effects due to the inclusion of 3D complexities in the seismic source geometry are finally highlighted in terms of the resulting tsunami impact.

  19. Regularized wave equation migration for imaging and data reconstruction

    NASA Astrophysics Data System (ADS)

    Kaplan, Sam T.

    The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.

  20. Seismic Imaging of the Source Physics Experiment Site with the Large-N Seismic Array

    NASA Astrophysics Data System (ADS)

    Chen, T.; Snelson, C. M.; Mellors, R. J.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. The goal of SPE is to understand seismic wave generation and propagation from these explosions. To achieve this goal, we need an accurate geophysical model of the SPE site. A Large-N seismic array that was deployed at the SPE site during one of the chemical explosions (SPE-5) helps us construct a high-resolution local geophysical model. The Large-N seismic array consists of 996 geophones and covers an area of approximately 2 × 2.5 km. The array is located at the northern end of the Yucca Flat basin, at the transition from Climax Stock (granite) to Yucca Flat (alluvium). In addition to the SPE-5 explosion, the Large-N array also recorded 53 weight drops. Using the Large-N seismic array recordings, we perform body-wave and surface-wave velocity analysis and obtain 3D seismic imaging of the SPE site for the top approximately 1 km of the crust. The imaging results show clear variations of geophysical parameters with local geological structures, including a heterogeneous weathering layer and various rock types. The results of this work are being incorporated in the larger 3D modeling effort of the SPE program to validate the predictive models developed for the site.

  1. Relocation of Groningen seismicity using refracted waves

    NASA Astrophysics Data System (ADS)

    Ruigrok, E.; Trampert, J.; Paulssen, H.; Dost, B.

    2015-12-01

    The Groningen gas field is a giant natural gas accumulation in the northeast of the Netherlands. The gas is in a reservoir at a depth of about 3 km. The naturally fractured, gas-filled sandstone extends roughly 45 by 25 km laterally and 140 m vertically. Decades of production have led to significant compaction of the sandstone. The (differential) compaction is thought to have reactivated existing faults and to be the main driver of the induced seismicity. Precise earthquake location is difficult due to a complicated subsurface, which is the likely reason the current hypocentre estimates do not clearly correlate with the well-known fault network. The seismic velocity model down to reservoir depth is quite well known from extensive seismic surveys and borehole data. Most earthquake detections to date, however, were made with a sparse pre-2015 seismic network. For shallow seismicity (<5 km depth), horizontal source-receiver distances tend to be much larger than vertical distances. Consequently, the preferred source-receiver travel paths are refractions over high-velocity layers below the reservoir. However, the seismic velocities of the layers below the reservoir are poorly known. We estimated an effective velocity model of the main refracting layer below the reservoir and use it for relocating past seismicity. We took advantage of vertical-borehole recordings to estimate precise P-wave (refraction) onset times and used a tomographic approach to find the laterally varying velocity field of the refracting layer. This refracting layer is then added to the known velocity model, and the combined model is used to relocate the past seismicity. From the resulting relocations we assess which of the faults are being reactivated.

  2. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half of a modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth of an MMI unit (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests that the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.

  3. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique together with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ωγ source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes and that the proportion of high-frequency energy radiation and the amount of energy required to overcome the friction or to create new fracture surfaces change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters that in one case suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
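    As a complementary illustration of the quantities discussed here (standard Brune and Savage-Wood relations, not the authors' algorithm; all numbers are hypothetical), stress drop and radiation efficiency can be estimated from a fitted seismic moment, corner frequency and radiated energy:

```python
import math

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
    """Brune stress drop: delta_sigma = 7/16 * M0 / r**3, with source radius
    r = 0.372 * beta / fc for a circular S-wave source."""
    radius = 0.372 * beta_ms / fc_hz           # source radius (m)
    return (7.0 / 16.0) * m0_nm / radius**3    # Pa

def radiation_efficiency(energy_j, m0_nm, stress_drop_pa, mu_pa=3.0e10):
    """Savage-Wood efficiency: eta_SW = 2 * mu * E_R / (delta_sigma * M0)."""
    return 2.0 * mu_pa * energy_j / (stress_drop_pa * m0_nm)

# Hypothetical Mw 3.0 event: M0 ~ 4e13 N*m, fc = 4 Hz, radiated energy 1e8 J
m0 = 10 ** (1.5 * 3.0 + 9.1)
dsig = brune_stress_drop(m0, fc_hz=4.0)
print("stress drop (MPa):", dsig / 1e6)
print("radiation efficiency:", radiation_efficiency(1e8, m0, dsig))
```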

  4. Change-point detection of induced and natural seismicity

    NASA Astrophysics Data System (ADS)

    Fiedler, B.; Holschneider, M.; Zoeller, G.; Hainzl, S.

    2016-12-01

    Earthquake rates are influenced by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic sources. While the first two sources can be modeled well because the source is known, transient aseismic processes are more difficult to detect. However, detecting the associated changes in earthquake activity is of great interest, because it might help to identify natural aseismic deformation patterns (such as slow slip events) and the occurrence of induced seismicity related to human activities. We develop a Bayesian approach to detect change-points in seismicity data that are modeled by Poisson processes. By means of a likelihood-ratio test, we test the significance of the change in intensity. The model is also extended to spatiotemporal data to detect the area of the transient changes. The method is first tested on synthetic data and then applied to observational data from the central US and the Bardarbunga volcano in Iceland.
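    A minimal sketch of the likelihood-ratio idea for a single change-point in a Poisson rate is shown below; the event times are synthetic and the maximum-likelihood rates stand in for the full Bayesian treatment used by the authors.

```python
import numpy as np

def poisson_loglik(n_events, duration, rate):
    """Log-likelihood of n_events in 'duration' under a homogeneous Poisson rate."""
    return n_events * np.log(rate) - rate * duration

def change_point_llr(event_times, total_time, tau):
    """Log-likelihood ratio of a two-rate model (change at tau) vs. a single rate."""
    event_times = np.asarray(event_times)
    n1 = int(np.sum(event_times <= tau))
    n2 = event_times.size - n1
    n = event_times.size
    ll_one = poisson_loglik(n, total_time, n / total_time)
    ll_two = (poisson_loglik(n1, tau, max(n1, 1) / tau) +
              poisson_loglik(n2, total_time - tau, max(n2, 1) / (total_time - tau)))
    return ll_two - ll_one

# Hypothetical catalog: the background rate doubles after t = 100 days
rng = np.random.default_rng(0)
times = np.sort(np.concatenate([rng.uniform(0, 100, 20), rng.uniform(100, 200, 40)]))
taus = np.linspace(10, 190, 181)
llrs = [change_point_llr(times, 200.0, tau) for tau in taus]
print("best change point near:", taus[int(np.argmax(llrs))])
```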

  5. A Fusion Model of Seismic and Hydro-Acoustic Propagation for Treaty Monitoring

    NASA Astrophysics Data System (ADS)

    Arora, Nimar; Prior, Mark

    2014-05-01

    We present an extension to NET-VISA (Network Processing Vertically Integrated Seismic Analysis), a probabilistic generative model of the propagation of seismic waves and their detection on a global scale, to incorporate hydro-acoustic data from the IMS (International Monitoring System) network. The new model includes the coupling of seismic waves into the ocean's SOFAR channel, as well as the propagation of hydro-acoustic waves from underwater explosions. The generative model is described in terms of multiple possible hypotheses -- seismic-to-hydro-acoustic, under-water explosion, other noise sources such as whales singing or icebergs breaking up -- that could lead to signal detections. We decompose each hypothesis into conditional probability distributions that are carefully analyzed and calibrated. These distributions include ones for detection probabilities, blockage in the SOFAR channel (including diffraction, refraction, and reflection around obstacles), energy attenuation, and other features of the resulting waveforms. We present a study of the various features that are extracted from the hydro-acoustic waveforms, and their correlations with each other as well as with the source of the energy. Additionally, an inference algorithm is presented that concurrently infers the seismic and under-water events, associates all arrivals (aka triggers), from both seismic and hydro-acoustic stations, with the appropriate event, and labels the path taken by the wave. Finally, our results demonstrate that this fusion of seismic and hydro-acoustic data leads to very good performance. A majority of the under-water events that IDC (International Data Center) analysts built in 2010 are correctly located, and the arrivals that correspond to seismic-to-hydro-acoustic coupling, the T phases, are mostly correctly identified. There is no loss in the accuracy of seismic events; in fact, there is a slight overall improvement.

  6. Seismological investigation of earthquakes in the New Madrid Seismic Zone. Final report, September 1986--December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrmann, R.B.; Nguyen, B.

    Earthquake activity in the New Madrid Seismic Zone has been monitored by regional seismic networks since 1975. During this time period, over 3,700 earthquakes have been located within the region bounded by latitudes 35°-39°N and longitudes 87°-92°W. Most of these earthquakes occur within a 1.5° x 2° zone centered on the Missouri Bootheel. Source parameters of larger earthquakes in the zone and in eastern North America are determined using surface-wave spectral amplitudes and broadband waveforms for the purpose of determining the focal mechanism, source depth and seismic moment. Waveform modeling of broadband data is shown to be a powerful tool in defining these source parameters when used in combination with regional seismic network data, and, in addition, in verifying the correctness of previously published focal mechanism solutions.

  7. Application of Seismic Array Processing to Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas, since the tsunami waves arrive before the data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provide faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of the rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input to the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival times and wave amplitudes reasonably fit the observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, the Pacific Northwest and Alaska, where dense seismic networks with real-time data telemetry and open data accessibility, such as the Japanese Hi-net (>800 instruments) and the EarthScope USArray Transportable Array (~400 instruments), are established.
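    A minimal sketch of the empirical scaling step described above (the rigidity, moment-magnitude relation and ellipse dimensions are illustrative assumptions, not values from the study): given a back-projected rupture ellipse and a seismic moment, the average slip of a simple slip model follows from M0 = mu * A * D.

```python
import math

def average_slip(moment_nm, semi_major_km, semi_minor_km, rigidity_pa=30e9):
    """Average slip (m) from M0 = mu * A * D over an elliptical rupture area."""
    area_m2 = math.pi * (semi_major_km * 1e3) * (semi_minor_km * 1e3)
    return moment_nm / (rigidity_pa * area_m2)

# Hypothetical Tohoku-like numbers: Mw 9.0 -> M0 ~ 4e22 N*m,
# rupture ellipse ~500 km x 200 km (semi-axes 250 km and 100 km)
m0 = 10 ** (1.5 * 9.0 + 9.1)
print(average_slip(m0, semi_major_km=250.0, semi_minor_km=100.0))  # ~17 m
```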

  8. Microseismic monitoring of soft-rock landslide: contribution of a 3D velocity model for the location of seismic sources.

    NASA Astrophysics Data System (ADS)

    Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter

    2015-04-01

    Characterizing the micro-seismic activity of landslides is important for a better understanding of the physical processes controlling landslide behaviour. However, locating the seismic sources on landslides is a challenging task, mostly because of (a) the recording system geometry, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether the integration of a 3D velocity model in probabilistic seismic source location codes improves the quality of the determination, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas, each with four vertical seismometers, located on the east and west sides of the seismically active part of the landslide. A refraction seismic campaign was conducted in August 2014 and a 3D P-wave model was estimated using a Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to manual picking. Several location methods were finally tested. We compared a non-linear probabilistic method coupled with the 3D P-wave model and a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the original underlying topography. The velocity ranges from 500 m.s-1 at the surface to 3000 m.s-1 in the bedrock. For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were made 50 centimeters below the surface, and hence the vertical error could not be determined with the seismic campaign. We further discriminate the rockfalls from the slidequakes occurring on the landslide using the depth computed with the 3D velocity model. This could be an additional criterion to automatically classify the events.

  9. Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar

    2014-05-01

    We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in a 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based, second-order finite-difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best combines the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
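    A schematic sketch of this kind of slip parameterization is given below: a Gaussian centred on the nucleation point, tapered to zero at the border of an elliptical patch. The skewness and the full 13-parameter set are omitted, and all values are hypothetical; this is not the authors' implementation.

```python
import numpy as np

def elliptical_slip(nx, nz, a, b, x0, z0, peak_slip, sigma=0.4):
    """Smooth slip over an elliptical patch with semi-axes a and b (m):
    a Gaussian centred on the nucleation point (x0, z0), tapered so that
    slip goes to zero at the patch border (normalized radius r = 1)."""
    x = np.linspace(-a, a, nx)
    z = np.linspace(-b, b, nz)
    xx, zz = np.meshgrid(x, z)
    r = np.sqrt((xx / a) ** 2 + (zz / b) ** 2)              # elliptical radius
    gauss = np.exp(-((xx - x0) ** 2 + (zz - z0) ** 2) / (2.0 * sigma**2 * a * b))
    taper = np.maximum(1.0 - r**2, 0.0)                     # zero outside the patch
    return peak_slip * gauss * taper

# Hypothetical 10 km x 6 km (semi-axes) patch, 1 m peak slip, central nucleation
slip = elliptical_slip(nx=101, nz=61, a=10e3, b=6e3, x0=0.0, z0=0.0, peak_slip=1.0)
print(slip.shape, round(float(slip.max()), 3))
```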

  10. Source Inversion of Seismic Events Associated with the Sinkhole at Napoleonville Salt Dome, Louisiana using a 3D Velocity Model

    NASA Astrophysics Data System (ADS)

    Nayak, Avinash; Dreger, Douglas S.

    2018-05-01

    The formation of a large sinkhole at the Napoleonville salt dome (NSD), Assumption Parish, Louisiana, caused by the collapse of a brine cavern, was accompanied by an intense and complex sequence of seismic events. We implement a grid-search approach to compute centroid locations and point-source moment tensor (MT) solutions of these seismic events using ˜0.1-0.3 Hz displacement waveforms and synthetic Green's functions computed using a 3D velocity model of the western edge of the NSD. The 3D model incorporates the currently known approximate geometry of the salt dome and the overlying anhydrite-gypsum cap rock, and features a large velocity contrast between the high velocity salt dome and low velocity sediments overlying and surrounding it. For each possible location on the source grid, Green's functions (GFs) to each station were computed using source-receiver reciprocity and the finite-difference seismic wave propagation software SW4. We also establish an empirical method to rigorously assess uncertainties in the centroid location, MW and source type of these events under evolving network geometry, using the results of synthetic tests with hypothetical events and real seismic noise. We apply the methods to the entire duration of data (˜6 months) recorded by the temporary US Geological Survey network. During an energetic phase of the sequence from 24-31 July 2012 when 4 stations were operational, the events with the best waveform fits are primarily located at the western edge of the salt dome at most probable depths of ˜0.3-0.85 km, close to the horizontal positions of the cavern and the future sinkhole. The data are fit nearly equally well by opening crack MTs in the high velocity salt medium or by isotropic volume-increase MTs in the low velocity sediment layers. We find that data recorded by 6 stations during 1-2 August 2012, right before the appearance of the sinkhole, indicate that some events are likely located in the lower velocity media just outside the salt dome at slightly shallower depth ˜0.35-0.65 km, with preferred isotropic volume-increase MT solutions. We find that GFs computed using the 3D velocity model generally result in better fits to the data than GFs computed using 1D velocity models, especially for the smaller amplitude tangential and vertical components, and result in better resolution of event locations. The dominant seismicity during 24-30 July 2012 is characterized by steady occurrence of seismic events with similar locations and MT solutions at a near-characteristic inter-event time. The steady activity is sometimes interrupted by tremor-like sequences of multiple events in rapid succession, followed by quiet periods of little or no seismic activity, in turn followed by the resumption of seismicity with a reduced seismic moment-release rate. The dominant volume-increase MT solutions and the steady features of the seismicity indicate a crack-valve-type source mechanism possibly driven by pressurized natural gas.

  11. Back-Projection Imaging of extended, diffuse seismic sources in volcanic and hydrothermal systems

    NASA Astrophysics Data System (ADS)

    Kelly, C. L.; Lawrence, J. F.; Beroza, G. C.

    2017-12-01

    Volcanic and hydrothermal systems exhibit a wide range of seismicity that is directly linked to fluid and volatile activity in the subsurface and that can be indicative of imminent hazardous activity. Seismograms recorded near volcanic and hydrothermal systems typically contain "noisy" records, but in fact, these complex signals are generated by many overlapping low-magnitude displacements and pressure changes at depth. Unfortunately, excluding times of high-magnitude eruptive activity that typically occur infrequently relative to the length of a system's entire eruption cycle, these signals often have very low signal-to-noise ratios and are difficult to identify and study using established seismic analysis techniques (i.e. phase-picking, template matching). Arrays of short-period and broadband seismic sensors are proven tools for monitoring short- and long-term changes in volcanic and hydrothermal systems. Time-reversal techniques (i.e. back-projection) that are improved by additional seismic observations have been successfully applied to locating volcano-seismic sources recorded by dense sensor arrays. We present results from a new computationally efficient back-projection method that allows us to image the evolution of extended, diffuse sources of volcanic and hydrothermal seismicity. We correlate short time-window seismograms from receiver-pairs to find coherent signals and propagate them back in time to potential source locations in a 3D subsurface model. The strength of coherent seismic signal associated with any potential source-receiver-receiver geometry is equal to the correlation of the short time-windows of seismic records at appropriate time lags as determined by the velocity structure and ray paths. We stack (sum) all short time-window correlations from all receiver-pairs to determine the cumulative coherence of signals at each potential source location. Through stacking, coherent signals from extended and/or repeating sources of short-period energy radiation interfere constructively while background noise signals interfere destructively, such that the most likely source locations of the observed seismicity are illuminated. We compile results to analyze changes in the distribution and prevalence of these sources throughout a system's entire eruptive cycle.
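    A minimal sketch of the correlation-and-stack step described above, assuming a uniform velocity for the predicted differential travel times (the actual method uses a 3D subsurface model, and all names and values here are hypothetical):

```python
import numpy as np

def backproject_window(windows, station_xyz, grid_xyz, velocity, dt):
    """Stack receiver-pair correlations at the lags predicted by each trial source.

    windows     : (n_sta, n_samp) short time-window seismograms
    station_xyz : (n_sta, 3) receiver coordinates (m)
    grid_xyz    : (n_grid, 3) candidate source locations (m)
    velocity    : assumed uniform velocity (m/s); a 3D model would replace this
    dt          : sample interval (s)
    """
    n_sta, n_samp = windows.shape
    lags = np.arange(-n_samp + 1, n_samp) * dt
    # travel times from every candidate grid node to every station
    travel = np.linalg.norm(grid_xyz[:, None, :] - station_xyz[None, :, :], axis=2) / velocity
    coherence = np.zeros(grid_xyz.shape[0])
    for i in range(n_sta):
        for j in range(i + 1, n_sta):
            xcorr = np.correlate(windows[i], windows[j], mode="full")
            # differential travel time predicted by each candidate source
            # (lag sign convention kept simple for this sketch)
            dt_pred = travel[:, i] - travel[:, j]
            coherence += np.interp(dt_pred, lags, xcorr)
    return coherence  # the most likely source locations have the largest stack

# Hypothetical example: 5 stations, 200-sample windows, 50 candidate grid nodes
rng = np.random.default_rng(1)
windows = rng.standard_normal((5, 200))
stations = rng.uniform(0.0, 2000.0, (5, 3))
grid = rng.uniform(0.0, 2000.0, (50, 3))
print(backproject_window(windows, stations, grid, velocity=2000.0, dt=0.01).shape)
```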

  12. Development of 3-axis precise positioning seismic physical modeling system in the simulation of marine seismic exploration

    NASA Astrophysics Data System (ADS)

    Kim, D.; Shin, S.; Ha, J.; Lee, D.; Lim, Y.; Chung, W.

    2017-12-01

    Seismic physical modeling is a laboratory-scale experiment that deals with the actual physical phenomena that may occur in the field. In seismic physical modeling, field conditions are downscaled and used. For this reason, even a small error may lead to a big error in an actual field. Accordingly, the positions of the source and the receiver must be precisely controlled in scale modeling. In this study, we have developed a seismic physical modeling system capable of precisely controlling the 3-axis position. For automatic and precise position control of an ultrasonic transducer (source and receiver) in the directions of the three axes (x, y, and z), a motor was mounted on each of the three axes. The motors can automatically and precisely control the positions with a positional precision of 2'' for the x and y axes and 0.05 mm for the z axis. As the system can automatically and precisely control the positions in the directions of the three axes, it has the advantage that simulations can be carried out using the latest exploration techniques, such as OBS and Broadband Seismic. For the signal generation section, a waveform generator that can produce a maximum of two sources was used, and for the data acquisition section, which receives and stores reflected signals, an A/D converter that can receive a maximum of four signals was used. As multiple sources and receivers could be used at the same time, the system was set up in such a way that diverse exploration methods, such as single-channel, multichannel, and 3-D exploration, could be realized. A computer control program based on LabVIEW was created, so that it could control the position of the transducer, determine the data acquisition parameters, and check the exploration data and progress in real time. A marine environment was simulated using a water tank 1 m wide, 1 m long, and 0.9 m high. To evaluate the performance and applicability of the seismic physical modeling system developed in this study, single-channel and multichannel explorations were carried out in the marine environment, and the accuracy of the modeling system was verified by comparing the acquired exploration data with numerical modeling data.

  13. The source mechanisms of low frequency events in volcanoes - a comparison of synthetic and real seismic data on Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J. W.

    2012-04-01

    Low-frequency seismic signals are one class of volcano-seismic events that have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements. Amongst others, Neuberg et al. (2006) proposed a conceptual model for the trigger of low-frequency events at Montserrat involving the brittle failure of magma in the glass transition in response to high shear stresses during the upward movement of magma in the volcanic edifice. For this study, synthetic seismograms were generated following the concept proposed by Neuberg et al. (2006), using an extended source modelled as an octagonal arrangement of double couples approximating a circular ring fault. For comparison, synthetic seismograms were also generated using single forces only. For both scenarios, synthetic seismograms were generated using a seismic station distribution as encountered on Soufriere Hills Volcano, Montserrat. To gain a better quantitative understanding of the driving forces of low-frequency events, inversions for the physical source mechanisms have become increasingly common. We therefore perform moment tensor inversions (Dreger, 2003) using the synthetic data as well as a chosen set of seismograms recorded on Soufriere Hills Volcano. The inversions are carried out under the (incorrect) assumption of an underlying point source rather than an extended source as the trigger mechanism of the low-frequency seismic events. We will discuss differences between the inversion results, and how to interpret the moment tensor components (double couple, isotropic, or CLVD), which are based on a point source, in terms of an extended source.

  14. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the dynamical source properties of small and large earthquakes obey self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.

  15. Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics

    NASA Astrophysics Data System (ADS)

    Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.

    2018-04-01

    We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves < 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between the radiated power of tremor and the eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not, demonstrating that volcanic eruption tremor can exhibit other scalings even during the same eruption.

  16. Volcano seismology

    USGS Publications Warehouse

    Chouet, B.

    2003-01-01

    A fundamental goal of volcano seismology is to understand active magmatic systems, to characterize the configuration of such systems, and to determine the extent and evolution of source regions of magmatic energy. Such understanding is critical to our assessment of eruptive behavior and its hazardous impacts. With the emergence of portable broadband seismic instrumentation, availability of digital networks with wide dynamic range, and development of new powerful analysis techniques, rapid progress is being made toward a synthesis of high-quality seismic data to develop a coherent model of eruption mechanics. Examples of recent advances are: (1) high-resolution tomography to image subsurface volcanic structures at scales of a few hundred meters; (2) use of small-aperture seismic antennas to map the spatio-temporal properties of long-period (LP) seismicity; (3) moment tensor inversions of very-long-period (VLP) data to derive the source geometry and mass-transport budget of magmatic fluids; (4) spectral analyses of LP events to determine the acoustic properties of magmatic and associated hydrothermal fluids; and (5) experimental modeling of the source dynamics of volcanic tremor. These promising advances provide new insights into the mechanical properties of volcanic fluids and subvolcanic mass-transport dynamics. As new seismic methods refine our understanding of seismic sources, and geochemical methods better constrain mass balance and magma behavior, we face new challenges in elucidating the physico-chemical processes that cause volcanic unrest and its seismic and gas-discharge manifestations. Much work remains to be done toward a synthesis of seismological, geochemical, and petrological observations into an integrated model of volcanic behavior. Future important goals must include: (1) interpreting the key types of magma movement, degassing and boiling events that produce characteristic seismic phenomena; (2) characterizing multiphase fluids in subvolcanic regimes and determining their physical and chemical properties; and (3) quantitatively understanding multiphase fluid flow behavior under dynamic volcanic conditions. To realize these goals, not only must we learn how to translate seismic observations into quantitative information about fluid dynamics, but we also must determine the underlying physics that governs vesiculation, fragmentation, and the collapse of bubble-rich suspensions to form separate melt and vapor. Refined understanding of such processes, which is essential for quantitative short-term eruption forecasts, will require multidisciplinary research involving detailed field measurements, laboratory experiments, and numerical modeling.

  17. Coupled Hydrodynamic and Wave Propagation Modeling for the Source Physics Experiment: Study of Rg Wave Sources for SPE and DAG series.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.

    2017-12-01

    This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), which is a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed, and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve existing seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) by providing first-principles calculations enabled by the coupled-codes capability. The hydrodynamic codes Abaqus, CASH and HOSS model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated with past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code SPECFEM3D (e.g. Komatitsch, 1998; 2002) and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes happening in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.

  18. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be implemented naturally within the Bayesian inversions. Reliable estimation of model parameters and their uncertainties is therefore possible without arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. The combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties for each step of the process, allows more quantitative monitoring and discrimination of seismic events.
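
    To illustrate the hierarchical idea, in which the data noise level is treated as an unknown to be sampled alongside the model, the following is a minimal Metropolis-Hastings sketch for a toy linear problem. It is not the trans-dimensional sampler used in the study; the forward operator, priors, and step sizes are illustrative assumptions.

```python
import numpy as np

# Toy hierarchical Bayesian inversion: sample model parameters m and the
# noise standard deviation sigma jointly with Metropolis-Hastings.
# This is a didactic sketch, not the trans-dimensional sampler of the study.

rng = np.random.default_rng(0)

# Synthetic "observed" data from a simple linear forward model d = G m
G = rng.normal(size=(50, 3))
m_true = np.array([1.0, -2.0, 0.5])
sigma_true = 0.3
d_obs = G @ m_true + sigma_true * rng.normal(size=50)

def log_posterior(m, log_sigma):
    sigma = np.exp(log_sigma)
    resid = d_obs - G @ m
    # Gaussian likelihood with unknown sigma, plus weak priors on m and log sigma
    log_like = -0.5 * np.sum(resid**2) / sigma**2 - d_obs.size * np.log(sigma)
    log_prior = -0.5 * np.sum(m**2) / 10.0**2 - 0.5 * log_sigma**2 / 2.0**2
    return log_like + log_prior

m, log_sigma = np.zeros(3), 0.0
current = log_posterior(m, log_sigma)
samples = []
for _ in range(20000):
    m_prop = m + 0.05 * rng.normal(size=3)
    ls_prop = log_sigma + 0.05 * rng.normal()
    proposed = log_posterior(m_prop, ls_prop)
    if np.log(rng.random()) < proposed - current:   # accept/reject step
        m, log_sigma, current = m_prop, ls_prop, proposed
    samples.append(np.r_[m, np.exp(log_sigma)])

samples = np.array(samples[5000:])                  # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))
```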

  19. Passive monitoring for near surface void detection using traffic as a seismic source

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Kuzma, H. A.; Rector, J.; Nazari, S.

    2009-12-01

    In this poster we present preliminary results from several field experiments in which we study seismic detection of voids using a passive array of surface geophones. The source of seismic excitation is vehicle traffic on nearby roads, which we model as a continuous line source of seismic energy. Our passive seismic technique is based on cross-correlation of surface wave fields and study of the resulting power spectra, looking for "shadows" caused by the scattering effect of a void. High-frequency noise masks this effect in the time domain, so it is difficult to see on conventional traces. Our technique does not rely on phase distortions caused by small voids because they are generally too small to measure. Unlike traditional impulsive seismic sources, which generate highly coherent broadband signals, well suited for resolving phase but too weak for resolving amplitude, vehicle traffic affords a high-power signal in a frequency range that is optimal for finding shallow structures. Our technique results in clear detections of an abandoned railroad tunnel and a septic tank. The ultimate goal of this project is to develop a technology for the simultaneous imaging of shallow underground structures and traffic monitoring near these structures.
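
    A minimal sketch of the basic processing step described above, cross-correlating traffic-noise recordings and comparing power spectra of the correlations on either side of a suspected void, is given below. The sampling rate, trace lengths, and frequency band are assumptions, and the random traces are stand-ins for field data.

```python
import numpy as np
from scipy.signal import fftconvolve, welch

# Sketch: cross-correlate two geophone recordings of traffic noise and
# compare the power spectra of the correlations on either side of a
# suspected void. Traces here are random stand-ins for field data.

fs = 500.0                                      # sampling rate (Hz), illustrative
rng = np.random.default_rng(1)
trace_ref = rng.normal(size=int(600 * fs))      # reference geophone
trace_a = rng.normal(size=trace_ref.size)       # geophone on a clear path
trace_b = 0.6 * rng.normal(size=trace_ref.size) # geophone behind the void

def noise_correlation(x, y):
    """Cross-correlate y with x (time-reversed convolution)."""
    return fftconvolve(y, x[::-1], mode="full")

cc_a = noise_correlation(trace_ref, trace_a)
cc_b = noise_correlation(trace_ref, trace_b)

# Power spectra of the correlation functions; a scattering "shadow"
# would appear as a band-limited deficit in cc_b relative to cc_a.
f, p_a = welch(cc_a, fs=fs, nperseg=4096)
_, p_b = welch(cc_b, fs=fs, nperseg=4096)
ratio = p_b / p_a
print("spectral ratio in the 10-50 Hz band:",
      ratio[(f >= 10) & (f <= 50)].mean())
```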

  20. Source-Type Identification Analysis Using Regional Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.

    2012-12-01

    Waveform inversion to determine the seismic moment tensor is a standard approach for determining the source mechanism of natural and manmade seismicity, and may be used to identify, or discriminate between, different types of seismic sources. The successful applications of the regional moment tensor method at the Nevada Test Site (NTS) and to the 2006 and 2009 North Korean nuclear tests (Ford et al., 2009a, 2009b, 2010) show that the method is robust and capable of source-type discrimination at regional distances. The well-separated populations of explosions, earthquakes and collapses on a Hudson et al. (1989) source-type diagram enable source-type discrimination; however, the question remains whether the separation of events is universal in other regions where we have limited station coverage and knowledge of Earth structure. Ford et al. (2012) have shown that combining regional waveform data and P-wave first motions removes the CLVD-isotropic tradeoff and uniquely discriminates the 2009 North Korean test as an explosion. Therefore, including additional constraints from regional and teleseismic P-wave first motions enables source-type discrimination in regions with limited station coverage. We present moment tensor analysis of earthquakes and explosions (M6) from the Lop Nor and Semipalatinsk test sites for station paths crossing Kazakhstan and Western China. We also present analyses of smaller events from industrial sites. In these sparse-coverage situations we combine regional long-period waveforms and high-frequency P-wave polarity from the same stations, as well as from teleseismic arrays, to constrain the source type. Discrimination capability with respect to velocity model and station coverage is examined, and additionally we investigate the velocity-model dependence of vanishing free-surface traction effects on seismic moment tensor inversion of shallow sources and recovery of explosive scalar moment. Our synthetic data tests indicate that biases in scalar seismic moment and discrimination for shallow sources are small and can be understood in a systematic manner. We are presently investigating the frequency dependence of vanishing traction for a very shallow (10 m depth) M2+ chemical explosion recorded at distances of several kilometers; preliminary results indicate that at the typical frequency passband we employ, the bias does not affect our ability to retrieve the correct source mechanism but may affect the retrieval of the correct scalar seismic moment. Finally, we assess discrimination capability in a composite P-value statistical framework.
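
    Once Green's functions for the six independent moment tensor elements are available, the waveform inversion itself reduces to a linear least-squares problem, d = Gm. The sketch below shows only that linear step with a random placeholder for G; it is not the Ford et al. workflow, and the explosion-like test tensor is an illustrative assumption.

```python
import numpy as np

# Linear moment tensor inversion sketch: d = G m, where the six columns of G
# are waveforms computed for unit moment tensor elements (Mxx, Myy, Mzz,
# Mxy, Mxz, Myz) and d is the observed waveform data from all stations.
# Here G is a random placeholder for real Green's functions.

rng = np.random.default_rng(2)
n_samples = 3 * 2000              # e.g. 3 components x 2000 time samples
G = rng.normal(size=(n_samples, 6))

m_true = np.array([1.0, 1.0, 1.0, 0.1, -0.2, 0.05])   # explosion-like tensor
d_obs = G @ m_true + 0.1 * rng.normal(size=n_samples)

# Least-squares solution for the moment tensor elements
m_est, *_ = np.linalg.lstsq(G, d_obs, rcond=None)

# The isotropic part (one third of the trace) is the simplest explosion indicator
iso = m_est[:3].mean()
print("estimated moment tensor:", np.round(m_est, 3))
print("isotropic component   :", round(iso, 3))
```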

  1. Identifying bubble collapse in a hydrothermal system using hidden Markov models

    USGS Publications Warehouse

    Dawson, P.B.; Benitez, M.C.; Lowenstern, J. B.; Chouet, B.A.

    2012-01-01

    Beginning in July 2003 and lasting through September 2003, the Norris Geyser Basin in Yellowstone National Park exhibited an unusual increase in ground temperature and hydrothermal activity. Using hidden Markov model theory, we identify over five million high-frequency (>15 Hz) seismic events observed at a temporary seismic station deployed in the basin in response to the increase in hydrothermal activity. The source of these seismic events is constrained to within ~100 m of the station, and produced ~3500-5500 events per hour with mean durations of ~0.35-0.45 s. The seismic event rate, air temperature, hydrologic temperatures, and surficial water flow of the geyser basin exhibited a marked diurnal pattern that was closely associated with solar thermal radiance. We interpret the source of the seismicity to be due to the collapse of small steam bubbles in the hydrothermal system, with the rate of collapse being controlled by surficial temperatures and daytime evaporation rates. Copyright 2012 by the American Geophysical Union.

  2. Identifying bubble collapse in a hydrothermal system using hidden Markov models

    USGS Publications Warehouse

    Dawson, Phillip B.; Benitez, M.C.; Lowenstern, Jacob B.; Chouet, Bernard A.

    2012-01-01

    Beginning in July 2003 and lasting through September 2003, the Norris Geyser Basin in Yellowstone National Park exhibited an unusual increase in ground temperature and hydrothermal activity. Using hidden Markov model theory, we identify over five million high-frequency (>15 Hz) seismic events observed at a temporary seismic station deployed in the basin in response to the increase in hydrothermal activity. The source of these seismic events is constrained to within ~100 m of the station, and produced ~3500–5500 events per hour with mean durations of ~0.35–0.45 s. The seismic event rate, air temperature, hydrologic temperatures, and surficial water flow of the geyser basin exhibited a marked diurnal pattern that was closely associated with solar thermal radiance. We interpret the source of the seismicity to be due to the collapse of small steam bubbles in the hydrothermal system, with the rate of collapse being controlled by surficial temperatures and daytime evaporation rates.
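
    The detection step in studies like this amounts to decoding a most-likely state sequence from continuous data. The following is a minimal two-state (noise/event) Gaussian HMM decoded with the Viterbi algorithm on a synthetic envelope; the trained multi-state models and spectral features used by the authors are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

# Minimal two-state (noise / event) hidden Markov model decoded with the
# Viterbi algorithm on a 1-D envelope feature. The real study used trained
# HMMs on spectral features; this only illustrates the decoding step.

rng = np.random.default_rng(3)

# Synthetic envelope: background noise with short bursts ("bubble collapses")
env = np.abs(rng.normal(0.0, 1.0, 5000))
for start in rng.integers(0, 4800, size=30):
    env[start:start + 20] += 5.0

obs = np.log(env + 1e-6)                      # observation sequence
means = np.array([0.0, np.log(5.0)])          # emission means for each state
stds = np.array([1.0, 1.0])
log_trans = np.log(np.array([[0.99, 0.01],    # noise -> noise/event
                             [0.10, 0.90]]))  # event -> noise/event
log_start = np.log(np.array([0.5, 0.5]))

def log_gauss(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

# Viterbi recursion
n, k = obs.size, 2
delta = np.zeros((n, k))
psi = np.zeros((n, k), dtype=int)
delta[0] = log_start + log_gauss(obs[0], means, stds)
for t in range(1, n):
    trans_scores = delta[t - 1][:, None] + log_trans
    psi[t] = trans_scores.argmax(axis=0)
    delta[t] = trans_scores.max(axis=0) + log_gauss(obs[t], means, stds)

# Backtrack the most likely state sequence
states = np.zeros(n, dtype=int)
states[-1] = delta[-1].argmax()
for t in range(n - 2, -1, -1):
    states[t] = psi[t + 1, states[t + 1]]

n_events = np.sum(np.diff(states) == 1)       # count noise -> event transitions
print("detected event onsets:", int(n_events))
```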

  3. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are integral to better understanding earthquake source physics and to improving seismic hazard assessment. It is therefore timely to conduct a large-scale validation exercise comparing the methods, parameterizations and data handling used in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally, we present new blind-test models with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise that rigorously assesses the performance and reliability of current inversion methods and to discuss future developments.

  4. Elastic parabolic equation solutions for underwater acoustic problems using seismic sources.

    PubMed

    Frank, Scott D; Odom, Robert I; Collis, Jon M

    2013-03-01

    Several problems of current interest involve elastic bottom range-dependent ocean environments with buried or earthquake-type sources, specifically oceanic T-wave propagation studies and interface wave related analyses. Additionally, observed deep shadow-zone arrivals are not predicted by ray theoretic methods, and attempts to model them with fluid-bottom parabolic equation solutions suggest that it may be necessary to account for elastic bottom interactions. In order to study energy conversion between elastic and acoustic waves, current elastic parabolic equation solutions must be modified to allow for seismic starting fields for underwater acoustic propagation environments. Two types of elastic self-starter are presented. An explosive-type source is implemented using a compressional self-starter and the resulting acoustic field is consistent with benchmark solutions. A shear wave self-starter is implemented and shown to generate transmission loss levels consistent with the explosive source. Source fields can be combined to generate starting fields for source types such as explosions, earthquakes, or pile driving. Examples demonstrate the use of source fields for shallow sources or deep ocean-bottom earthquake sources, where down slope conversion, a known T-wave generation mechanism, is modeled. Self-starters are interpreted in the context of the seismic moment tensor.

  5. A new view for the geodynamics of Ecuador: Implication in seismogenic source definition and seismic hazard assessment

    NASA Astrophysics Data System (ADS)

    Yepes, Hugo; Audin, Laurence; Alvarado, Alexandra; Beauval, Céline; Aguilar, Jorge; Font, Yvonne; Cotton, Fabrice

    2016-05-01

    A new view of Ecuador's complex geodynamics has been developed in the course of modeling seismic source zones for probabilistic seismic hazard analysis. This study focuses on two aspects of the plates' interaction at a continental scale: (a) age-related differences in rheology between the Farallon and Nazca plates—marked by the Grijalva rifted margin and its inland projection—as they subduct underneath central Ecuador, and (b) the rapidly changing convergence obliquity resulting from the convex shape of the South American northwestern continental margin. Both conditions satisfactorily explain several characteristics of the observed seismicity and of the interseismic coupling. Intermediate-depth seismicity reveals a severe flexure in the Farallon slab as it dips and contorts at depth, originating the El Puyo seismic cluster. The position and geometry of the two slabs below continental Ecuador also correlate with surface expressions observable in the local and regional geology and tectonics. The interseismic coupling is weak and shallow south of the Grijalva rifted margin and increases northward, with a heterogeneous pattern locally associated with the Carnegie ridge subduction. High convergence obliquity is responsible for the northeastward movement of the North Andean Block along localized fault systems. The Cosanga and Pallatanga fault segments of the North Andean Block-South American boundary concentrate most of the seismic moment release in continental Ecuador. Other inner-block faults located along the western border of the inter-Andean Depression also show a high rate of moderate-size earthquake production. Finally, a total of 19 seismic source zones were modeled in accordance with the proposed geodynamic and neotectonic scheme.

  6. Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Woo, Gordon

    2017-04-01

    For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process and that the spatial distribution of epicentres can be represented by a set of polygonal source zones within which seismicity is uniform. Building on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Following the Amatrice earthquake of 24 August 2016, the damaging earthquakes that struck Central Italy over the subsequent months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task, it has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, several salient European applications are presented.
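
    The core operation, spreading each epicentre over the map with a kernel to obtain a smoothed activity-rate density, can be sketched as follows. The fixed isotropic Gaussian bandwidth and the synthetic catalogue are illustrative assumptions; the method described above uses magnitude-dependent, anisotropic kernels.

```python
import numpy as np

# Sketch of a kernel-smoothed activity rate: each epicentre contributes a
# Gaussian kernel to a rate-density grid (events per unit area per year).
# A single fixed isotropic bandwidth is used here purely for illustration.

rng = np.random.default_rng(4)
n_events, years = 300, 50.0
lon = rng.uniform(20.0, 25.0, n_events)          # illustrative catalogue
lat = rng.uniform(38.0, 42.0, n_events)

h = 0.25                                         # kernel bandwidth (degrees)
glon, glat = np.meshgrid(np.arange(20.0, 25.01, 0.1),
                         np.arange(38.0, 42.01, 0.1))

rate = np.zeros_like(glon)
for x, y in zip(lon, lat):
    r2 = (glon - x) ** 2 + (glat - y) ** 2
    rate += np.exp(-0.5 * r2 / h**2) / (2 * np.pi * h**2)
rate /= years                                    # events per deg^2 per year

print("peak smoothed rate:", rate.max(), "events/deg^2/yr")
```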

  7. Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework

    NASA Astrophysics Data System (ADS)

    Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.

    2015-12-01

    Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to use a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link, enabling full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV. The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward. This enables comparison and integration of new efforts with existing results.
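
    The forward problem at the heart of such frequency-domain FWI codes is a Helmholtz solve per frequency. The sketch below assembles a plain 2D acoustic Helmholtz operator with a five-point stencil and solves for a point-source wavefield using SciPy's sparse direct solver; it is not Zephyr's 2.5D viscoacoustic discretization, and the grid, velocity model, and Dirichlet boundaries (which reflect energy) are simplifying assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Frequency-domain forward solve sketch: assemble the 2-D acoustic Helmholtz
# operator (del^2 + omega^2/c^2) on a regular grid and solve for the field of
# a point source. Simple Dirichlet edges are used instead of absorbing boundaries.

nz, nx, dx = 100, 100, 10.0                  # grid size and spacing (m)
c = np.full((nz, nx), 2000.0)                # velocity model (m/s)
c[60:, :] = 2500.0                           # a faster layer at depth
freq = 5.0                                   # modelling frequency (Hz)
omega = 2 * np.pi * freq

n = nz * nx
k2 = (omega / c.ravel()) ** 2
main = -4.0 / dx**2 + k2                     # 5-point Laplacian diagonal + k^2
A = sp.lil_matrix((n, n), dtype=complex)
A.setdiag(main)
for iz in range(nz):
    for ix in range(nx):
        row = iz * nx + ix
        for jz, jx in ((iz - 1, ix), (iz + 1, ix), (iz, ix - 1), (iz, ix + 1)):
            if 0 <= jz < nz and 0 <= jx < nx:
                A[row, jz * nx + jx] = 1.0 / dx**2

b = np.zeros(n, dtype=complex)
b[10 * nx + 50] = 1.0                        # point source near the surface

u = spsolve(A.tocsr(), b).reshape(nz, nx)    # monochromatic wavefield
print("field amplitude at a receiver:", abs(u[10, 80]))
```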

  8. Observation and modeling of source effects in coda wave interferometry at Pavlof volcano

    USGS Publications Warehouse

    Haney, M.M.; van Wijk, K.; Preston, L.A.; Aldridge, D.F.

    2009-01-01

    Sorting out source and path effects for seismic waves at volcanoes is critical for the proper interpretation of underlying volcanic processes. Source or path effects imply that seismic waves interact strongly with the volcanic subsurface, either through partial resonance in a conduit (Garces et al., 2000; Sturton and Neuberg, 2006) or by random scattering in the heterogeneous volcanic edifice (Wegler and Luhr, 2001). As a result, both source and path effects can cause seismic waves to repeatedly sample parts of the volcano, leading to enhanced sensitivity to small changes in material properties at those locations. The challenge for volcano seismologists is to detect and reliably interpret these subtle changes for the purpose of monitoring eruptions. © 2009 Society of Exploration Geophysicists.

  9. Assessment of macroseismic intensity in the Nile basin, Egypt

    NASA Astrophysics Data System (ADS)

    Fergany, Elsayed

    2018-01-01

    This work assesses deterministic seismic hazard and risk in terms of a maximum expected intensity map for the Egyptian Nile basin sector. A seismic source zone model of Egypt was delineated based on a compatible earthquake catalog updated to 2015, focal mechanisms, and the common tectonic elements. Four effective seismic source zones were identified along the Nile basin. The observed macroseismic intensity data along the basin were used to develop an intensity prediction equation defined in terms of moment magnitude. A maximum expected intensity map was then derived from the developed intensity prediction equation, the identified effective seismic source zones, and the maximum expected magnitude for each zone along the basin. The earthquake hazard and risk were discussed and analyzed in view of the maximum expected moment magnitude and the maximum expected intensity values for each effective source zone. Moderate expected magnitudes are expected to pose a high risk to the Cairo and Aswan regions. The results of this study could serve as a recommendation for planners charged with mitigating seismic risk in these strategic zones of Egypt.

  10. Probabilistic seismic hazard analysis for a nuclear power plant site in southeast Brazil

    NASA Astrophysics Data System (ADS)

    de Almeida, Andréia Abreu Diniz; Assumpção, Marcelo; Bommer, Julian J.; Drouet, Stéphane; Riccomini, Claudio; Prates, Carlos L. M.

    2018-05-01

    A site-specific probabilistic seismic hazard analysis (PSHA) has been performed for the only nuclear power plant site in Brazil, located 130 km southwest of Rio de Janeiro at Angra dos Reis. Logic trees were developed for both the seismic source characterisation and ground-motion characterisation models, in both cases seeking to capture the appreciable ranges of epistemic uncertainty with relatively few branches. This logic-tree structure allowed the hazard calculations to be performed efficiently while obtaining results that reflect the inevitable uncertainty in long-term seismic hazard assessment in this tectonically stable region. An innovative feature of the study is an additional seismic source zone added to capture the potential contributions of characteristic earthquakes associated with geological faults in the region surrounding the coastal site.

  11. The Utility of the Extended Images in Ambient Seismic Wavefield Migration

    NASA Astrophysics Data System (ADS)

    Girard, A. J.; Shragge, J. C.

    2015-12-01

    Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and widely used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags into the imaging condition. These extended images allow users to assess directly whether images focus better with different parameters, which leads to MVA techniques based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternative approach currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and for verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended-image domain. In synthetic ambient imaging tests with varying degrees of error introduced into the velocity model, the extended images are sensitive to these velocity-model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero lags show none. This may be a valuable missing piece for ambient migration techniques, which have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.

  12. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient case is travel time calculation in a 1D velocity model: for given source and receiver depths and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to account for differences between local and regional structures. Whenever possible, travel times through a 3D velocity model have to be calculated. This can be achieved using ray tracing or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method is more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented: a simple and very efficient tool for calculating travel times from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is a part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
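
    Since the pySeismicFMM source was not yet published at the time of this abstract, the same kind of computation can be sketched with the independent scikit-fmm package, which also implements the fast marching method on regular grids. The grid, velocity model, and source position below are illustrative assumptions.

```python
import numpy as np
import skfmm  # scikit-fmm: pip install scikit-fmm

# First-arrival travel times on a regular 2-D Cartesian grid with the fast
# marching method. scikit-fmm is used here as a stand-in; pySeismicFMM itself
# also supports 3-D and geographic grids.

nz, nx, dx = 200, 400, 50.0                   # grid and spacing (m)
speed = np.full((nz, nx), 3000.0)             # velocity model (m/s)
speed[100:, :] = 5000.0                       # faster half-space below 5 km

# phi is negative inside the source region and positive outside;
# its zero contour marks the source location.
phi = np.ones((nz, nx))
phi[5, 200] = -1.0                            # shallow source at x = 10 km

tt = skfmm.travel_time(phi, speed, dx=dx)     # travel time (s) to every cell
print("travel time to a surface receiver at x = 15 km:",
      float(tt[0, 300]), "s")
```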

  13. Mechanism of the 2015 volcanic tsunami earthquake near Torishima, Japan

    PubMed Central

    Satake, Kenji

    2018-01-01

    Tsunami earthquakes are a group of enigmatic earthquakes generating disproportionally large tsunamis relative to seismic magnitude. These events occur most typically near deep-sea trenches. Tsunami earthquakes occurring approximately every 10 years near Torishima on the Izu-Bonin arc are another example. Seismic and tsunami waves from the 2015 event [Mw (moment magnitude) = 5.7] were recorded by an offshore seafloor array of 10 pressure gauges, ~100 km away from the epicenter. We made an array analysis of dispersive tsunamis to locate the tsunami source within the submarine Smith Caldera. The tsunami simulation from a large caldera-floor uplift of ~1.5 m with a small peripheral depression yielded waveforms remarkably similar to the observations. The estimated central uplift, 1.5 m, is ~20 times larger than that inferred from the seismologically determined non–double-couple source. Thus, the tsunami observation is not compatible with the published seismic source model taken at face value. However, given the indeterminacy of Mzx, Mzy, and Mtensile of a shallow moment tensor source, it may be possible to find a source mechanism with efficient tsunami but inefficient seismic radiation that can satisfactorily explain both the tsunami and seismic observations, but this question remains unresolved. PMID:29740604

  14. Mechanism of the 2015 volcanic tsunami earthquake near Torishima, Japan.

    PubMed

    Fukao, Yoshio; Sandanbata, Osamu; Sugioka, Hiroko; Ito, Aki; Shiobara, Hajime; Watada, Shingo; Satake, Kenji

    2018-04-01

    Tsunami earthquakes are a group of enigmatic earthquakes generating disproportionally large tsunamis relative to seismic magnitude. These events occur most typically near deep-sea trenches. Tsunami earthquakes occurring approximately every 10 years near Torishima on the Izu-Bonin arc are another example. Seismic and tsunami waves from the 2015 event [Mw (moment magnitude) = 5.7] were recorded by an offshore seafloor array of 10 pressure gauges, ~100 km away from the epicenter. We made an array analysis of dispersive tsunamis to locate the tsunami source within the submarine Smith Caldera. The tsunami simulation from a large caldera-floor uplift of ~1.5 m with a small peripheral depression yielded waveforms remarkably similar to the observations. The estimated central uplift, 1.5 m, is ~20 times larger than that inferred from the seismologically determined non-double-couple source. Thus, the tsunami observation is not compatible with the published seismic source model taken at face value. However, given the indeterminacy of Mzx, Mzy, and Mtensile of a shallow moment tensor source, it may be possible to find a source mechanism with efficient tsunami but inefficient seismic radiation that can satisfactorily explain both the tsunami and seismic observations, but this question remains unresolved.

  15. A robust calibration technique for acoustic emission systems based on momentum transfer from a ball drop

    USGS Publications Warehouse

    McLaskey, Gregory C.; Lockner, David A.; Kilgore, Brian D.; Beeler, Nicholas M.

    2015-01-01

    We describe a technique to estimate the seismic moment of acoustic emissions and other extremely small seismic events. Unlike previous calibration techniques, it does not require modeling of the wave propagation, sensor response, or signal conditioning. Rather, this technique calibrates the recording system as a whole and uses a ball impact as a reference source or empirical Green's function. To correctly apply this technique, we develop mathematical expressions that link the seismic moment $M_{0}$ of internal seismic sources (i.e., earthquakes and acoustic emissions) to the impulse, or change in momentum $\Delta p$, of externally applied seismic sources (i.e., meteor impacts or, in this case, ball impact). We find that, at low frequencies, moment and impulse are linked by a constant, which we call the force-moment-rate scale factor $C_{F\dot{M}} = M_{0}/\Delta p$. This constant is equal to twice the speed of sound in the material from which the seismic sources were generated. Next, we demonstrate the calibration technique on two different experimental rock mechanics facilities. The first example is a saw-cut cylindrical granite sample that is loaded in a triaxial apparatus at 40 MPa confining pressure. The second example is a 2 m long fault cut in a granite sample and deformed in a large biaxial apparatus at lower stress levels. Using the empirical calibration technique, we are able to determine absolute source parameters including the seismic moment, corner frequency, stress drop, and radiated energy of these magnitude −2.5 to −7 seismic events.
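
    As a quick numerical illustration of the scaling $M_{0} = C_{F\dot{M}}\,\Delta p$ with $C_{F\dot{M}} = 2c$, the sketch below converts the momentum change of a small dropped ball into an equivalent seismic moment and magnitude. The ball mass, drop height, restitution coefficient, and sound speed are illustrative assumptions, not values from the experiments.

```python
import numpy as np

# Worked illustration of the ball-drop calibration relation M0 = C * dp,
# with C = 2 * c (twice the sound speed of the sample material).
# Ball mass, drop height, restitution, and sound speed are illustrative values.

m = 0.001          # ball mass (kg), e.g. a small bearing ball
h = 0.10           # drop height (m)
e = 0.6            # coefficient of restitution of the impact
c = 5000.0         # P-wave speed of the granite sample (m/s)

v = np.sqrt(2 * 9.81 * h)          # impact velocity (m/s)
dp = m * v * (1 + e)               # momentum change of the ball (N s)
C = 2 * c                          # force-moment-rate scale factor (m/s)
M0 = C * dp                        # equivalent seismic moment (N m)

Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)   # standard moment magnitude relation
print(f"impulse = {dp:.2e} N s, M0 = {M0:.2e} N m, Mw = {Mw:.1f}")
```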

  16. Controlled-source seismic interferometry with one way wave fields

    NASA Astrophysics Data System (ADS)

    van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.

    2008-12-01

    In seismic interferometry we generally cross-correlate recordings at two receiver locations and sum over an array of sources to retrieve a Green's function as if one of the receiver locations hosted a (virtual) source and the other hosted an actual receiver. One application of this concept is to redatum an array of surface sources to a downhole receiver location without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden; this is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source seismic interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves, with receivers sensing only down- or upgoing P- or S-waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. Required are the deployment of multi-component sources at the surface and multi-component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both source and receiver side.
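
    The core interferometric operation described above, correlating the records at two downhole receivers and stacking over the surface-source array, can be sketched as follows. The elastic up/down and P/S decomposition is omitted, and the random traces are placeholders for real borehole data.

```python
import numpy as np
from scipy.signal import fftconvolve

# Core operation of the virtual source method: for every surface source,
# cross-correlate the records at two downhole receivers and stack the
# correlations over sources. The elastic wavefield decomposition discussed
# in the abstract is omitted; traces here are random placeholders.

rng = np.random.default_rng(5)
n_src, nt = 64, 2000                        # surface sources, samples per trace
rec_a = rng.normal(size=(n_src, nt))        # downhole receiver A (virtual source)
rec_b = rng.normal(size=(n_src, nt))        # downhole receiver B

virtual_trace = np.zeros(2 * nt - 1)
for s in range(n_src):
    # correlate B with A for this source and sum over the source array
    virtual_trace += fftconvolve(rec_b[s], rec_a[s][::-1], mode="full")
virtual_trace /= n_src

lags = np.arange(-(nt - 1), nt)             # lag axis in samples
print("retrieved response length:", virtual_trace.size,
      "zero-lag index:", int(np.where(lags == 0)[0][0]))
```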

  17. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, such inversion approaches may yield valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the solutions provided are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real teleseismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.

  18. Newtonian noise and ambient ground motion for gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Beker, M. G.; van den Brand, J. F. J.; Hennes, E.; Rabeling, D. S.

    2012-06-01

    Fluctuations of the local gravitational field as a result of seismic and atmospheric displacements will limit the sensitivity of ground-based gravitational wave detectors at frequencies below 10 Hz. We discuss the implications of Newtonian noise for future third-generation gravitational wave detectors. The relevant seismic wave fields are predominantly of human origin and are dependent on local infrastructure and population density. Seismic studies presented here show that considerable seismic noise reduction is possible compared to current detector locations. A realistic seismic amplitude spectral density of a suitably quiet site should not exceed 0.5 nm/(Hz/f)2 above 1 Hz. Newtonian noise models have been developed both analytically and by finite element analysis. These show that the contribution to Newtonian noise from surface waves due to distant sources reduces significantly with depth. Seismic displacements from local sources and body waves then become the dominant contributors to the Newtonian fluctuations.

  19. Landquake dynamics inferred from seismic source inversion: Greenland and Sichuan events of 2017

    NASA Astrophysics Data System (ADS)

    Chao, W. A.

    2017-12-01

    In June 2017 two catastrophic landquake events occurred, in Greenland and in Sichuan. The Greenland event led to a tsunami hazard in the small town of Nuugaatsiaq. The landquake in Sichuan hit a town and resulted in over 100 deaths. Both events generated strong seismic signals recorded by the real-time global seismic network. I adopt an inversion algorithm to derive the landquake force time history (LFH) using the long-period waveforms, from which the landslide volume (~76 million m3) can be rapidly estimated, facilitating tsunami-wave modeling for early warning purposes. Based on an integrated approach involving tsunami forward simulation and seismic waveform inversion, this study has significant implications for issuing actionable warnings before hazardous tsunami waves strike populated areas. A two-single-force (SF) mechanism (two-block model) yields the best explanation for the Sichuan event, which suggests that a secondary event (seismically inferred volume: 8.2 million m3) may have been mobilized by the impact of collapsing mass from the initial rock avalanche (~5.8 million m3), likely causing a catastrophic disaster. The later source, with a force magnitude of 0.9967 × 10^11 N, occurred 70 seconds after the first mass movement. In contrast, the first event has a smaller force magnitude of 0.8116 × 10^11 N. In conclusion, seismically inferred physical parameters will substantially contribute to improving our understanding of landquake source mechanisms and to mitigating similar hazards in other parts of the world.

  20. Evaluation of Ground Vibrations Induced by Military Noise Sources

    DTIC Science & Technology

    2006-08-01

    Report excerpt (table of contents): Task 2—Determine the acoustic-to-seismic coupling coefficients C1 and C2; Task 3—Computational modeling of acoustically induced ground motion (a simple model of blast sound interaction with the ground under varying ground conditions).

  1. Mechanical coupling between earthquakes and volcanoes inferred from stress transfer models: evidence from Vesuvio, Etna and Alban Hills (Italy)

    NASA Astrophysics Data System (ADS)

    Cocco, M.; Feuillet, N.; Nostro, C.; Musumeci, C.

    2003-04-01

    We investigate the mechanical interactions between tectonic faults and volcanic sources through elastic stress transfer and discuss the results of several applications to Italian active volcanoes. We first present stress modeling results that point to a two-way coupling between Vesuvius eruptions and historical earthquakes in the Southern Apennines, which allows us to provide a physical interpretation of their statistical correlation. We then explore the elastic stress interaction between historical eruptions at the Etna volcano and the largest earthquakes in Eastern Sicily and Calabria. We show that the large 1693 seismic event caused an increase of compressive stress along the rift zone, which can be associated with the lack of flank eruptions of the Etna volcano for about 70 years after the earthquake. Moreover, the largest Etna eruptions preceded the large 1693 seismic event by a few decades. Our modeling results clearly suggest that all these catastrophic events are tectonically coupled. We also investigate the effect of elastic stress perturbations on the instrumental seismicity caused by magma inflation at depth, both at Etna and at the Alban Hills volcano. In particular, we model the seismicity pattern at the Alban Hills volcano (central Italy) during a seismic swarm that occurred in 1989-90 and interpret it in terms of Coulomb stress changes caused by magmatic processes in an extensional tectonic stress field. We verify that the earthquakes occur in areas of Coulomb stress increase and that their faulting mechanisms are consistent with the stress perturbation induced by the volcanic source. Our results suggest a link between faults and volcanic sources, which we interpret as a tectonic coupling explaining the seismicity in a large area surrounding the volcanoes.
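
    The Coulomb stress metric that underlies such calculations is commonly written in the general form below, where $\Delta\tau_s$ is the shear stress change resolved in the slip direction of the receiver fault, $\Delta\sigma_n$ is the normal stress change (positive for unclamping), and $\mu'$ is an effective friction coefficient; the specific parameter choices of this study are not reproduced here. A positive $\Delta \mathrm{CFS}$ brings the receiver fault closer to failure, a negative value (a stress shadow) moves it away from failure.

```latex
\Delta \mathrm{CFS} = \Delta\tau_s + \mu' \, \Delta\sigma_n
```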

  2. Advanced Seismic While Drilling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Radtke; John Fontenot; David Glowka

    A breakthrough has been discovered for controlling seismic sources to generate selectable low frequencies. Conventional seismic sources, including sparkers, rotary mechanical, hydraulic, air guns, and explosives, by their very nature produce high frequencies. This is counter to the need for long signal transmission through rock. The patent-pending SeismicPULSER{trademark} methodology has been developed for controlling otherwise high-frequency seismic sources to generate selectable low-frequency peak spectra applicable to many seismic applications. Specifically, we have demonstrated the application of a low-frequency sparker source which can be incorporated into a drill bit for Drill Bit Seismic While Drilling (SWD). To create the methodology of a controllable low-frequency sparker seismic source, it was necessary to learn how to maximize sparker efficiencies to couple to, and transmit through, rock with the study of sparker designs and mechanisms for (a) coupling the sparker-generated gas bubble expansion and contraction to the rock, (b) the effects of fluid properties and dynamics, (c) linear and non-linear acoustics, and (d) imparted force directionality. After extensive seismic modeling, the design of high-efficiency sparkers, laboratory high-frequency sparker testing, and field tests were performed at the University of Texas Devine seismic test site. The conclusion of the field test was that extremely high power levels would be required to have the range required for deep, 15,000+ ft, high-temperature, high-pressure (HTHP) wells. Thereafter, more modeling and laboratory testing led to the discovery of a method to control a sparker that could generate the low frequencies required for deep wells. The low-frequency sparker was successfully tested at the Department of Energy Rocky Mountain Oilfield Test Center (DOE RMOTC) field test site in Casper, Wyoming. An 8-in diameter by 26-ft long SeismicPULSER{trademark} drill string tool was designed and manufactured by TII. An APS Turbine Alternator powered the SeismicPULSER{trademark} to produce 2 Hz peak-frequency signals repeated every 20 seconds. Since the ION Geophysical, Inc. (ION) seismic survey surface recording system was designed to detect a minimum downhole signal of 3 Hz, successful performance was confirmed with a 5.3 Hz recording with the pumps running. The 2 Hz signal generated by the sparker was modulated with the 3.3 Hz signal produced by the mud pumps to create an intense 5.3 Hz peak-frequency signal. The low-frequency sparker source is ultimately capable of generating selectable peak frequencies of 1 to 40 Hz with high-frequency spectral content to 10 kHz. The lower frequencies and, perhaps, low-frequency sweeps are needed to achieve sufficient range and resolution for realtime imaging in deep (15,000 ft+), high-temperature (150 C) wells for (a) geosteering, (b) accurate seismic hole depth, (c) accurate pore pressure determinations ahead of the bit, (d) near-wellbore diagnostics with a downhole receiver and wired drill pipe, and (e) reservoir model verification. Furthermore, the pressure of the sparker bubble will disintegrate rock, resulting in increased overall rates of penetration. Other applications for the SeismicPULSER{trademark} technology are to deploy a low-frequency source for greater range on a wireline for Reverse Vertical Seismic Profiling (RVSP) and Cross-Well Tomography.
Commercialization of the technology is being undertaken by first contacting stakeholders to define the value proposition for rig site services utilizing SeismicPULSER{trademark} technologies. Stakeholders include national oil companies, independent oil companies, independents, service companies, and commercial investors. Service companies will introduce a new Drill Bit SWD service for deep HTHP wells. Collaboration will be encouraged between stakeholders in the form of joint industry projects to develop prototype tools and initial field trials. No barriers have been identified for developing, utilizing, and exploiting the low-frequency SeismicPULSER{trademark} source in a variety of applications. Risks will be minimized since Drill Bit SWD will not interfere with the drilling operation, and can be performed in a relatively quiet environment when the pumps are turned off. The new source must be integrated with other Measurement While Drilling (MWD) tools. To date, each of the oil companies and service companies contacted has shown interest in participating in the commercialization of the low-frequency SeismicPULSER{trademark} source. A technical paper has been accepted for presentation at the 2009 Offshore Technology Conference (OTC) in a Society of Exploration Geophysicists/American Association of Petroleum Geologists (SEG/AAPG) technical session.

  3. Deterministic seismic hazard macrozonation of India

    NASA Astrophysics Data System (ADS)

    Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.

    2012-10-01

    Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation without the logic-tree approach has also been done for comparison of the results. Contour maps showing the spatial variation of the hazard values are presented in the paper.

  4. Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum

    NASA Astrophysics Data System (ADS)

    Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-11-01

    The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and three-dimensional Earth structure. Although similar methods have been proposed previously, to the best of our knowledge they have not yet been applied to data. We apply this method to image the seasonally varying sources of Earth's hum during Northern and Southern Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during Northern Hemisphere winter, and at South Pacific coasts and several distinct locations in the Southern Ocean during Southern Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.

  5. 3D basin structure of the Santa Clara Valley constrained by ambient noise tomography

    NASA Astrophysics Data System (ADS)

    Cho, H.; Lee, S. J.; Rhie, J.; Kim, S.

    2017-12-01

    The basin structure is an important factor that controls the intensity and duration of ground shaking due to an earthquake. It is therefore important to study basin structure, both to better understand seismic hazard and to improve earthquake preparedness. An active-source seismic survey is the most appropriate method to determine basin structure in detail, but its applicability, especially in urban areas, is limited. In this study, we tested the potential of ambient noise tomography, which can be a cheaper and more easily applicable method than a traditional active-source survey, to construct a velocity model of the basin. Our test region is the Santa Clara Valley, one of the major urban sedimentary basins in the United States. We selected this region because continuous seismic recordings and well-defined velocity models are available. Continuous seismic recordings of 6 months from the short-period array of the Santa Clara Valley Seismic Experiment are cross-correlated with a 1 hour time window. The fast marching method and the subspace method are then jointly applied to construct 2-D group velocity maps between 0.2 and 4.0 Hz. A shear wave velocity model of the Santa Clara Valley is then calculated down to 5 km depth using a Bayesian inversion technique. Although our model cannot depict detailed structures, it is roughly comparable with the velocity model of the US Geological Survey, which is constrained by active seismic surveys and field studies. This result indicates that ambient noise tomography can be a replacement, at least in part, for an active seismic survey to construct the velocity model of a basin.

  6. Italian Case Studies Modelling Complex Earthquake Sources In PSHA

    NASA Astrophysics Data System (ADS)

    Gee, Robin; Peruzza, Laura; Pagani, Marco

    2017-04-01

    This study presents two examples of modelling complex seismic sources in Italy, carried out in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred on Collalto Stoccaggio, a natural gas storage facility in Northern Italy located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between the gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where the rates of earthquakes are assumed to be time-independent. The source model consists of faults and distributed seismicity to account for earthquakes that cannot be associated with specific structures. All potentially active faults within 50 km of the site are considered and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study in which we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of October 26th and 30th (M6.1 and M6.6 respectively, no deaths). The aftershock hazard is modelled using a fault source with complex geometry, based on literature data and field evidence associated with the August mainshock. Earthquake activity rates during the very first weeks after the deadly earthquake were used to calibrate an Omori-Utsu decay curve, and the magnitude distribution of aftershocks is assumed to follow a Gutenberg-Richter distribution. We apply uniform and non-uniform spatial distributions of the seismicity across the fault source by modulating the rates as a decreasing function of distance from the mainshock. The hazard results are computed for short exposure periods (1 month, before the occurrence of the October earthquakes) and compared to the background hazard given by law (MPS04) and to observations at some reference sites. We also show the results of disaggregation computed for the city of Amatrice. Finally, we attempt to update the results in light of the new "main" events that occurred afterwards in the region. All source modelling and hazard calculations are performed using the OpenQuake engine. We discuss the novelties of these works, and the benefits and limitations of both analyses, particularly in such different contexts of seismic hazard.
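
    The temporal and magnitude ingredients of such an aftershock hazard model can be sketched by combining an Omori-Utsu decay with Gutenberg-Richter scaling, as below. All parameter values are illustrative assumptions, not those calibrated in the study.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the aftershock-rate ingredients of an APSHA calculation:
# an Omori-Utsu temporal decay combined with a Gutenberg-Richter magnitude
# distribution. All parameter values below are illustrative only.

K, c, p = 100.0, 0.05, 1.1        # Omori-Utsu parameters (events/day, days, -)
b = 1.0                           # Gutenberg-Richter b-value
m_min, m_ref = 3.0, 5.0           # completeness magnitude and target threshold

def omori_rate(t_days):
    """Rate of aftershocks with M >= m_min at time t after the mainshock."""
    return K / (t_days + c) ** p

# Expected number of M >= m_min events in the next 30 days
n30, _ = quad(omori_rate, 0.0, 30.0)

# Fraction of those exceeding m_ref under Gutenberg-Richter scaling
frac = 10.0 ** (-b * (m_ref - m_min))
print(f"expected M>={m_min} events in 30 days: {n30:.0f}")
print(f"expected M>={m_ref} events in 30 days: {n30 * frac:.1f}")
```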

  7. Evaluation of seismic spatial interaction effects through an impact testing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, B.D.; Driesen, G.E.

    The consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects is an analytically difficult problem to assess. Analytical solutions to impact problems are conservative and only available for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto targets that were connected to instruments. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.

  8. Evaluation of seismic spatial interaction effects through an impact testing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, B.D.; Driesen, G.E.

    The consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects is an analytically difficult problem to assess. Analytical solutions to impact problems are conservative and only available for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto targets that were connected to instruments. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.

  9. Seismoelectric imaging of shallow targets

    USGS Publications Warehouse

    Haines, S.S.; Pride, S.R.; Klemperer, S.L.; Biondi, B.

    2007-01-01

    We have undertaken a series of controlled field experiments to develop seismoelectric experimental methods for near-surface applications and to improve our understanding of seismoelectric phenomena. In a set of off-line geometry surveys (source separated from the receiver line), we place seismic sources and electrode array receivers on opposite sides of a man-made target (two sand-filled trenches) to record separately two previously documented seismoelectric modes: (1) the electromagnetic interface response signal created at the target and (2) the coseismic electric fields located within a compressional seismic wave. With the seismic source point in the center of a linear electrode array, we identify the previously undocumented seismoelectric direct field, and the Lorentz field of the metal hammer plate moving in the Earth's magnetic field. We place the seismic source in the center of a circular array of electrodes (radial and circumferential orientations) to analyze the source-related direct and Lorentz fields and to establish that these fields can be understood in terms of simple analytical models. Using an off-line geometry, we create a multifold, 2D image of our trenches as dipping layers, and we also produce a complementary synthetic image through numerical modeling. These images demonstrate that off-line geometry (e.g., crosswell) surveys offer a particularly promising application of the seismoelectric method because they effectively separate the interface response signal from the (generally much stronger) coseismic and source-related fields. © 2007 Society of Exploration Geophysicists.

  10. Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast

    NASA Astrophysics Data System (ADS)

    Toda, Shinji; Enescu, Bogdan

    2011-03-01

    Numerous studies have retrospectively found that seismicity rates jump (drop) in response to coseismic Coulomb stress increases (decreases). The Collaboratory for the Study of Earthquake Predictability (CSEP) now provides an opportunity for prospective testing of the Coulomb hypothesis. Here we adapt our stress transfer model, which incorporates the rate- and state-dependent friction law, to the CSEP Japan seismicity forecast. We demonstrate how to compute the forecast rates of large shocks in 2009 using the large earthquakes of the past 120 years. The time-dependent impact of the coseismic stress perturbations explains qualitatively well the occurrence of recent moderate-size shocks. This ability partly resembles that of statistical earthquake clustering models. However, our model differs from them as follows: the off-fault aftershock zones can be simulated using finite fault sources; the regional areal patterns of triggered seismicity are modified by the dominant mechanisms of the potential sources; and the stresses imparted by large earthquakes produce stress shadows that lead to a reduction of the forecasted number of earthquakes. Although the model relies on several unknown parameters, it is the first physics-based model submitted to the CSEP Japan test center and has the potential to be tuned for short-term earthquake forecasts.
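
    A compact sketch, with assumed parameter values, of the Dieterich (1994) rate-and-state expression on which this class of Coulomb stress transfer forecasts is built: a stress step ΔCFF perturbs a background rate r and the perturbation relaxes over the aftershock duration t_a.

```python
import numpy as np

def dieterich_rate(t, delta_cff, r_background, a_sigma, t_a):
    """
    Seismicity rate R(t) after a sudden Coulomb stress change (Dieterich, 1994).

    t            : time since the stress step (years)
    delta_cff    : Coulomb stress change (MPa); positive promotes failure
    r_background : reference (background) seismicity rate (events/yr)
    a_sigma      : constitutive parameter A*sigma (MPa)
    t_a          : aftershock duration (years)
    """
    gamma = (np.exp(-delta_cff / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return r_background / gamma

# Illustrative values: a 0.1 MPa stress increase, background rate 2 events/yr,
# A*sigma = 0.04 MPa, aftershock duration 30 yr.
t = np.array([0.1, 1.0, 10.0, 50.0])
print(dieterich_rate(t, delta_cff=0.1, r_background=2.0, a_sigma=0.04, t_a=30.0))
```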

  11. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    NASA Astrophysics Data System (ADS)

    Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity, which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions recorded in Japan. In the derivation, statistical considerations guide the selection of both the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
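
    A schematic least-squares fit, not the authors' exact functional form or data, illustrating how a peak-velocity scaling in magnitude M and hypocentral distance r can carry dummy variables for individual site amplification; all records and coefficients below are placeholders.

```python
import numpy as np

# Hypothetical records: magnitude, hypocentral distance (km), site class, peak velocity (cm/s)
M    = np.array([5.5, 6.0, 6.5, 7.0, 5.8, 6.3, 6.8, 7.2])
r    = np.array([30., 50., 80., 120., 25., 60., 90., 150.])
site = np.array([0, 1, 1, 2, 0, 2, 1, 0])            # three local site classes
pv   = np.array([2.1, 3.0, 2.6, 2.2, 2.8, 2.4, 3.1, 1.9])

# Model: log10(PV) = c0 + c1*M + c2*log10(r) + sum_k d_k * I(site == k)
n_sites = site.max() + 1
dummies = np.eye(n_sites)[site][:, 1:]               # site class 0 absorbed in the intercept
X = np.column_stack([np.ones_like(M), M, np.log10(r), dummies])
coeffs, *_ = np.linalg.lstsq(X, np.log10(pv), rcond=None)

print("c0, c1, c2, site dummies:", np.round(coeffs, 3))
```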

  12. Numerical modeling of the 2017 active seismic infrasound balloon experiment

    NASA Astrophysics Data System (ADS)

    Brissaud, Q.; Komjathy, A.; Garcia, R.; Cutts, J. A.; Pauken, M.; Krishnamoorthy, S.; Mimoun, D.; Jackson, J. M.; Lai, V. H.; Kedar, S.; Levillain, E.

    2017-12-01

    We have developed a numerical tool to propagate acoustic and gravity waves in a coupled solid-fluid medium with topography. It is a hybrid method between a continuous Galerkin and a discontinuous Galerkin method that accounts for non-linear atmospheric waves, visco-elastic waves and topography. We apply this method to a recent experiment that took place in the Nevada desert to study acoustic waves from seismic events. This experiment, developed by JPL and its partners, aims to demonstrate the viability of a new approach to probe seismic-induced acoustic waves from a balloon platform. To the best of our knowledge, this could be the only way for planetary missions to perform tomography when surface conditions, with high pressure and temperature (e.g. Venus), make it impossible to use the conventional electronics routinely employed on Earth. To fully demonstrate the effectiveness of such a technique, one should also be able to reconstruct the observed signals through numerical modeling. To model the seismic hammer experiment and the subsequent acoustic wave propagation, we rely on a subsurface seismic model constructed from the seismometer measurements during the 2017 Nevada experiment and an atmospheric model built from meteorological data. The source is treated as a Gaussian point source located at the surface. Comparison between the numerical modeling and the experimental data could inform future mission designs and provide insight into a planet's interior structure.

  13. Icequake Tremors During Glacier Calving (Invited)

    NASA Astrophysics Data System (ADS)

    Walter, F.; O'Neel, S.; Bassis, J. N.; Fricker, H. A.; Pfeffer, W. T.

    2009-12-01

    Calving poses the largest uncertainty in the prediction of sea-level rise in response to global climate changes. A physically-based calving law has yet to be successfully implemented into ice-sheet models in order to adequately describe the mass loss of tidewater glaciers and ice shelves. Observations from a variety of glacial environments are needed in order to develop a theoretical framework for glacier calving. To this end, several recent investigations on glacier calving have involved the recording of seismic waves. In this context, the study of icequakes has been of high value, as it allows for the detection and monitoring of calving activity. However, there are unanswered fundamental questions concerning source aspects of calving-related seismic activity, such as the focal depths of icequakes preceding and accompanying calving events, failure mechanisms, and the role of fracturing and crevasse formation upstream from the glacier terminus. Icequake sources associated with the opening of surface crevasses are well understood. As glacier ice is often homogeneous, these waveforms are relatively simple and can be modeled using the moment tensor representation of a seismic point source. Calving-related seismicity, on the other hand, is more complex and occurs near the terminus of a glacier, which is often highly heterogeneous due to pervasive crevassing. The signals last up to several minutes or even hours and exhibit both low-frequency (1-3 Hz) and high-frequency (10-20 Hz) energy, or tremor-like waveforms. These characteristics can be explained by finite source properties, such as connecting and migrating fractures and repeated slip across contact planes between two bodies of ice. In this presentation we discuss sources of calving-related seismicity by comparing seismic calving records from several different glacial settings. We consider icequakes recorded during tidewater calving at Columbia Glacier, Alaska, during lake calving on Gornergletscher, Switzerland, and during ice shelf calving in Antarctica. The similarities and differences in the seismic signatures of these calving settings provide valuable insights and will be helpful in the theoretical treatment of glacier calving.

  14. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of the model space. The model space is randomized over centroid location and moment tensor eigenvectors. Point sources densely sample the summit area, and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
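
    A toy Monte Carlo loop in the spirit of the approach described: moment tensors are drawn with random eigenvector frames and eigenvalues (allowing a volumetric part), synthetics are formed from a precomputed Green's function library, and the variance reduction is tracked. The library here is a random placeholder, not the finite-difference single-force Green's functions of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder "Green's function" library: maps the 6 independent moment-tensor
# components at one candidate centroid to a concatenated waveform vector.
n_samples = 2000
G = rng.standard_normal((n_samples, 6))
d_obs = G @ np.array([1.0, 1.0, 2.0, 0.1, 0.0, -0.2])   # synthetic "observed" data

def random_moment_tensor():
    """Random moment tensor from a random orthonormal eigenvector frame and eigenvalues."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))     # random eigenvector frame
    lam = rng.uniform(-1.0, 2.0, size=3)                 # eigenvalues (volumetric part allowed)
    m = q @ np.diag(lam) @ q.T
    return np.array([m[0, 0], m[1, 1], m[2, 2], m[0, 1], m[0, 2], m[1, 2]])

best_mt, best_vr = None, -np.inf
for _ in range(5000):
    m6 = random_moment_tensor()
    d_syn = G @ m6
    scale = np.dot(d_obs, d_syn) / np.dot(d_syn, d_syn)  # free scalar moment
    vr = 1.0 - np.sum((d_obs - scale * d_syn) ** 2) / np.sum(d_obs ** 2)
    if vr > best_vr:
        best_mt, best_vr = scale * m6, vr

print("best variance reduction:", round(best_vr, 3))
```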

  15. Application of crowd-sourced data to multi-scale evolutionary exposure and vulnerability models

    NASA Astrophysics Data System (ADS)

    Pittore, Massimiliano

    2016-04-01

    Seismic exposure, defined as the assets (population, buildings, infrastructure) exposed to earthquake hazard and susceptible to damage, is a critical, but often neglected, component of seismic risk assessment. This partly stems from the burden associated with compiling a useful and reliable model over wide spatial areas. While detailed engineering data still have to be collected in order to constrain exposure and vulnerability models, the availability of increasingly large crowd-sourced datasets (e.g. OpenStreetMap) opens up the exciting possibility of generating incrementally evolving models. Integrating crowd-sourced and authoritative data using statistical learning methodologies can reduce model uncertainties and also provide additional drive and motivation for volunteered geoinformation collection. A case study in Central Asia will be presented and discussed.

  16. Seismic Hazard analysis of Adjaria Region in Georgia

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, Nato; Elashvili, Mikheil

    2014-05-01

    The most commonly used approach to determining seismic-design loads for engineering projects is probabilistic seismic-hazard analysis (PSHA). The primary output from a PSHA is a hazard curve showing the variation of a selected ground-motion parameter, such as peak ground acceleration (PGA) or spectral acceleration (SA), against the annual frequency of exceedance (or its reciprocal, the return period). The design value is the ground-motion level that corresponds to a preselected design return period. For many engineering projects, such as standard buildings and typical bridges, the seismic loading is taken from the appropriate seismic-design code, the basis of which is usually a PSHA. For more important engineering projects, where the consequences of failure are more serious (such as dams and chemical plants), it is more usual to obtain the seismic-design loads from a site-specific PSHA, in general using much longer return periods than those governing code-based design. The probabilistic seismic hazard calculations were performed using the CRISIS2007 software (Ordaz, Aguilar, and Arboleda; Instituto de Ingeniería, UNAM, Mexico). CRISIS implements a classical probabilistic seismic hazard methodology where seismic sources can be modelled as points, lines and areas. In the case of area sources, the software offers an integration procedure that takes advantage of a triangulation algorithm used for seismic source discretization. This solution improves calculation efficiency while maintaining a reliable description of source geometry and seismicity. Additionally, supplementary filters (e.g. a maximum site-source distance beyond which sources are excluded from the calculation) allow the program to balance precision and efficiency during hazard calculation. Earthquake temporal occurrence is assumed to follow a Poisson process, and the code supports two types of MFDs: a truncated exponential Gutenberg-Richter [1944] magnitude distribution and a characteristic magnitude distribution [Youngs and Coppersmith, 1985]. Notably, the software can deal with uncertainty in the seismicity input parameters, such as the maximum magnitude value. CRISIS offers a set of built-in GMPEs, as well as the possibility of defining new ones by providing information in a tabular format. Our study shows that, in the case of the Ajaristkali HPP study area, a significant contribution to the seismic hazard comes from local sources with quite low Mmax values; as a result, the two attenuation laws considered give quite different PGA and SA values.
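
    For reference, a small sketch of the doubly-truncated exponential Gutenberg-Richter magnitude-frequency distribution that codes such as CRISIS discretize for each source; the a-, b- and magnitude values below are illustrative only.

```python
import numpy as np

def truncated_gr_rates(a_value, b_value, m_min, m_max, dm=0.1):
    """
    Annual occurrence rates in magnitude bins for a doubly-truncated
    exponential Gutenberg-Richter distribution.
    """
    beta = b_value * np.log(10.0)
    rate_mmin = 10.0 ** (a_value - b_value * m_min)       # cumulative rate of M >= m_min
    edges = np.arange(m_min, m_max + dm / 2, dm)
    # Cumulative rate of M >= m, truncated at m_max
    cum = rate_mmin * (np.exp(-beta * (edges - m_min)) - np.exp(-beta * (m_max - m_min))) \
          / (1.0 - np.exp(-beta * (m_max - m_min)))
    return edges[:-1] + dm / 2, cum[:-1] - cum[1:]         # bin centres, incremental rates

mags, rates = truncated_gr_rates(a_value=3.5, b_value=0.9, m_min=4.5, m_max=7.0)
print(np.round(mags[:5], 2), np.round(rates[:5], 4))
```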

  17. Source mechanisms of a collapsing solution mine cavity

    NASA Astrophysics Data System (ADS)

    Lennart Kinscher, Jannes; Cesca, Simone; Bernard, Pascal; Contrucci, Isabelle; Mangeney, Anne; Piguet, Jack Pierre; Bigarre, Pascal

    2016-04-01

    The development and collapse of a ~200 m wide salt solution mining cavity was seismically monitored in the Lorraine basin in northeastern France. Seismic monitoring and other geophysical in situ measurements were part of a large multi-parameter research project funded by the research "group for the impact and safety of underground works" (GISOS), whose database is being integrated into the EPOS platform (European Plate Observing System). The recorded microseismic events (~50,000 in total) show a swarm-like behaviour, with clustering sequences lasting from seconds to days, and distinct spatiotemporal migration. The majority of swarming signals are likely related to detachment and block breakage processes occurring at the cavity roof. Body wave amplitude patterns indicate the presence of relatively stable source mechanisms, either associated with dip-slip and/or tensile faulting. However, short inter-event times, the high-frequency geophone recordings, and the limited network station coverage often limit the application of classical source analysis techniques. In order to deal with these shortcomings, we examined the source mechanisms through different procedures, including modelling of observed and synthetic waveforms and amplitude spectra of some well-located events, as well as modelling of peak-to-peak amplitude ratios for most of the detected events. The latter approach was used to infer the average source mechanism of many swarming events at once by using a single three-component station. To our knowledge this approach is applied here for the first time and represents a useful tool for source studies of seismic swarms and seismicity clusters. The results of the different methods are consistent and show that at least 50% of the microseismic events have remarkably stable source mechanisms, associated with similarly oriented thrust faults, striking NW-SE and dipping around 35-55°. The consistent source mechanisms are probably related to the presence of a preferential direction of pre-existing fault structures. As an interesting by-product, we demonstrate, for the first time directly on seismic data, that the source radiation pattern significantly controls the detection capability of a seismic station and network.

  18. Calibration of the R/V Marcus G. Langseth Seismic Array in shallow Cascadia waters using the Multi-Channel Streamer

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Tolstoy, M.; Carton, H. D.

    2013-12-01

    In the summer of 2012, two multi-channel seismic (MCS) experiments, Cascadia Open-Access Seismic Transects (COAST) and Ridge2Trench, were conducted in the offshore Cascadia region. An area of growing environmental concern with active source seismic experiments is the potential impact of the received sound on marine mammals, but data relating to this issue is limited. For these surveys sound level 'mitigation radii' are established for the protection of marine mammals, based on direct arrival modeling and previous calibration experiments. Propagation of sound from seismic arrays can be accurately modeled in deep-water environments, but in shallow and sloped environments the complexity of local geology and bathymetry can make it difficult to predict sound levels as a function of distance from the source array. One potential solution to this problem is to measure the received levels in real-time using the ship's streamer (Diebold et al., 2010), which would allow the dynamic determination of suitable mitigation radii. We analyzed R/V Langseth streamer data collected on the shelf and slope off the Washington coast during the COAST experiment to measure received levels in situ up to 8 km away from the ship. Our analysis shows that water depth and bathymetric features can affect received levels in shallow water environments. The establishment of dynamic mitigation radii based on local conditions may help maximize the safety of marine mammals while also maximizing the ability of scientists to conduct seismic research. With increasing scientific and societal focus on subduction zone environments, a better understanding of shallow water sound propagation is essential for allowing seismic exploration of these hazardous environments to continue. Diebold, J. M., M. Tolstoy, L. Doermann, S. Nooner, S. Webb, and T. J. Crone (2010) R/V Marcus G. Langseth Seismic Source: Modeling and Calibration. Geochemistry, Geophysics, Geosystems, 11, Q12012, doi:10.1029/2010GC003216.

  19. Developing seismogenic source models based on geologic fault data

    USGS Publications Warehouse

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the Euro-Mediterranean, http://www.share-eu.org/; EMME in the Middle East, http://www.emme-gem.org/) and global scale (e.g., GEM, http://www.globalquakemodel.org/; Anonymous 2008). To some extent, each of these efforts is still trying to resolve the level of optimal detail required for this type of compilation. The comparison we provide defines a common standard for consideration by the international community for future regional and global seismogenic source models by identifying the necessary parameters that capture the essence of geological fault data in order to characterize seismogenic sources. In addition, we inform potential users of differences in our usage of common geological/seismological terms to avoid inappropriate use of the data in our models and provide guidance to convert the data from one model to the other (for detailed instructions, see the electronic supplement to this article). Applying our recommendations will permit probabilistic seismic hazard assessment codes to run seamlessly using either seismogenic source input. The USGS and INGV database schema compare well at a first-level inspection. Both databases contain a set of fields representing generalized fault three-dimensional geometry and additional fields that capture the essence of past earthquake occurrences. Nevertheless, there are important differences. When we further analyze supposedly comparable fields, many are defined differently. 
These differences would cause anomalous results in hazard prediction if one assumes the values are similarly defined. The data, however, can be made fully compatible using simple transformations.

  20. Automated seismic waveform location using Multichannel Coherency Migration (MCM)-I. Theory

    NASA Astrophysics Data System (ADS)

    Shi, Peidong; Angus, Doug; Rost, Sebastian; Nowacki, Andy; Yuan, Sanyi

    2018-03-01

    With the proliferation of dense seismic networks sampling the full seismic wavefield, recorded seismic data volumes are getting bigger and automated analysis tools to locate seismic events are essential. Here, we propose a novel Multichannel Coherency Migration (MCM) method to locate earthquakes in continuous seismic data and reveal the location and origin time of seismic events directly from recorded waveforms. By continuously calculating the coherency between waveforms from different receiver pairs, MCM greatly expands the available information which can be used for event location. MCM does not require phase picking or phase identification, which allows fully automated waveform analysis. By migrating the coherency between waveforms, MCM leads to improved source energy focusing. We have tested and compared MCM to other migration-based methods in noise-free and noisy synthetic data. The tests and analysis show that MCM is noise resistant and can achieve more accurate results compared with other migration-based methods. MCM is able to suppress strong interference from other seismic sources occurring at a similar time and location. It can be used with arbitrary 3D velocity models and is able to obtain reasonable location results with smooth but inaccurate velocity models. MCM exhibits excellent location performance and can be easily parallelized giving it large potential to be developed as a real-time location method for very large datasets.
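
    A heavily simplified illustration of the coherency-migration idea: for one candidate source location and origin time, traces are aligned with predicted traveltimes and the average pairwise correlation coefficient is taken as the migration value. The traveltime input and window length are assumptions, and this is not the authors' MCM code.

```python
import numpy as np
from itertools import combinations

def mcm_value(traces, dt, origin_time, traveltimes, window_s=1.0):
    """
    Stack of pairwise waveform coherency for one candidate (location, origin time).

    traces      : (n_receivers, n_samples) array of recorded waveforms
    dt          : sample interval (s)
    origin_time : candidate origin time (s)
    traveltimes : predicted traveltime (s) from the candidate location to each receiver
    """
    n_win = int(window_s / dt)
    windows = []
    for trace, tt in zip(traces, traveltimes):
        i0 = int(round((origin_time + tt) / dt))
        if i0 < 0 or i0 + n_win > trace.size:
            return 0.0
        w = trace[i0:i0 + n_win]
        windows.append((w - w.mean()) / (w.std() + 1e-12))
    # Average absolute correlation coefficient over all receiver pairs
    cc = [np.mean(a * b) for a, b in combinations(windows, 2)]
    return float(np.mean(np.abs(cc)))

# A grid search would call mcm_value for every candidate location and origin time
# and pick the maximum; traveltimes would come from the (possibly smooth) velocity model.
```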

  1. Evaluation of seismic hazard at the northwestern part of Egypt

    NASA Astrophysics Data System (ADS)

    Ezzelarab, M.; Shokry, M. M. F.; Mohamed, A. M. E.; Helal, A. M. A.; Mohamed, Abuoelela A.; El-Hadidy, M. S.

    2016-01-01

    The objective of this study is to evaluate the seismic hazard in northwestern Egypt using the probabilistic seismic hazard assessment approach. The probabilistic approach was carried out based on a recent data set in order to take into account the historical seismicity and updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a proposed seismic source model was presented. The doubly-truncated exponential model was adopted for calculation of the recurrence parameters. Ground-motion prediction equations recently recommended by experts, and developed from earthquake data obtained in tectonic environments similar to those in and around the studied area, were weighted and used for the assessment of seismic hazard within a logic tree framework. Considering a grid of 0.2° × 0.2° covering the study area, seismic hazard curves were calculated for every node. Hazard maps at bedrock conditions were produced for peak ground acceleration, in addition to six spectral periods (0.1, 0.2, 0.3, 1.0, 2.0 and 3.0 s), for return periods of 72, 475 and 2475 years. Uniform hazard spectra for two selected rock sites in the cities of Alexandria and Mersa Matruh were provided. Finally, the hazard curves were de-aggregated to determine the sources that contribute most to the hazard at the 10% probability of exceedance in 50 years level for the selected sites.
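
    The return periods quoted above follow from the standard Poisson relation between exceedance probability and exposure time; a small sketch of the conversion is given below.

```python
import numpy as np

def return_period(prob_exceedance, exposure_years):
    """Return period implied by a Poisson exceedance probability over an exposure time."""
    return -exposure_years / np.log(1.0 - prob_exceedance)

def exceedance_probability(return_period_years, exposure_years):
    """Probability of at least one exceedance during the exposure time (Poisson assumption)."""
    return 1.0 - np.exp(-exposure_years / return_period_years)

# 10% in 50 years corresponds to the familiar ~475-year return period,
# and a 2475-year return period corresponds to ~2% in 50 years.
print(f"{return_period(0.10, 50.0):.0f} years")           # ~475
print(f"{exceedance_probability(2475.0, 50.0):.3f}")       # ~0.020
```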

  2. Magnitude, moment, and measurement: The seismic mechanism controversy and its resolution.

    PubMed

    Miyake, Teru

    This paper examines the history of two related problems concerning earthquakes, and the way in which a theoretical advance was involved in their resolution. The first problem is the development of a physical, as opposed to empirical, scale for measuring the size of earthquakes. The second problem is that of understanding what happens at the source of an earthquake. There was a controversy about what the proper model for the seismic source mechanism is, which was finally resolved through advances in the theory of elastic dislocations. These two problems are linked, because the development of a physically-based magnitude scale requires an understanding of what goes on at the seismic source. I will show how the theoretical advances allowed seismologists to re-frame the questions they were trying to answer, so that the data they gathered could be brought to bear on the problem of seismic sources in new ways. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. How seismicity and shear stress-generated tilt can indicate imminent explosions on Tungurahua

    NASA Astrophysics Data System (ADS)

    Neuberg, Jurgen; Mothes, Patricia; Collinson, Amy; Marsden, Luke

    2017-04-01

    Seismic swarms and tilt measurements on active silicic volcanoes have been successfully used to assess their eruption potential. Swarms of low-frequency seismic events have been associated with brittle failure or stick-slip motion of magma during ascent and have been used to estimate magma ascent qualitatively. Tilt signals are extremely sensitive indicators of volcano deformation, and their interpretation includes shear stress as a generating source as well as inflation or deflation of a shallow magma reservoir. Here we use data sets from different tiltmeters deployed on Tungurahua volcano, Ecuador, and contrast the two source models for different locations and time intervals. We analyse a simultaneously recorded seismic data set and address the question of shear stress partitioning resulting in both the generation of tilt and low-frequency seismicity in critical phases prior to Vulcanian explosions.

  4. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
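
    An illustrative sketch of the misfit and noise model described above: the decorrelation D = 1 - CC between an observed and a modelled trace, scored with a log-normal likelihood whose moments would in practice come from the reference set of source solutions; the traces and the μ, σ values here are placeholders.

```python
import numpy as np

def decorrelation(obs, syn):
    """D = 1 - CC, the waveform decorrelation used as the misfit measure."""
    obs = obs - obs.mean()
    syn = syn - syn.mean()
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc

def log_likelihood(D, mu, sigma):
    """Log-density of a log-normal noise model evaluated at the decorrelation D."""
    return -np.log(D * sigma * np.sqrt(2.0 * np.pi)) \
           - (np.log(D) - mu) ** 2 / (2.0 * sigma ** 2)

# Example with a synthetic pair of traces and placeholder noise parameters
t = np.linspace(0.0, 60.0, 1500)
obs = np.sin(0.5 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
syn = np.sin(0.5 * t + 0.05)
D = decorrelation(obs, syn)
print(round(D, 4), round(log_likelihood(D, mu=np.log(0.05), sigma=0.8), 3))
```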

  5. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided one has reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to exploring large data sets and models.

  6. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE PAGES

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...

    2018-02-14

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided one has reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to exploring large data sets and models.
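
    A minimal Bokeh sketch in the spirit of the examples described, plotting a synthetic three-component record with a shared, zoomable time axis; it uses only the standard bokeh.plotting and bokeh.layouts interfaces and is not the authors' code.

```python
import numpy as np
from bokeh.layouts import column
from bokeh.plotting import figure, show

# Synthetic three-component seismogram (placeholder for real data)
t = np.linspace(0.0, 120.0, 3000)
rng = np.random.default_rng(1)
components = {c: np.exp(-0.03 * t) * np.sin(0.8 * t + phase) + 0.05 * rng.standard_normal(t.size)
              for c, phase in zip("ZNE", (0.0, 1.0, 2.0))}

panels = []
for name, data in components.items():
    kwargs = {"x_range": panels[0].x_range} if panels else {}   # link the time axes
    p = figure(height=180, width=800, title=f"Component {name}",
               x_axis_label="time (s)", **kwargs)
    p.line(t, data, line_width=1)
    panels.append(p)

show(column(*panels))   # opens an interactive, zoomable plot in the browser
```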

  7. Comparison of Earthquake Damage Patterns and Shallow-Depth Vs Structure Across the Napa Valley, Inferred From Multichannel Analysis of Surface Waves (MASW) and Multichannel Analysis of Love Waves (MALW) Modeling of Basin-Wide Seismic Profiles

    NASA Astrophysics Data System (ADS)

    Chan, J. H.; Catchings, R.; Strayer, L. M.; Goldman, M.; Criley, C.; Sickler, R. R.; Boatwright, J.

    2017-12-01

    We conducted an active-source seismic investigation across the Napa Valley (Napa Valley Seismic Investigation-16) in September 2016 consisting of two basin-wide seismic profiles; one profile was 20 km long and N-S-trending (338°), and the other 15 km long and E-W-trending (80°) (see Catchings et al., 2017). Data from the NVSI-16 seismic investigation were recorded using a total of 666 vertical- and horizontal-component seismographs, spaced 100 m apart along both seismic profiles. Seismic sources were generated by a total of 36 buried explosions spaced 1 km apart. The two seismic profiles intersected in downtown Napa, where a large number of buildings were red-tagged by the City following the 24 August 2014 Mw 6.0 South Napa earthquake. From the recorded Rayleigh and Love waves, we developed 2-dimensional S-wave velocity models to depths of about 0.5 km using the multichannel analysis of surface waves (MASW) method. Our MASW (Rayleigh) and MALW (Love) models show two prominent low-velocity (Vs = 350 to 1300 m/s) sub-basins that were also previously identified from gravity studies (Langenheim et al., 2010). These basins trend NW and coincide with the locations of more than 1500 buildings within the City of Napa that were red- or yellow-tagged after the 2014 South Napa earthquake. The observed correlation between the low-Vs, deep basins and the red- and yellow-tagged buildings in Napa suggests that similar large-scale seismic investigations can be performed elsewhere; such correlations provide insight into the likely locations of significant structural damage resulting from future earthquakes that occur adjacent to or within sedimentary basins.

  8. Seismo-volcano source localization with triaxial broad-band seismic array

    NASA Astrophysics Data System (ADS)

    Inza, L. A.; Mars, J. I.; Métaxian, J. P.; O'Brien, G. S.; Macedo, O.

    2011-10-01

    Seismo-volcano source localization is essential to improve our understanding of eruptive dynamics and of magmatic systems. The lack of clear seismic wave phases prohibits the use of classical location methods. Seismic antennas composed of one-component (1C) seismometers provide a good estimate of the backazimuth of the wavefield. The depth, on the other hand, is difficult or impossible to determine. As in classical seismology, the use of three-component (3C) seismometers is now common in volcano studies. To determine the source location parameters (backazimuth and depth), we extend the 1C seismic antenna approach to 3C. This paper discusses a high-resolution location method using a 3C array survey (3C-MUSIC algorithm) with data from two seismic antennas installed on an andesitic volcano in Peru (Ubinas volcano). One of the main scientific questions related to the eruptive process of Ubinas volcano is the relationship between the magmatic explosions and long-period (LP) swarms. After introducing the 3C array theory, we evaluate the robustness of the location method on a full-wavefield 3-D synthetic data set generated using a digital elevation model of Ubinas volcano and a homogeneous velocity model. Results show that the backazimuth determined using the 3C array has a smaller error than that from a 1C array. Only the 3C method allows recovery of the source depths. Finally, we applied the 3C approach to two seismic events recorded in 2009. Crossing the estimated backazimuth and incidence angles, we find sources located 1000 ± 660 m and 3000 ± 730 m below the bottom of the active crater for the explosion and the LP event, respectively. Therefore, extending 1C arrays to 3C arrays in volcano monitoring allows a more accurate determination of the source epicentre and, in addition, an estimate of the source depth.

  9. Resolving source mechanisms of microseismic swarms induced by solution mining

    NASA Astrophysics Data System (ADS)

    Kinscher, J.; Cesca, S.; Bernard, P.; Contrucci, I.; Mangeney, A.; Piguet, J. P.; Bigarré, P.

    2016-07-01

    In order to improve our understanding of hazardous underground cavities, the development and collapse of a ˜200 m wide salt solution mining cavity was seismically monitored in the Lorraine basin in northeastern France. The microseismic events show a swarm-like behaviour, with clustering sequences lasting from seconds to days, and distinct spatiotemporal migration. Observed microseismic signals are interpreted as the result of detachment and block breakage processes occurring at the cavity roof. Body wave amplitude patterns indicated the presence of relatively stable source mechanisms, either associated with dip-slip and/or tensile faulting. Signal overlaps during swarm activity due to short interevent times, the high-frequency geophone recordings and the limited network station coverage often limit the application of classical source analysis techniques. To overcome these shortcomings, we investigated the source mechanisms through different procedures including modelling of observed and synthetic waveforms and amplitude spectra of some well-located events, as well as modelling of peak-to-peak amplitude ratios for the majority of the detected events. We extended the latter approach to infer the average source mechanism of many swarming events at once, using multiple events recorded at a single three component station. This methodology is applied here for the first time and represents a useful tool for source studies of seismic swarms and seismicity clusters. The results obtained with different methods are consistent and indicate that the source mechanisms for at least 50 per cent of the microseismic events are remarkably stable, with a predominant thrust faulting regime with faults similarly oriented, striking NW-SE and dipping around 35°-55°. This dominance of consistent source mechanisms might be related to the presence of a preferential direction of pre-existing crack or fault structures. As an interesting byproduct, we demonstrate, for the first time directly on seismic data, that the source radiation pattern significantly controls the detection capability of a seismic station and network.

  10. Seismic multiplet response triggered by melt at Blood Falls, Taylor Glacier, Antarctica

    NASA Astrophysics Data System (ADS)

    Carmichael, Joshua D.; Pettit, Erin C.; Hoffman, Matt; Fountain, Andrew; Hallet, Bernard

    2012-09-01

    Meltwater input often triggers a seismic response from glaciers and ice sheets. It is difficult, however, to measure melt production on glaciers directly, while subglacial water storage is not directly observable. Therefore, we document temporal changes in seismicity from a dry-based polar glacier (Taylor Glacier, Antarctica) during a melt season using a synthesis of seismic observation and melt modeling. We record icequakes using a dense six-receiver network of three-component geophones and compare this with melt input generated from a calibrated surface energy balance model. In the absence of modeled surface melt, we find that seismicity is well-described by a diurnal signal composed of microseismic events in lake and glacial ice. During melt events, the diurnal signal is suppressed and seismicity is instead characterized by large glacial icequakes. We perform network-based correlation and clustering analyses of seismic record sections and determine that 18% of melt-season icequakes are repetitive (multiplets). The epicentral locations for these multiplets suggest that they are triggered by meltwater produced near a brine seep known as Blood Falls. Our observations of the corresponding P-wave first motions are consistent with volumetric source mechanisms. We suggest that surface melt enables a persistent pathway through this cold ice to an englacial fracture system that is responsible for brine release episodes from the Blood Falls seep. The scalar moments for these events suggest that the volumetric increase at the source region can be explained by melt input.

  11. Local SPTHA through tsunami inundation simulations: a test case for two coastal critical infrastructures in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Volpe, M.; Selva, J.; Tonini, R.; Romano, F.; Lorito, S.; Brizuela, B.; Argyroudis, S.; Salzano, E.; Piatanesi, A.

    2016-12-01

    Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) is a methodology to assess the exceedance probability for different thresholds of tsunami hazard intensity, at a specific site or region in a given time period, due to seismic sources. A large number of high-resolution inundation simulations is typically required to take into account the full variability of potential seismic sources and their slip distributions. Starting from regional SPTHA offshore results, the computational cost can be reduced by considering only a subset of 'important' scenarios for the inundation calculations. We here use a method based on an event tree for the treatment of the aleatory variability of the seismic sources, a cluster analysis on the offshore results to define the important sources, and an ensemble modeling approach for the treatment of epistemic uncertainty. We consider two target sites in the Mediterranean (Milazzo, Italy, and Thessaloniki, Greece) where coastal (non-nuclear) critical infrastructures (CIs) are located. After performing a regional SPTHA covering the whole Mediterranean, for each target site a few hundred representative scenarios are filtered out of all the potential seismic sources and the tsunami inundation is explicitly modeled, obtaining a site-specific SPTHA with a complete characterization of the tsunami hazard in terms of flow depth and velocity time histories. Moreover, we also explore the variability of SPTHA at the target site accounting for coseismic deformation (i.e. uplift or subsidence) due to near-field sources located in very shallow water. The results are suitable for, and will be applied to, subsequent multi-hazard risk analysis for the CIs. These applications have been developed in the framework of the Italian Flagship Project RITMARE, the EC FP7 ASTARTE (Grant agreement 603839) and STREST (Grant agreement 603389) projects, and the INGV-DPC Agreement.

  12. Towards a first design of a Newtonian-noise cancellation system for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Coughlin, M.; Mukund, N.; Harms, J.; Driggers, J.; Adhikari, R.; Mitra, S.

    2016-12-01

    Newtonian gravitational noise from seismic fields is predicted to be a limiting noise source at low frequency for second generation gravitational-wave detectors. Mitigation of this noise will be achieved by Wiener filtering using arrays of seismometers deployed in the vicinity of all test masses. In this work, we present optimized configurations of seismometer arrays using a variety of simplified models of the seismic field based on seismic observations at LIGO Hanford. The model that best fits the seismic measurements leads to noise reduction limited predominantly by seismometer self-noise. A first simplified design of seismic arrays for Newtonian-noise cancellation at the LIGO sites is presented, which suggests that it will be sufficient to monitor surface displacement inside the buildings.
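
    A simplified, zero-lag sketch of the Wiener-filter step: given array channels X and a target channel y, the optimal coefficients are w = C_xx^{-1} C_xy and the filtered prediction X·w is subtracted from the target. The synthetic "seismic field" and coupling used below are placeholders, not LIGO site data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic seismometer array: n_ch channels sharing a common seismic field plus self-noise
n_samples, n_ch = 20000, 6
field = rng.standard_normal(n_samples)
X = np.array([np.convolve(field, rng.uniform(0.5, 1.5, 5), mode="same")
              + 0.2 * rng.standard_normal(n_samples) for _ in range(n_ch)]).T
# Target channel: the (synthetic) Newtonian-noise contribution seen by the detector
y = np.convolve(field, [0.8, 0.3, 0.1], mode="same") + 0.05 * rng.standard_normal(n_samples)

# Zero-lag multichannel Wiener filter: w = C_xx^{-1} C_xy
Cxx = X.T @ X / n_samples
Cxy = X.T @ y / n_samples
w = np.linalg.solve(Cxx, Cxy)

residual = y - X @ w
print("residual / original RMS:", round(np.std(residual) / np.std(y), 3))
```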

  13. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times and then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and to do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB of RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB of RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space, the runtime increases dramatically due to slow file I/O. The extremely high computational speed of the GPUs, combined with the large amount of RAM available for each modeling, lets us do high-frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today when performing large-scale modeling and inversion in geophysics.

  14. Open Source Tools for Seismicity Analysis

    NASA Astrophysics Data System (ADS)

    Powers, P.

    2010-12-01

    The spatio-temporal analysis of seismicity plays an important role in earthquake forecasting and is integral to research on earthquake interactions and triggering. For instance, the third version of the Uniform California Earthquake Rupture Forecast (UCERF), currently under development, will use Epidemic Type Aftershock Sequences (ETAS) as a model for earthquake triggering. UCERF will be a "living" model and therefore requires robust, tested, and well-documented ETAS algorithms to ensure transparency and reproducibility. Likewise, as earthquake aftershock sequences unfold, real-time access to high-quality hypocenter data makes it possible to monitor the temporal variability of statistical properties such as the parameters of the Omori law and the Gutenberg-Richter b-value. Such statistical properties are valuable because they provide a measure of how much a particular sequence deviates from expected behavior and can be used when assigning probabilities of aftershock occurrence. To address these demands and provide public access to standard methods employed in statistical seismology, we present well-documented, open-source JavaScript and Java software libraries for the on- and off-line analysis of seismicity. The JavaScript classes facilitate web-based asynchronous access to earthquake catalog data and provide a framework for in-browser display, analysis, and manipulation of catalog statistics; implementations of this framework will be made available on the USGS Earthquake Hazards website. The Java classes, in addition to providing tools for seismicity analysis, provide tools for modeling seismicity and generating synthetic catalogs. These tools are extensible and will be released as part of the open-source OpenSHA Commons library.
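
    As a small illustration of the catalog statistics mentioned (in Python rather than the JavaScript/Java libraries described), the sketch below estimates a maximum-likelihood Gutenberg-Richter b-value and rough Omori-Utsu parameters from event times; it is generic statistical-seismology code, not the OpenSHA Commons implementation.

```python
import numpy as np

def b_value_mle(magnitudes, m_complete):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965).
    For binned catalogues, replace m_complete by m_complete - bin_width/2 (Utsu)."""
    m = np.asarray(magnitudes)
    m = m[m >= m_complete]
    return np.log10(np.e) / (m.mean() - m_complete)

def omori_parameters(event_times_days, c=0.05):
    """Rough Omori-Utsu (K, p) estimate from daily aftershock counts via a
    log-log linear fit of rate against time since the mainshock."""
    t = np.asarray(event_times_days)
    edges = np.arange(0, int(np.ceil(t.max())) + 2)        # daily bins
    counts, _ = np.histogram(t, bins=edges)
    mask = counts > 0
    mid = edges[:-1][mask] + 0.5
    slope, intercept = np.polyfit(np.log(mid + c), np.log(counts[mask]), 1)
    return np.exp(intercept), -slope                       # K (events/day), p

# Synthetic magnitudes drawn with a true b-value of 1.0 above completeness Mc = 2.5
rng = np.random.default_rng(7)
mags = 2.5 + rng.exponential(scale=1.0 / np.log(10.0), size=5000)
print("estimated b-value:", round(b_value_mle(mags, 2.5), 2))
```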

  15. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    NASA Astrophysics Data System (ADS)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversions for rupture parameters, such as the slip distribution and rupture history, permit estimation of the complex coseismic seafloor deformation. The most relevant coseismic source models from the numerous published seismic source studies are tested. The comparison of the predicted signals generated using both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  16. Could the collapse of a massive speleothem be the record of a large paleoearthquake?

    NASA Astrophysics Data System (ADS)

    Valentini, Alessandro; Pace, Bruno; Vasta, Marcello; Ferranti, Luigi; Colella, Abner; Vassallo, Maurizio

    2016-04-01

    Earthquake forecast and seismic hazard models are generally based on historical and instrumental seismicity. However, in regions characterized by moderate strain rates and by strong earthquakes with recurrence intervals longer than the time span covered by historical catalogues, different approaches are desirable to provide an independent test of seismologically-based models. We used non-conventional methods, based on so-called "Fragile Geological Features", and in particular cave speleothems, for assessing and improving existing paleoseismological databases and seismic hazard models. In this work we present a detailed study of a massive speleothem found collapsed in the Cola Cave (Abruzzo region, Central Apennines, Italy) that could be considered the record of a large paleoearthquake. Radiometric dating and geotechnical measurements were carried out to characterize the time of collapse and the mechanical properties of the speleothem. We performed theoretical and numerical modelling in order to estimate the horizontal ground acceleration required for the speleothem to fail. In particular, we used a finite element method (FEM), with the SAP2000 software, starting from the detailed geometry of the speleothem and its mechanical properties. We used several individual seismogenic source geometries and four different ground motion prediction equations to calculate the possible response spectra. We also carried out a seismic noise survey to understand and quantify any ground-motion amplification phenomena. The results suggest that two faults located in the Fucino area are the most probable causative sources of the speleothem collapse, recorded ~4-5 ka ago, with Mw = 6.8 ± 0.2. Our approach contributes to assessing the occurrence of past earthquakes, complementing classical paleoseismological trenching techniques, and to attributing the retrieved event to geometrically-defined individual seismogenic sources, which is a key contribution to improving fault-based seismic hazard models.

  17. Vital Signs: Seismology of Icy Ocean Worlds

    NASA Astrophysics Data System (ADS)

    Vance, Steven D.; Kedar, Sharon; Panning, Mark P.; Stähler, Simon C.; Bills, Bruce G.; Lorenz, Ralph D.; Huang, Hsin-Hua; Pike, W. T.; Castillo, Julie C.; Lognonné, Philippe; Tsai, Victor C.; Rhoden, Alyssa R.

    2018-01-01

    Ice-covered ocean worlds possess diverse energy sources and associated mechanisms that are capable of driving significant seismic activity, but to date no measurements of their seismic activity have been obtained. Such investigations could reveal the transport properties and radial structures, with possibilities for locating and characterizing trapped liquids that may host life and yielding critical constraints on redox fluxes and thus on habitability. Modeling efforts have examined seismic sources from tectonic fracturing and impacts. Here, we describe other possible seismic sources, their associations with science questions constraining habitability, and the feasibility of implementing such investigations. We argue, by analogy with the Moon, that detectable seismic activity should occur frequently on tidally flexed ocean worlds. Their ices fracture more easily than rocks and dissipate more tidal energy than the <1 GW of the Moon and Mars. Icy ocean worlds also should create less thermal noise due to their greater distance and consequently smaller diurnal temperature variations. They also lack substantial atmospheres (except in the case of Titan) that would create additional noise. Thus, seismic experiments could be less complex and less susceptible to noise than prior or planned planetary seismology investigations of the Moon or Mars.

  18. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including hanging wall and directivity effects) within modern ground motion prediction equations, can have an influence on the seismic hazard at a site. Yet we also illustrate the conditions under which these effects may be partially tempered when considering the full uncertainty in rupture behaviour within the fault system. The third challenge is the development of efficient means for representing both aleatory and epistemic uncertainties from active fault models in PSHA. In implementing state-of-the-art seismic hazard models into OpenQuake, such as those recently undertaken in California and Japan, new modeling techniques are needed that redefine how we treat interdependence of ruptures within the model (such as mutual exclusivity), and the propagation of uncertainties emerging from geology. Finally, we illustrate how OpenQuake, and GEM's additional toolkits for model preparation, can be applied to address long-standing issues in active fault modeling in PSHA. These include constraining the seismogenic coupling of a fault and the partitioning of seismic moment between the active fault surfaces and the surrounding seismogenic crust. We illustrate some of the possible roles that geodesy can play in the process, but highlight where this may introduce new uncertainties and potential biases into the seismic hazard process, and how these can be addressed.
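
    As one concrete example of the kind of calculation involved in reconciling geologic or geodetic slip rates with earthquake rates on a fault source, the sketch below balances the moment accumulation rate against a characteristic-magnitude recurrence; it is a generic, hedged illustration with assumed values, not the OpenQuake source typology itself.

```python
import numpy as np

def moment_balanced_rate(area_km2, slip_rate_mm_yr, char_magnitude,
                         coupling=0.9, shear_modulus_pa=3.0e10):
    """
    Annual rate of characteristic earthquakes such that their cumulative seismic
    moment matches the geologic/geodetic moment accumulation rate on the fault.
    """
    area_m2 = area_km2 * 1.0e6
    slip_m_yr = slip_rate_mm_yr * 1.0e-3
    moment_rate = coupling * shear_modulus_pa * area_m2 * slip_m_yr   # N*m per year
    m0_char = 10.0 ** (1.5 * char_magnitude + 9.05)                   # Hanks & Kanamori (1979)
    return moment_rate / m0_char

# Illustrative fault: 600 km^2 rupture area, 1.5 mm/yr slip rate, Mchar = 6.8
rate = moment_balanced_rate(600.0, 1.5, 6.8)
print(f"rate = {rate:.5f} /yr  (recurrence ~ {1.0 / rate:.0f} yr)")
```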

  19. SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.

    2013-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.
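
    The kriging component can be illustrated with a one-dimensional Gaussian-process regression of a signal property (say, log coda amplitude) against source-station distance, which interpolates historical observations and returns growing uncertainty away from the data. The kernel choice, hyperparameters and synthetic data below are assumptions for illustration only; SIG-VISA's actual models are richer and multivariate, and also include learned physical trends.

      import numpy as np

      def gp_predict(x_train, y_train, x_new, length_scale=200.0, sigma_f=1.0, sigma_n=0.1):
          """Gaussian-process (kriging) interpolation with a squared-exponential covariance."""
          def kernel(a, b):
              return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length_scale**2)
          K = kernel(x_train, x_train) + sigma_n**2 * np.eye(x_train.size)
          Ks = kernel(x_new, x_train)
          alpha = np.linalg.solve(K, y_train)
          mean = Ks @ alpha
          # Predictive variance: prior variance minus the part explained by the data.
          var = sigma_f**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
          return mean, np.sqrt(np.maximum(var, 0.0))

      # Hypothetical historical log-amplitudes versus distance (km), with a weak linear trend.
      dist = np.array([100.0, 250.0, 400.0, 650.0, 900.0])
      log_amp = -0.003 * dist + 0.2 * np.random.default_rng(0).standard_normal(dist.size)
      mean, std = gp_predict(dist, log_amp, np.linspace(50.0, 1000.0, 5))
      print(mean, std)   # uncertainty is smallest near the historical observations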

  20. A rapid estimation of near field tsunami run-up

    USGS Publications Warehouse

    Riquelme, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify the knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well defined geometry, e.g., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003 Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  1. Seismogenic zones and attenuation laws for probabilistic seismic hazard assessment in low deformation area

    NASA Astrophysics Data System (ADS)

    Le Goff, Boris

    This work aims at a more objective, data-driven Probabilistic Seismic Hazard Analysis (PSHA), rather than the subjective methodologies that are currently used. This study focuses particularly on the definition of the seismic sources, through the seismotectonic zoning, and on the determination of historical earthquake locations. An important step in Probabilistic Seismic Hazard Analysis consists in defining the seismic source model. Such a model expresses the association of the seismicity characteristics with the tectonically active geological structures evidenced by seismotectonic studies. Given that most faults in low-seismicity regions are not characterized well enough, the source models are generally defined as areal zones, delimited by finite boundary polygons, within which the seismicity and the geological features are deemed homogeneous (e.g., focal depth, seismicity rate). Besides the lack of data (short period of instrumental seismicity), such a method raises several problems for regions with low seismic activity: 1) a large sensitivity of the resulting hazard maps to the location of zone boundaries, while these boundaries are set by expert decisions; 2) the zoning cannot represent any variability or structural complexity in the seismic parameters; 3) the seismicity rate is distributed throughout the zone and the location of the information that determined it is lost. We investigate an alternative approach to the seismotectonic zoning, with three main objectives: 1) obtaining a reproducible method that 2) preserves the information on the sources and extent of the uncertainties, so that they can be propagated (through Ground Motion Prediction Equations onto the hazard maps), and that 3) redefines the seismic source concept to better reflect our knowledge of the seismogenic structures and of earthquake clustering. To do so, Bayesian methods are favored. First, a generative model with two zones, differentiated by two different surface activity rates, was developed, creating synthetic catalogs drawn from a Poisson distribution as the occurrence model, a truncated Gutenberg-Richter law as the magnitude-frequency relationship and a uniform spatial distribution. Inference on this model makes it possible to assess the minimum number of data, nmin, required in an earthquake catalog to recover the activity rates of both zones and the limit between them, with some level of accuracy. In this Bayesian model, the earthquake locations are essential. Consequently, these data have to be obtained with the best accuracy possible. The main difficulty is to reduce the location uncertainty of historical earthquakes. We propose to use the method of Bakun and Wentworth (1997) to re-estimate the epicentral region of these events. This method directly uses the intensity data points rather than isoseismal lines drawn by experts. The significant advantage of directly using individual intensity observations is that the procedures are explicit and hence the results are reproducible. Such a method provides an estimate of the epicentral region with confidence levels appropriate to the number of intensity data points used. As an example, we applied this methodology to the 1909 Benavente event, because of its controversial location and the particular shape of its isoseismal lines. A new location of the 1909 Benavente event is presented in this study and the epicentral region of this event is expressed with confidence levels related to the number of intensity data points.
    This epicentral region is further improved by the development of a new intensity-distance attenuation law, appropriate for mainland Portugal. This law is the first one for mainland Portugal developed as a function of magnitude (Mw) rather than of the subjective epicentral intensity. From the logarithmic regression of each event, we define the functional form of the attenuation law and obtain I = -1.9438 ln(D) + 4.1 Mw - 9.5763, valid for 4.4 ≤ Mw ≤ 6.2. Using this attenuation law, we obtained a magnitude estimate for the 1909 Benavente event that is in good agreement with the instrumental one. The epicentral region estimate was also improved, with tighter confidence-level contours and a minimum of rms[MI] closer to the epicenter estimate of Karnik (1969). Finally, this two-zone model will serve as a reference for comparison with other models that will incorporate other available data. Nevertheless, future improvements are needed to obtain a full seismotectonic zoning. We emphasize that such an approach is reproducible once the priors and data sets are chosen. Indeed, the objective is to incorporate expert opinions as priors and to avoid relying on expert decisions; the products are then directly the result of the inference, when only one model is considered, or of a combination of models in the Bayesian sense.
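
    A minimal sketch of how the quoted attenuation law could be evaluated and inverted for magnitude. The coefficients are those given above; the distance units (kilometres), the function names and the simple averaged magnitude estimate are illustrative assumptions, not the thesis workflow.

      import numpy as np

      def intensity(distance_km, mw):
          """Attenuation law quoted above: I = -1.9438 ln(D) + 4.1 Mw - 9.5763 (4.4 <= Mw <= 6.2)."""
          return -1.9438 * np.log(distance_km) + 4.1 * mw - 9.5763

      def estimate_mw(distances_km, intensities):
          """Illustrative magnitude estimate: invert the law for Mw at each intensity data point and average."""
          mw_samples = (np.asarray(intensities) + 1.9438 * np.log(np.asarray(distances_km)) + 9.5763) / 4.1
          return mw_samples.mean()

      # Synthetic intensity data points at several epicentral distances for a Mw 6.0 event,
      # with some observational scatter added.
      d = np.array([10.0, 30.0, 80.0, 150.0])
      obs = intensity(d, 6.0) + np.random.normal(0.0, 0.3, d.size)
      print(estimate_mw(d, obs))   # close to 6.0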

  2. The influence of maximum magnitude on seismic-hazard estimates in the Central and Eastern United States

    USGS Publications Warehouse

    Mueller, C.S.

    2010-01-01

    I analyze the sensitivity of seismic-hazard estimates in the central and eastern United States (CEUS) to maximum magnitude (mmax) by exercising the U.S. Geological Survey (USGS) probabilistic hazard model with several mmax alternatives. Seismicity-based sources control the hazard in most of the CEUS, but data seldom provide an objective basis for estimating mmax. The USGS uses preferred mmax values of moment magnitude 7.0 and 7.5 for the CEUS craton and extended margin, respectively, derived from data in stable continental regions worldwide. Other approaches, for example analysis of local seismicity or judgment about a source's seismogenic potential, often lead to much smaller mmax. Alternative models span the mmax ranges from the 1980s Electric Power Research Institute/Seismicity Owners Group (EPRI/SOG) analysis. Results are presented as hazard ratios relative to the USGS national seismic hazard maps. One alternative model specifies mmax equal to moment magnitude 5.0 and 5.5 for the craton and margin, respectively, similar to EPRI/SOG for some sources. For 2% probability of exceedance in 50 years (about 0.0004 annual probability), the strong mmax truncation produces hazard ratios equal to 0.35-0.60 for 0.2-sec spectral acceleration, and 0.15-0.35 for 1.0-sec spectral acceleration. Hazard-controlling earthquakes interact with mmax in complex ways. There is a relatively weak dependence on probability level: hazard ratios increase 0-15% for 0.002 annual exceedance probability and decrease 5-25% for 0.00001 annual exceedance probability. Although differences at some sites are tempered when faults are added, mmax clearly accounts for some of the discrepancies that are seen in comparisons between USGS-based and EPRI/SOG-based hazard results.
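
    The interaction between mmax and hazard can be illustrated with a doubly truncated Gutenberg-Richter model: lowering mmax removes the largest events and reduces the rate of exceeding strong ground motions. The sketch below, with assumed a- and b-values, a single fixed distance and a made-up lognormal ground-motion model, only shows the mechanics of the truncation; it is not the USGS or EPRI/SOG model.

      import numpy as np
      from scipy.stats import norm

      def truncated_gr_rates(mag_bins, a=4.0, b=1.0, mmin=4.5, mmax=7.5):
          """Annual rates in magnitude bins for a doubly truncated Gutenberg-Richter model
          (the a- and b-values here are arbitrary illustrative numbers)."""
          beta = b * np.log(10.0)
          lam_total = 10.0 ** (a - b * mmin)  # annual rate of M >= mmin
          cdf = lambda m: (1 - np.exp(-beta * (m - mmin))) / (1 - np.exp(-beta * (mmax - mmin)))
          edges = np.append(mag_bins, mmax)
          return lam_total * np.diff(np.clip(cdf(edges), 0.0, 1.0))

      def exceedance_rate(pga_g, mmax):
          """Toy hazard integral at a fixed distance with a hypothetical lognormal GMPE."""
          mags = np.arange(4.5, mmax, 0.1)            # bin lower edges
          rates = truncated_gr_rates(mags, mmax=mmax)
          ln_median = -8.1 + 0.92 * (mags + 0.05)     # invented magnitude scaling of ln(PGA in g)
          p_exceed = 1.0 - norm.cdf((np.log(pga_g) - ln_median) / 0.6)
          return np.sum(rates * p_exceed)

      # Lowering mmax from 7.5 to 5.5 sharply reduces the rate of exceeding 0.2 g in this toy model.
      print(exceedance_rate(0.2, mmax=7.5), exceedance_rate(0.2, mmax=5.5))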

  3. Seismic Waves, 4th order accurate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-08-16

    SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on a realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format, or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution as well as the material model along 2-D grid planes.

  4. Source-Type Inversion of the September 03, 2017 DPRK Nuclear Test

    NASA Astrophysics Data System (ADS)

    Dreger, D. S.; Ichinose, G.; Wang, T.

    2017-12-01

    On September 3, 2017, the DPRK announced a nuclear test at their Punggye-ri site. The explosion registered an mb of 6.3 and was well recorded by global and regional seismic networks. We apply the source-type inversion method (e.g., Ford et al., 2012; Nayak and Dreger, 2015) and the MDJ2 seismic velocity model (Ford et al., 2009) to invert low frequency (0.02 to 0.05 Hz) complete three-component waveforms and first-motion polarities to map the goodness of fit in source-type space. We have used waveform data from the New China Digital Seismic Network (BJT, HIA, MDJ), the Korean Seismic Network (TJN), and the Global Seismograph Network (INCN, MAJO). From this analysis, the event discriminates as an explosion. For a pure explosion model, we find a scalar seismic moment of 5.77e+16 Nm (Mw 5.1); however, this model fails to fit the large Love waves registered on the transverse components. The best fitting complete solution finds a total moment of 8.90e+16 Nm (Mw 5.2) that is decomposed as 53% isotropic, 40% double-couple, and 7% CLVD, although the range of isotropic moment from the source-type analysis indicates that it could be as high as 60-80%. The isotropic moment in the source-type inversion is 4.75e+16 Nm (Mw 5.05). Assuming elastic moduli from model MDJ2, the explosion cavity radius is approximately 51 m, and the yield estimated using Denny and Johnson (1991) is 246 kt. Approximately 8.5 minutes after the blast a second seismic event was registered, which is best characterized as a vertically closing horizontal crack, perhaps representing the partial collapse of the blast cavity and/or a service tunnel. The total moment of the collapse is 3.34e+16 Nm (Mw 4.95). The volumetric moment of the collapse is 1.91e+16 Nm, approximately 1/3 to 1/2 of the explosive moment. German TerraSAR-X observations of deformation (Wang et al., 2017) reveal large radial outward motions consistent with expected deformation for an explosive source, but lack significant vertical motions above the shot point. Forward elastic half-space modeling of the static deformation field indicates that the combination of the explosion and collapse explains the observed deformation to first order. We will present these results as well as a two-step inversion of the explosion in an attempt to better resolve the nature of the non-isotropic radiation of the event.
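
    The moment magnitudes quoted above follow from the standard moment-magnitude relation Mw = (2/3) log10(M0) - 6.07 with M0 in N·m (Hanks and Kanamori, 1979); the short check below reproduces them. The cavity-radius and yield numbers depend on the MDJ2 moduli and the Denny and Johnson (1991) relation and are not recomputed here.

      import math

      def moment_to_mw(m0_nm):
          """Moment magnitude from scalar seismic moment in N*m."""
          return (2.0 / 3.0) * math.log10(m0_nm) - 6.07

      for label, m0 in [("pure explosion", 5.77e16),
                        ("full solution", 8.90e16),
                        ("isotropic part", 4.75e16),
                        ("collapse", 3.34e16)]:
          print(f"{label}: Mw {moment_to_mw(m0):.2f}")
      # Prints roughly Mw 5.1, 5.2, 5.05 and 4.95, matching the values quoted in the abstract.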

  5. Improving the seismic small-scale modelling by comparison with numerical methods

    NASA Astrophysics Data System (ADS)

    Pageot, Damien; Leparoux, Donatienne; Le Feuvre, Mathieu; Durand, Olivier; Côte, Philippe; Capdeville, Yann

    2017-10-01

    The potential of experimental seismic modelling at reduced scale provides an intermediate step between numerical tests and geophysical campaigns on field sites. Recent technologies such as laser interferometers offer the opportunity to acquire data without any coupling effects. This kind of device is used in the Mesures Ultrasonores Sans Contact (MUSC) measurement bench, for which an automated support system makes it possible to generate multisource and multireceiver seismic data at laboratory scale. Experimental seismic modelling would become a valuable tool, providing a value-added stage in the validation of imaging processes, if (1) the experimental measurement chain is perfectly mastered, and thus the experimental data are perfectly reproducible with a numerical tool, and if (2) the effective source is reproducible along the measurement setup. These two aspects of a quantitative validation, for devices with piezoelectric sources and a laser interferometer, have not yet been quantitatively studied in the published literature. Thus, as a new stage for the experimental modelling approach, these two key issues are tackled in this paper in order to precisely define the quality of the experimental small-scale data provided by the MUSC bench, which are available to the scientific community. These two steps of quantitative validation are dealt with apart from any imaging technique, so that geophysicists who want to use such data (delivered as free data) can know their quality precisely before testing any imaging technique. First, in order to overcome the 2-D-3-D correction usually applied in seismic processing when comparing 2-D numerical data with 3-D experimental measurements, we quantitatively refined the comparison between numerical and experimental data by generating accurate experimental line sources, avoiding the need for a geometrical-spreading correction of 3-D point-source data. The comparison with 2-D and 3-D numerical modelling is based on the Spectral Element Method. The approach shows the relevance of building a line source by sampling several source points, except for boundary effects at later arrival times. Indeed, the experimental results reproduce the amplitude behaviour and the π/4 phase delay of a line source in the same manner as the numerical data. In contrast, the 2-D corrections applied to 3-D data show discrepancies that are larger for experimental data than for numerical ones, due to the source wavelet shape and interferences between different arrivals. The experimental results from the approach proposed here show that these discrepancies are avoided, especially for the reflected echoes. Concerning the second point, which aims to assess the experimental reproducibility of the source, correlation coefficients of recordings from a repeated source impact on a homogeneous model are calculated. The quality of the results (correlation coefficients higher than 0.98) allows the calculation of a mean source wavelet by inversion of a mean data set. Results obtained on a more realistic model, simulating clays over limestones, confirm the reproducibility of the source impact.
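
    A schematic illustration, under strongly simplified assumptions (homogeneous medium, far-field 1/r amplitude decay, Ricker wavelet), of how summing point sources along a line emulates a 2-D line source, including the approximately 1/sqrt(offset) geometrical spreading mentioned above. The geometry, velocity and wavelet are invented for the sketch and are not the MUSC bench configuration.

      import numpy as np

      def ricker(t, f0=100e3):
          """Ricker wavelet with peak frequency f0 (Hz); ultrasonic range, as used at reduced scale."""
          a = (np.pi * f0 * t) ** 2
          return (1 - 2 * a) * np.exp(-a)

      c = 2700.0                     # assumed P velocity of the small-scale model, m/s
      t = np.arange(0, 2e-3, 1e-7)   # 2 ms of signal, 0.1 microsecond sampling
      src_y = np.linspace(-0.3, 0.3, 121)        # point sources sampling a 60-cm line
      receivers = np.array([0.05, 0.10, 0.20])   # offsets perpendicular to the line, in metres

      for x in receivers:
          r = np.sqrt(x**2 + src_y**2)                       # distance to each point source
          trace = sum(ricker(t - ri / c) / ri for ri in r)   # superposition of 3-D point sources
          print(f"offset {x:.2f} m, peak amplitude {np.abs(trace).max():.1f}")
      # The peak amplitudes decay approximately as 1/sqrt(offset), the 2-D (line-source)
      # behaviour, rather than the 1/offset decay of a single point source.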

  6. Change Detection via Cross-Borehole and VSP Seismic Surveys for the Source Physics Experiments (SPE) at the Nevada National Security Site (NNSS)

    NASA Astrophysics Data System (ADS)

    Knox, H. A.; Abbott, R. E.; Bonal, N. D.; Aldridge, D. F.; Preston, L. A.; Ober, C.

    2012-12-01

    In support of the Source Physics Experiment (SPE) at the Nevada National Security Site (NNSS), we have conducted two cross-borehole seismic experiments in the Climax Stock. The first experiment was conducted prior to the third shot in this multi-detonation program using two available boreholes and the shot hole, while the second experiment was conducted after the shot using four of the available boreholes. The first study focused on developing a well-characterized 2D pre-explosion Vp model including two VSPs and a seismic refraction survey, as well as quantifying baseline waveform similarity at reoccupied sites. This was accomplished by recording both "sparker" and accelerated weight drop sources on a hydrophone string and surface geophones. In total, more than 18,500 unique source-receiver pairs were acquired during this testing. In the second experiment, we reacquired approximately 8,800 source-receiver pairs and performed a cross-line survey allowing for a 3D post-explosion Vp model. The data acquired from the reoccupied sites were processed using cross-correlation methods and change detection methodologies, including comparison of the tomographic images. The survey design and subsequent processing provided an opportunity to investigate seismic wave propagation through damaged rock. We also performed full waveform forward modelling for a granitic body hosting a perched aquifer. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  7. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than the more commonly used ℓp norms, which measure misfit sample by sample from distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
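
    A compact sketch of the decorrelation misfit D = 1 - CC and of a log-normal likelihood built on it, in the spirit of the approach described above. The noise parameters (mu, sigma) are placeholders; the study derives them empirically, as functions of SNR and station geometry, from the reference set of supervised solutions.

      import numpy as np

      def decorrelation(obs, syn):
          """D = 1 - CC, with CC the normalized zero-lag cross-correlation of two waveforms."""
          obs = obs - obs.mean()
          syn = syn - syn.mean()
          cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
          return 1.0 - cc

      def log_likelihood(d_values, mu=-3.0, sigma=1.0):
          """Log-normal log-likelihood of per-station decorrelations (mu, sigma assumed here)."""
          d = np.asarray(d_values)
          return np.sum(-np.log(d * sigma * np.sqrt(2 * np.pi))
                        - (np.log(d) - mu) ** 2 / (2 * sigma ** 2))

      # Toy usage: two noisy copies of the same wavelet give a small D.
      t = np.linspace(-1, 1, 201)
      w = np.exp(-40 * t**2)
      d = decorrelation(w, w + 0.05 * np.random.randn(t.size))
      print(d, log_likelihood([d]))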

  8. A numerical and experimental investigation on seismic anisotropy of Finero Peridotite, Ivrea-Verbano Zone, northern Italy

    NASA Astrophysics Data System (ADS)

    Zhong, Xin; Frehner, Marcel; Zappone, Alba; Kunze, Karsten

    2014-05-01

    We present a combined experimental and numerical study on Finero Peridotite to investigate the major factors creating its seismic anisotropy. We extrapolate the ultrasonic seismic wave velocity measured in a hydrostatic pressure vessel to 0 MPa and 250 MPa confining pressure to compare with numerical simulations at atmospheric pressure and to restore the velocity at in-situ lower crustal conditions, respectively. A linear relation between confining pressure and seismic velocity above 80 MPa reveals the intrinsic mechanical property of the bulk rock without the interference of cracks. To visualize the crystallographic preferred orientation (CPO) we use the electron backscatter diffraction (EBSD) method and create crystallographic orientation maps and pole figures. The orientation maps also reveal the shape preferred orientation (SPO). We found that a very weak CPO but a significant SPO exist in most of the peridotite. The Voigt and Reuss bounds as well as the Hill average (VRH) are calculated from the EBSD data to visualize seismic velocity and to calculate anisotropy in the form of velocity pole figures. We perform finite element (FE) simulations of wave propagation on the EBSD crystallographic orientation maps to calculate the effective wave velocity at different propagation angles, and hence estimate the anisotropy numerically. In fracture-free models the FE simulation results agree well with the Hill average. For one sample containing fractures, the FE simulation yields a minimum velocity similar to the laboratory measurement, which lies outside the Voigt-Reuss bounds. This is a warning that care has to be taken when using VRH averages in fractured rocks. All three velocity estimates (hydrostatic pressure vessel, VRH average, and FE simulation) result in equally weak seismic anisotropy. This is mainly the consequence of the weak CPO. Although the SPO is significantly stronger, it has a minor influence on anisotropy. Hydrous minerals influence the seismic anisotropy only when their modal composition is large enough to allow waves to propagate preferentially through them. Unlike hornblende, phlogopite is not proven to be a major source of seismic anisotropy due to its small modal composition. Seismic velocity is also influenced by the source frequency distribution. A lower-frequency source in the FE simulations results in a lower effective velocity regardless of sample orientation. The frequency spectrum of the propagating wave is modified from source to receiver due to scattering at the mineral grains, leading to effective negative attenuation factors peaked at around 1-3 MHz depending on the source spectrum. However, compared with other factors, such as CPO, SPO, fractures, or hydrous mineral phases, the effect of the source frequency distribution is minor, but may be influential when extrapolated to seismic frequencies (Hz-kHz). This study provides a comprehensive method combining laboratory measurements, EBSD data, and numerical simulations to estimate seismic anisotropy. Future work may focus on modeling the influence of different pore fluids or more complex fracture geometries on seismic velocity and anisotropy. Acknowledgements: This work was supported by the Swiss National Science Foundation (project UPseis, 200021_143319).
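
    A minimal sketch of Voigt, Reuss, and Hill averaging of an elastic modulus for a mineral mixture; the phases, moduli and density below are illustrative placeholders, whereas the study averages the full anisotropic stiffness tensors over the EBSD orientation data.

      import numpy as np

      def voigt_reuss_hill(fractions, moduli):
          """Voigt (arithmetic), Reuss (harmonic) and Hill (mean of the two) averages of a modulus."""
          f = np.asarray(fractions, dtype=float)
          m = np.asarray(moduli, dtype=float)
          f = f / f.sum()
          voigt = np.sum(f * m)
          reuss = 1.0 / np.sum(f / m)
          return voigt, reuss, 0.5 * (voigt + reuss)

      # Hypothetical two-phase aggregate (shear moduli in GPa); numbers are only for illustration.
      v, r, h = voigt_reuss_hill([0.7, 0.3], [78.0, 44.0])
      rho = 3300.0                        # assumed density, kg/m^3
      vs_hill = np.sqrt(h * 1e9 / rho)    # shear-wave velocity from the Hill average
      print(v, r, h, vs_hill)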

  9. Analysis of induced seismicity at The Geysers geothermal field, California

    NASA Astrophysics Data System (ADS)

    Emolo, A.; Maercklin, N.; Matrullo, E.; Orefice, A.; Amoroso, O.; Convertito, V.; Sharma, N.; Zollo, A.

    2012-12-01

    Fluid injection, steam extraction, and reservoir stimulation in geothermal systems lead to induced seismicity. While in rare cases induced events may be large enough to pose a hazard, the microseismicity also provides information on the extent and the space-time varying properties of the reservoir. Therefore, microseismic monitoring is important, both for the mitigation of unwanted effects of industrial operations and for the continuous assessment of reservoir conditions. Here we analyze induced seismicity at The Geysers geothermal field in California, a vapor-dominated field with the top of the main steam reservoir some 1-3 km below the surface. Commercial exploitation began in the 1960s, and the seismicity increased with increasing field development. We focus our analyses on induced seismicity recorded between August 2007 and October 2011. Our calibrated waveform database contains some 15,000 events with magnitudes between 1.0 and 4.5 recorded by the LBNL Geysers/Calpine surface seismic network. We associated all data with events from the NCEDC earthquake catalog and re-picked first arrival times. Using selected events with at least 20 high-quality P-wave picks, we determined a minimum 1-D velocity model using VELEST. The well-constrained P-velocity model shows a sharp velocity increase at 1-2 km depth (from 3 to 5 km/s) and then a gradient-like trend down to about 5 km depth, where velocities reach values of 6-7 km/s. The station corrections show coherent, relatively high, positive travel time delays in the NW zone, indicating a strong lateral variation of the P-wave velocities. We determined an average Vp-to-Vs ratio of 1.67, which is consistent with estimates from other authors for the same time period. The events were relocated in the new model using a non-linear probabilistic method. The seismicity appears spatially diffuse in a 15 × 10 km² area elongated in the NW-SE direction, and earthquake depths range between 0 and 6 km. As in previous seismicity studies of this geothermal field, we find that events occurring in the NW sector are on average deeper than in the SE area. To infer the present stress regime, we computed focal mechanisms for a large set of events with M > 2, using P-wave first-arrival polarities. The fault-plane solutions show dominant strike-slip and normal faulting mechanisms, with P and T axes coherently oriented with the expected regional stress field for the area. We also determined the main seismic source parameters from a multi-step, iterative inversion of P-wave displacement spectra, assuming a four-parameter spectral model and a constant-Q attenuation mechanism. In particular, we computed seismic moments, source radii and stress drops. We observe a self-similar scaling of source parameters over the whole investigated magnitude range, with a nearly constant stress drop of 20 or 120 MPa depending on whether the Brune (1970) or the Madariaga (1976) source model is used, respectively.
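
    For orientation, the Brune (1970) source radius and the circular-crack stress drop follow the standard relations r = 2.34 β / (2π fc) and Δσ = 7 M0 / (16 r³). The sketch below applies them with an assumed corner frequency, moment and shear-wave velocity; it is not the multi-step spectral inversion used in the study, and the Madariaga model would give a smaller radius and hence a higher stress drop.

      import math

      def brune_radius_m(fc_hz, beta_ms):
          """Brune (1970) source radius from corner frequency fc and shear velocity beta."""
          return 2.34 * beta_ms / (2.0 * math.pi * fc_hz)

      def stress_drop_pa(m0_nm, radius_m):
          """Circular-crack stress drop: delta_sigma = 7 M0 / (16 r^3)."""
          return 7.0 * m0_nm / (16.0 * radius_m ** 3)

      # Illustrative M ~ 2 event (all values assumed, not taken from the study):
      m0 = 2.0e12      # N*m, roughly Mw 2.1
      beta = 2800.0    # m/s
      fc = 30.0        # Hz
      r = brune_radius_m(fc, beta)
      print(f"radius {r:.0f} m, stress drop {stress_drop_pa(m0, r) / 1e6:.1f} MPa")
      # Gives ~35 m and ~21 MPa, of the same order as the Brune-model stress drops reported above.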

  10. A generalization of the double-corner-frequency source spectral model and its use in the SCEC BBP validation exercise

    USGS Publications Warehouse

    Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.

    2014-01-01

    The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω2 source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
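
    One common way to write generalized double-corner-frequency ω² source spectra is sketched below: a multiplicative form and an additive form, each tending to M0 at low frequency and decaying as f⁻² at high frequency, with the balance between the two corner frequencies controlling the intermediate shape and high-frequency level. The exact parameterization, and the constraint tying the corners to the stress parameter in the paper, differ; this is only a schematic illustration.

      import numpy as np

      def double_corner_multiplicative(f, m0, fa, fb):
          """Product form: each factor falls off as 1/f at high frequency, giving an
          overall omega-squared decay; the low-frequency limit is m0."""
          return m0 / (np.sqrt(1.0 + (f / fa) ** 2) * np.sqrt(1.0 + (f / fb) ** 2))

      def double_corner_additive(f, m0, fa, fb, eps=0.5):
          """Additive form: weighted sum of two omega-squared shapes, with eps controlling
          the relative high-frequency level."""
          return m0 * ((1.0 - eps) / (1.0 + (f / fa) ** 2) + eps / (1.0 + (f / fb) ** 2))

      f = np.logspace(-2, 1.5, 200)   # 0.01 to ~30 Hz
      m0 = 1.0e18                     # N*m, roughly Mw 5.9
      spec_mult = double_corner_multiplicative(f, m0, fa=0.1, fb=1.0)
      spec_add = double_corner_additive(f, m0, fa=0.1, fb=1.0, eps=0.3)
      print(spec_mult[0] / m0, spec_add[0] / m0)   # both close to 1 at the lowest frequency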

  11. Low-frequency seismic events in a wider volcanological context

    NASA Astrophysics Data System (ADS)

    Neuberg, J. W.; Collombet, M.

    2006-12-01

    Low-frequency seismic events have been at the centre of attention for several years, particularly on volcanoes with highly viscous magmas. The ultimate aim is to detect changes in volcanic activity by identifying changes in the seismic behaviour in order to forecast an eruption, or, in the case of an ongoing eruption, to forecast the short- and long-term behaviour of the volcanic system. A major boost in recent years arose through several attempts at multi-parameter volcanic monitoring and modelling programs, which allowed multi-disciplinary groups of volcanologists to interpret seismic signals together with, e.g., ground deformation, stress field analysis and petrological information. This talk will give several examples of such multi-disciplinary projects, focussing on the joint modelling of seismic source processes for low-frequency events together with advanced magma flow models, and on the signs of magma movement in the deformation and stress field at the surface.

  12. From Magma Fracture to a Seismic Magma Flow Meter

    NASA Astrophysics Data System (ADS)

    Neuberg, J. W.

    2007-12-01

    Seismic swarms of low-frequency events occur during periods of enhanced volcanic activity and have been related to the flow of magma at depth. Often they precede a dome collapse on volcanoes such as Soufriere Hills, Montserrat, or Mt St Helens. This contribution is based on the conceptual model of magma rupture as a trigger mechanism. Several source mechanisms and radiation patterns at the focus of a single event are discussed. We investigate the accelerating event rate and seismic amplitudes during one swarm, as well as over a time period spanning several swarms. The seismic slip vector is then linked to magma flow parameters, resulting in estimates of magma flux for a variety of flow models such as plug flow, parabolic flow, or friction-controlled flow. In this way we try to relate conceptual models to quantitative estimates, which could lead to estimates of magma flux at depth from low-frequency seismic signals.

  13. Frequency-depth dependent spherical reflection response from the sea surface - A transmission experiment

    NASA Astrophysics Data System (ADS)

    Wehner, D.; Landrø, M.; Amundsen, L.; Westerdahl, H.

    2018-05-01

    In academia and industry there is increasing interest in generating and recording low seismic frequencies, which improve data quality, increase signal penetration depth and can be important for full-waveform inversion. The common marine seismic source in acquisition is the air gun, which is towed behind a vessel. The frequency content of the signal produced by the air gun mainly depends on its source depth, as there are two effects which are presumed to counteract each other. First, there is the oscillating air bubble generated by the air gun, which leads to more low frequencies for shallow source depths. Secondly, there is the interference of the downgoing wave with the first reflection from the sea surface, referred to as the ghost, which leads to more low frequencies for deeper source depths. It is still under debate whether it is beneficial to place the source shallow or deep to generate the strongest signal at frequencies below 5 Hz. Therefore, the ghost effect is studied in more detail by measuring the transmission at the water-air interface. We conduct experiments in a water tank where a small-volume seismic source is fired at different depths below the water surface to investigate how the ghost varies with frequency and depth. The signal from the seismic source is recorded with hydrophones in water and in air during the test to estimate the signal transmitted through the interface. In a second test, we perform experiments with an acoustic source located in air which is fired at different elevations above the water surface. The source in air is a starter gun and the signals are again recorded in water and in air. The measured data indicate an increasing transmission of the signal through the water-air interface when the source is closer to the water surface, which implies a decreasing reflection for sources close to the surface. The measured results are compared with modeled data and the existing theory. The observed increase in transmission for shallow source depths can be explained by the theory of a spherical wave front striking the interface instead of the assumption of a plane wave front. The difference can be important for frequencies below 1 Hz. The results suggest that deploying a few sources very shallow during marine seismic acquisition could be beneficial for these very low frequencies. In addition, the effect of a spherical wave front might be considered when modeling far-field signatures of seismic sources for frequencies below 1 Hz.
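
    Under the usual plane-wave, perfect-reflector assumption, the vertical-incidence source-ghost response is |1 - exp(i 4π f d / c)| = 2|sin(2π f d / c)|, which favours deeper sources at very low frequencies; the sketch below evaluates this textbook expression for a few source depths. The spherical-wave transmission effect measured in the tank experiment is precisely a departure from this plane-wave picture and is not modelled here.

      import numpy as np

      def ghost_gain(f_hz, depth_m, c=1500.0, r=-1.0):
          """Plane-wave source-ghost amplitude at vertical incidence for a source at depth d,
          water velocity c and sea-surface reflection coefficient r (assumed -1)."""
          return np.abs(1.0 + r * np.exp(1j * 4.0 * np.pi * f_hz * depth_m / c))

      f = np.array([0.5, 1.0, 2.0, 5.0])   # Hz, the very low frequencies discussed above
      for d in (3.0, 10.0, 30.0):          # candidate source depths in metres
          print(d, np.round(ghost_gain(f, d), 3))
      # In this idealized model the 30 m source radiates the strongest far-field signal below
      # ~5 Hz, whereas the tank measurements above suggest shallow sources lose less energy
      # through the surface than plane-wave theory predicts.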

  14. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point-source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
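
    A small sketch of generating a stochastic, self-affine slip distribution by spectral synthesis (random phases under a power-law amplitude spectrum) and rescaling it to a target seismic moment. The fall-off exponent, fault dimensions and rigidity are assumptions for illustration, and the profile is 1-D, whereas the paper's source model is 2-D and tied to far-field spectral constraints.

      import numpy as np

      def self_affine_slip(n=256, length_km=200.0, decay_exponent=2.0, seed=0):
          """1-D slip profile with |slip(k)| ~ k**-decay_exponent and random phases."""
          rng = np.random.default_rng(seed)
          k = np.fft.rfftfreq(n, d=length_km / n)     # wavenumbers, cycles per km
          amp = np.zeros_like(k)
          amp[1:] = k[1:] ** (-decay_exponent)        # power-law spectral fall-off
          phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
          slip = np.fft.irfft(amp * np.exp(1j * phase), n)
          return slip - slip.min()                    # keep slip non-negative

      def scale_to_moment(slip, target_m0, width_km=80.0, length_km=200.0, mu=3.0e10):
          """Rescale slip (m) so that mu * area * mean(slip) equals the target moment (N*m)."""
          area = width_km * 1e3 * length_km * 1e3
          return slip * target_m0 / (mu * area * slip.mean())

      slip = scale_to_moment(self_affine_slip(), target_m0=1.6e21)   # ~Mw 8.1, as in the test case
      print(slip.mean(), slip.max())   # heterogeneous slip, identical moment for every realization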

  15. Insights into asthenospheric anisotropy and deformation in Mainland China

    NASA Astrophysics Data System (ADS)

    Zhu, Tao

    2018-03-01

    Seismic anisotropy can provide direct constraints on asthenospheric deformation, which can also be induced by the inherent mantle flow within our planet. Mantle flow calculations have thus been an effective tool to probe asthenospheric anisotropy. To explore the source of seismic anisotropy, asthenospheric deformation and the effects of mantle flow on seismic anisotropy in Mainland China, mantle flow models driven by plate motion (plate-driven) and by a combination of plate motion and mantle density heterogeneity (plate-density-driven) are used to predict the fast polarization direction of shear wave splitting. Our results indicate that: (1) plate-driven or plate-density-driven mantle flow significantly affects the predicted fast polarization direction when compared with the simple asthenospheric flow commonly used in interpreting the asthenospheric source of seismic anisotropy, and thus new insights are presented; (2) plate-driven flow controls the fast polarization direction, while thermal mantle flow significantly affects the asthenospheric deformation rate and the local deformation direction; (3) asthenospheric flow is a non-negligible contributor to seismic anisotropy, and the asthenosphere is undergoing low, large or moderate shear deformation controlled by the strain model, the flow plane/flow direction model or both in most regions of central and eastern China; and (4) the asthenosphere is under more rapid extensional deformation in eastern China than in western China.

  16. Quantitative modeling of reservoir-triggered seismicity

    NASA Astrophysics Data System (ADS)

    Hainzl, S.; Catalli, F.; Dahm, T.; Heinicke, J.; Woith, H.

    2017-12-01

    Reservoir-triggered seismicity may occur in response to crustal stresses caused by the poroelastic response to the weight of the impounded water volume and by fluid diffusion. Several cases of high correlation have been found in the past decades. However, crustal stresses can be altered by many other processes, such as continuous tectonic stressing and coseismic stress changes. Because reservoir-triggered stresses decay quickly with distance, even tidal or rainfall-triggered stresses might be of similar size at depth. To account for simultaneous stress sources in a physically meaningful way, we apply a seismicity model based on calculated stress changes in the crust and laboratory-derived friction laws. Based on the observed seismicity, the model parameters can be determined by the maximum likelihood method. The model leads to quantitative predictions of the variations of the seismicity rate in space and time, which can be used for hypothesis testing and forecasting. For case studies in Talala (India), Val d'Agri (Italy) and Novy Kostel (Czech Republic), we show the comparison of predicted and observed seismicity, demonstrating the potential and limitations of the approach.
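
    One widely used link of this kind between stress and seismicity rate, in the same family as the laboratory-derived friction approach described above, is the Dieterich (1994) response to a sudden Coulomb stress step Δτ: R(t)/r = [(exp(-Δτ/Aσ) - 1) exp(-t/ta) + 1]⁻¹. The sketch below evaluates this expression with assumed parameter values; the actual model combines several simultaneous, time-varying stress sources and estimates its parameters by maximum likelihood.

      import numpy as np

      def dieterich_rate(t_days, dtau_mpa, a_sigma_mpa=0.05, ta_days=500.0):
          """Seismicity rate relative to the background rate after a Coulomb stress step,
          following Dieterich (1994); A*sigma and the decay time ta are assumed values."""
          gamma = (np.exp(-dtau_mpa / a_sigma_mpa) - 1.0) * np.exp(-t_days / ta_days) + 1.0
          return 1.0 / gamma

      t = np.array([1.0, 10.0, 100.0, 1000.0])      # days after the stress step
      print(dieterich_rate(t, dtau_mpa=0.1))        # positive step: transient rate increase
      print(dieterich_rate(t, dtau_mpa=-0.1))       # negative step (unloading): rate deficit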

  17. Gulf of Mexico and Caribbean Sea Data and Model Base Report

    DTIC Science & Technology

    1979-07-01

    The source levels and spectral characteristics of merchant ships, drill rigs, and seismic profiling sources are reasonably well known. Lacking...better data, fishing vessels are assumed to be 10 dB quieter than merchant ships; production platforms are assumed to be similar to drill rigs, corrected...scope of the problem presented by production platforms, mobile drill rigs, and seismic profilers. 5. Impact on Exercise Planning Offshore oil industry

  18. Toward seismic source imaging using seismo-ionospheric data

    NASA Astrophysics Data System (ADS)

    Rolland, L.; Larmat, C. S.; Mikesell, D.; Sladen, A.; Khelfi, K.; Astafyeva, E.; Lognonne, P. H.

    2014-12-01

    The worldwide coverage offered by global navigation satellite systems (GNSS) such as GPS, GLONASS or Galileo allows seismological measurements of a new kind. GNSS-derived total electron content (TEC) measurements can be especially useful to image seismically active zones that are not covered by conventional instruments. For instance, it has been shown that the Japanese dense GPS network GEONET was able to record images of the ionosphere response to the initial coseismic sea-surface motion induced by the great Mw 9.0 2011 Tohoku-Oki earthquake less than 10 minutes after the rupture initiation (Astafyeva et al., 2013). But earthquakes of lower magnitude, down to about 6.5, would also induce measurable ionospheric perturbations when GNSS stations are located less than 250 km from the epicenter. In order to make use of these new data, ionospheric seismology needs to develop accurate forward models so that we can invert for quantitative seismic source parameters. We will present our current understanding of the coupling mechanisms between the solid Earth, the ocean, the atmosphere and the ionosphere. We will also present the state of the art in the modeling of coseismic ionospheric disturbances using acoustic ray theory and a new 3D modeling method based on the Spectral Element Method (SEM). This latter numerical tool will allow us to incorporate lateral variations in the solid Earth properties, the bathymetry and the atmosphere, as well as realistic seismic source parameters. Furthermore, seismo-acoustic waves propagate in the atmosphere at a much slower speed (from 0.3 to ~1 km/s) than seismic waves propagate in the solid Earth. We are exploring the application of back-projection and time-reversal methods to TEC observations in order to retrieve the time and space characteristics of the acoustic emission in the seismic source area. We will first show modeling and inversion results with synthetic data. Finally, we will illustrate the imaging capability of our approach with, among other possible examples, the 2011 Mw 9.0 Tohoku-Oki earthquake, Japan, the 2012 Mw 7.8 Haida Gwaii earthquake, Canada, and the 2011 Mw 7.1 Van earthquake, Eastern Turkey.

  19. Diffusion approximation with polarization and resonance effects for the modelling of seismic waves in strongly scattering small-scale media

    NASA Astrophysics Data System (ADS)

    Margerin, Ludovic

    2013-01-01

    This paper presents an analytical study of the multiple scattering of seismic waves by a collection of randomly distributed point scatterers. The theory assumes that the energy envelopes are smooth, but does not require perturbations to be small, thereby allowing the modelling of strong, resonant scattering. The correlation tensor of seismic coda waves recorded at a three-component sensor is decomposed into a sum of eigenmodes of the elastodynamic multiple scattering (Bethe-Salpeter) equation. For a general moment tensor excitation, a total of four modes is necessary to describe the transport of seismic wave polarization. Their spatio-temporal dependence is given in closed analytical form. Two additional modes transporting exclusively shear polarizations may be excited by antisymmetric moment tensor sources only. The general solution converges towards an equipartitioned mixture of diffusing P and S waves, which allows the retrieval of the local Green's function from coda waves. The equipartition time is obtained analytically and the impact of absorption on Green's function reconstruction is discussed. The process of depolarization of multiply scattered waves and the resulting loss of information is illustrated for various seismic sources. It is shown that coda waves may be used to characterize the source mechanism up to lapse times of the order of a few mean free times only. In the case of resonant scatterers, a formula for the diffusivity of seismic waves incorporating the effect of energy entrapment inside the scatterers is obtained. Application of the theory to high-contrast media demonstrates that coda waves are more sensitive to slow than to fast velocity anomalies, by several orders of magnitude. Resonant scattering appears as an attractive physical phenomenon to explain the small values of the diffusion constant of seismic waves reported in volcanic areas.
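
    For orientation, the simplest isotropic diffusion approximation predicts a scalar coda energy envelope E(r,t) ∝ (4πDt)^(-3/2) exp(-r²/4Dt) exp(-t/t_abs); the closed-form modal solutions derived in the paper generalize this picture to coupled P and S polarizations. The snippet below, with an assumed diffusivity and absorption time, only illustrates the scalar envelope.

      import numpy as np

      def diffuse_envelope(r_km, t_s, diffusivity_km2_s=0.2, t_abs_s=50.0):
          """Scalar diffusion-approximation coda energy envelope at distance r and lapse time t."""
          d = diffusivity_km2_s
          return ((4.0 * np.pi * d * t_s) ** -1.5
                  * np.exp(-r_km ** 2 / (4.0 * d * t_s))
                  * np.exp(-t_s / t_abs_s))

      t = np.linspace(5.0, 120.0, 6)
      print(diffuse_envelope(5.0, t))   # the envelope rises to a peak, then decays with lapse time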

  20. Investigation of structural heterogeneity at the SPE site using combined P–wave travel times and Rg phase velocities

    DOE PAGES

    Rowe, Charlotte A.; Patton, Howard J.

    2015-10-01

    Here, we present analyses of the 2D seismic structure beneath Source Physics Experiments (SPE) geophone lines that extended radially at 100 m spacing from 100 to 2000 m from the source borehole. With seismic sources at only one end of the geophone lines, standard refraction profiling methods cannot resolve seismic velocity structures unambiguously. In previous work, we demonstrated overall agreement between body-wave refraction modeling and Rg dispersion curves for the least complex of the five lines. A more detailed inspection supports a 2D reinterpretation of the structure. We obtained Rg phase velocity measurements in both the time and frequency domains, then used iterative adjustment of the initial 1D body-wave model to predict Rg dispersion curves to fit the observed values. Our method applied to the most topographically severe of the geophone lines is supplemented with a 2D ray-tracing approach, whose application to P-wave arrivals supports the Rg analysis. In addition, midline sources will allow us to refine our characterization in future work.

  1. Inverting near-surface models from virtual-source gathers (SM Division Outstanding ECS Award Lecture)

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Vossen, Caron; Paulssen, Hanneke

    2017-04-01

    The Groningen gas field is a massive natural gas accumulation in the north-east of the Netherlands. Decades of production have led to significant compaction of the reservoir rock. The (differential) compaction is thought to have reactivated existing faults and to be the main driver of the induced seismicity. The potential damage at the surface is largely affected by the state of the near surface. Thin and soft sedimentary layers can lead to large amplifications. By measuring the wavefield at different depth levels, near-surface properties can be estimated directly from the recordings. Seismicity in the Groningen area is monitored primarily with a network of vertical arrays. In the nineties a network of 8 boreholes was deployed. Since 2015, this network has been expanded with 70 new boreholes. Each new borehole consists of an accelerometer at the surface and four downhole geophones with a vertical spacing of 50 m. We apply seismic interferometry to local seismicity, for each borehole individually. In doing so, we obtain the responses as if there were virtual sources at the lowest geophones and receivers at the other depth levels. From the retrieved direct waves and reflections, we invert for P- and S-velocity and Q models. We discuss different implementations of seismic interferometry and the subsequent inversion. The inverted near-surface properties are used to improve both the source locations and the hazard assessment.
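
    A bare-bones sketch of borehole seismic interferometry by deconvolution: the recording at the deepest geophone is treated as the virtual source and deconvolved (with water-level regularization) from a shallower recording to obtain the propagation response between the two levels. The array lengths, regularization constant and synthetic input are assumptions for illustration; the study also discusses other interferometry implementations.

      import numpy as np

      def deconvolution_interferometry(trace_upper, trace_lowest, eps=0.01):
          """Frequency-domain deconvolution of an upper-level recording by the deepest-level
          recording, yielding the response to a virtual source at the lowest geophone."""
          U = np.fft.rfft(trace_upper)
          D = np.fft.rfft(trace_lowest)
          water = eps * np.mean(np.abs(D) ** 2)          # water-level regularization
          H = U * np.conj(D) / (np.abs(D) ** 2 + water)
          return np.fft.irfft(H, n=trace_upper.size)

      # Synthetic check: the upper trace is the lower trace delayed by 20 samples plus noise,
      # so the deconvolved response should peak near sample 20 (the interval travel time).
      rng = np.random.default_rng(1)
      lower = rng.standard_normal(2048)
      upper = np.roll(lower, 20) + 0.1 * rng.standard_normal(2048)
      response = deconvolution_interferometry(upper, lower)
      print(int(np.argmax(response)))   # ~20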

  2. New seismic hazard maps for Puerto Rico and the U.S. Virgin Islands

    USGS Publications Warehouse

    Mueller, C.; Frankel, A.; Petersen, M.; Leyendecker, E.

    2010-01-01

    The probabilistic methodology developed by the U.S. Geological Survey is applied to a new seismic hazard assessment for Puerto Rico and the U.S. Virgin Islands. Modeled seismic sources include gridded historical seismicity, subduction-interface and strike-slip faults with known slip rates, and two broad zones of crustal extension with seismicity rates constrained by GPS geodesy. We use attenuation relations from western North American and worldwide data, as well as a Caribbean-specific relation. Results are presented as maps of peak ground acceleration and 0.2- and 1.0-second spectral response acceleration for 2% and 10% probabilities of exceedance in 50 years (return periods of about 2,500 and 500 years, respectively). This paper describes the hazard model and maps that were balloted by the Building Seismic Safety Council and recommended for the 2003 NEHRP Provisions and the 2006 International Building Code. © 2010, Earthquake Engineering Research Institute.

  3. Determining the sensitivity of the amplitude source location (ASL) method through active seismic sources: An example from Te Maari Volcano, New Zealand

    NASA Astrophysics Data System (ADS)

    Walsh, Braden; Jolly, Arthur; Procter, Jonathan

    2017-04-01

    Using active seismic sources on Tongariro Volcano, New Zealand, the amplitude source location (ASL) method is calibrated and optimized through a series of sensitivity tests. By applying a geologic medium velocity of 1500 m/s and an attenuation value of Q = 60 for surface waves, along with amplification factors computed from regional earthquakes, the ASL produced location discrepancies larger than 1.0 km horizontally and up to 0.5 km in depth. Through sensitivity tests on the input parameters, we show that the velocity and attenuation models have moderate to strong influences on the location results, but can be easily constrained. Changes in location are accommodated through either lateral or depth movements. Station corrections (amplification factors) and station geometry strongly affect the ASL locations both laterally and in depth. Calibrating the amplification factors using the active seismic source events reduced location errors for the sources by up to 50%.
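
    The ASL method is commonly written as A_i = A0 exp(-B r_i) / sqrt(r_i) with B = π f / (Q v) for surface waves, and the source is found by a grid search that minimizes the misfit of site-corrected amplitudes. The sketch below implements that generic formulation with the velocity and Q quoted above; the grid, frequency, station layout and amplitudes are invented for illustration and site amplification factors are omitted.

      import numpy as np

      V, Q, F = 1500.0, 60.0, 5.0              # m/s, quality factor, dominant frequency (Hz)
      B = np.pi * F / (Q * V)                  # surface-wave attenuation coefficient (1/m)

      def predicted_amplitude(src_xy, sta_xy, a0=1.0):
          """Amplitude decay: geometrical spreading (1/sqrt r) plus anelastic loss exp(-B r)."""
          r = np.linalg.norm(sta_xy - src_xy, axis=1)
          return a0 * np.exp(-B * r) / np.sqrt(r)

      def asl_grid_search(sta_xy, observed, grid_xy):
          """Return the grid node whose scaled amplitude pattern best fits the observations."""
          best, best_misfit = None, np.inf
          for node in grid_xy:
              pred = predicted_amplitude(node, sta_xy)
              a0 = np.sum(observed * pred) / np.sum(pred ** 2)   # least-squares source amplitude
              misfit = np.sum((observed - a0 * pred) ** 2)
              if misfit < best_misfit:
                  best, best_misfit = node, misfit
          return best

      # Hypothetical 5-station network and a 4 x 4 km search grid (coordinates in metres).
      stations = np.array([[0, 0], [1500, 200], [800, 1700], [-900, 1100], [400, -1300]], float)
      grid = np.array([[x, y] for x in range(-2000, 2001, 100) for y in range(-2000, 2001, 100)], float)
      true_src = np.array([600.0, 500.0])
      obs = predicted_amplitude(true_src, stations, a0=3.0)
      print(asl_grid_search(stations, obs, grid))   # recovers a node near (600, 500)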

  4. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and the seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.

  5. Active source monitoring at the Wenchuan fault zone: coseismic velocity change associated with aftershock event and its implication

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Ge, Hongkui; Wang, Baoshan; Hu, Jiupeng; Yuan, Songyong; Qiao, Sen

    2014-12-01

    With the improvement of seismic observation systems, more and more observations indicate that earthquakes may cause seismic velocity changes. However, the amplitude and spatial distribution of the velocity variation remain a controversial issue. Recent active source monitoring carried out adjacent to the Wenchuan Fault Scientific Drilling (WFSD) site revealed an unambiguous coseismic velocity change associated with a local Ms 5.5 earthquake. Here, we carry out forward modeling using a two-dimensional spectral element method to further investigate the amplitude and spatial distribution of the observed velocity change. The model is well constrained by results from seismic reflection and WFSD coring. Our model strongly suggests that the observed coseismic velocity change is localized within the ~120-m-wide fault zone rather than being caused by widespread dynamic strong ground shaking. A velocity decrease of ~2.0% within the fault zone is required to fit the observed travel time delay distribution, which is consistent with rock mechanical experiments and theoretical modeling.
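
    The size of the travel-time anomaly implied by such a localized change can be checked with a one-line calculation: crossing a zone of width L and baseline velocity v after a fractional velocity drop adds a delay of L/(v(1-dv)) - L/v. The width and percentage below are those quoted above; the baseline fault-zone velocity is an assumed placeholder.

      def fault_zone_delay(width_m=120.0, v_ms=3000.0, dv_frac=0.02):
          """Travel-time delay accumulated crossing a low-velocity zone of the given width
          after a fractional velocity decrease dv_frac (baseline velocity is assumed)."""
          return width_m / (v_ms * (1.0 - dv_frac)) - width_m / v_ms

      print(f"{fault_zone_delay() * 1e3:.2f} ms")   # ~0.8 ms for a 2% drop over 120 m at 3 km/s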

  6. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model to serve for long-term forecasting on timescales of years to decades for the European region.
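
    Two ingredients of such a model can be sketched compactly: the Aki (1965) maximum-likelihood b-value for a binned catalogue, and a Gaussian-kernel spatial density of epicentres. The completeness magnitude, bin width, kernel bandwidth and synthetic data below are placeholders; the actual model optimizes bandwidths and weights through retrospective forecast experiments and also includes fault moment-rate densities.

      import numpy as np

      def b_value_max_likelihood(mags, mc, dm=0.1):
          """Aki (1965) / Utsu maximum-likelihood b-value with a bin-width correction."""
          m = np.asarray(mags)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

      def kernel_density(epicenters_xy, grid_xy, bandwidth_km=30.0):
          """Smoothed epicentre density on a grid using an isotropic Gaussian kernel."""
          d2 = np.sum((grid_xy[:, None, :] - epicenters_xy[None, :, :]) ** 2, axis=2)
          k = np.exp(-0.5 * d2 / bandwidth_km ** 2) / (2 * np.pi * bandwidth_km ** 2)
          return k.sum(axis=1)

      rng = np.random.default_rng(2)
      mags = np.round(2.95 + rng.exponential(1.0 / np.log(10), 500), 1)   # synthetic catalogue, b close to 1
      print(b_value_max_likelihood(mags, mc=3.0))

      eq_xy = rng.uniform(0.0, 500.0, size=(200, 2))                      # epicentres in km
      grid = np.array([[100.0, 100.0], [250.0, 250.0], [400.0, 400.0]])
      print(kernel_density(eq_xy, grid))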

  7. Research on the spatial analysis method of seismic hazard for island

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Jiang, Jitong; Zheng, Qiuhong; Gao, Huiying

    2017-05-01

    Seismic hazard analysis (SHA) is a key component of earthquake disaster prevention for island engineering: its results provide parameters for seismic design at the site scale and are prerequisite work for the earthquake and comprehensive disaster prevention components of island conservation planning at the regional scale, during the exploitation and construction of both inhabited and uninhabited islands. We compare the applicability of existing seismic hazard analysis methods and analyse their limitations for islands. We then propose a specialized spatial analysis method of seismic hazard for islands (SAMSHI), based on spatial analysis tools in GIS and a fuzzy comprehensive evaluation model, to support further work on earthquake disaster prevention planning. The basic spatial database of SAMSHI includes fault data, historical earthquake records, geological data and Bouguer gravity anomaly data, which are the data sources for the 11 indices of the fuzzy comprehensive evaluation model; these indices are calculated by a spatial analysis model constructed in ArcGIS's Model Builder platform.
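
    For illustration only (not the SAMSHI weights or indices), a fuzzy comprehensive evaluation of the type named above reduces to a weighted composition of a membership matrix; the 11 index weights and membership values below are placeholders.

        import numpy as np

        # Placeholder weights for 11 hazard indices (e.g. fault density, historical
        # seismicity, geology, Bouguer gravity gradient, ...); they must sum to 1.
        w = np.full(11, 1.0 / 11.0)

        # Membership matrix R: each row gives the degree of membership of one index
        # in each of, say, four hazard classes (low, moderate, high, very high).
        rng = np.random.default_rng(1)
        R = rng.random((11, 4))
        R /= R.sum(axis=1, keepdims=True)

        # Weighted-average composition B = w . R (one of several fuzzy operators).
        B = w @ R
        hazard_class = ["low", "moderate", "high", "very high"][int(np.argmax(B))]
        print(B, "->", hazard_class)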

  8. Teamwork tools and activities within the hazard component of the Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.

    2013-05-01

    The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.

  9. Period-dependent source rupture behavior of the 2011 Tohoku earthquake estimated by multi period-band Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.

    2014-12-01

    Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained with different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from waveform data in multiple period bands using a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal rupture behavior of this event in more detail, we introduce a new fault surface model with finer sub-fault size and estimate source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of approximately 16 km × 16 km. The estimated source models in the three period bands show the following source image: (1) a first deep rupture off Miyagi at 0-60 s propagating down-dip and mostly radiating relatively short-period (10-25 s) seismic waves; (2) a shallow rupture off Miyagi at 45-90 s propagating up-dip with long duration and radiating long-period (50-100 s) seismic waves; (3) a second deep rupture off Miyagi at 60-105 s propagating down-dip and radiating longer-period seismic waves than the first deep rupture; and (4) a deep rupture off Fukushima at 90-135 s. The difference in the dominant period of seismic-wave radiation between the two deep ruptures off Miyagi may result from small-scale heterogeneities on the fault being removed by the first rupture. This difference can also be interpreted in terms of the concept of multi-scale dynamic rupture (Ide & Aochi, 2005).
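
    As a minimal illustration of splitting records into the period bands quoted above (not the authors' processing chain), the sketch below band-passes a synthetic trace into the 10-25 s, 25-50 s and 50-100 s bands with zero-phase Butterworth filters; the sampling rate and trace are assumptions.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 10.0                                  # assumed sampling rate (Hz)
        t = np.arange(0, 600, 1 / fs)              # 10-minute synthetic trace
        trace = np.random.default_rng(2).standard_normal(t.size)

        period_bands = [(10, 25), (25, 50), (50, 100)]   # seconds, as in the abstract
        filtered = {}
        for tmin, tmax in period_bands:
            # convert periods to corner frequencies (Hz): long period -> low frequency
            sos = butter(4, [1.0 / tmax, 1.0 / tmin], btype="bandpass", fs=fs, output="sos")
            filtered[(tmin, tmax)] = sosfiltfilt(sos, trace)   # zero-phase filtering

        for band, tr in filtered.items():
            print(band, "s band, rms =", np.sqrt(np.mean(tr ** 2)))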

  10. On the difficulties of detecting PP precursors

    NASA Astrophysics Data System (ADS)

    Lessing, Stephan; Thomas, Christine; Saki, Morvarid; Schmerr, Nicholas; Vanacore, Elizabeth

    2015-06-01

    The PP precursors are seismic waves that form from underside reflections of P waves off discontinuities in the upper mantle transition zone (MTZ). These seismic phases are used to map discontinuity topography, sharpness, and impedance contrasts; the resulting structural variations are then often interpreted as evidence for temperature and/or mineralogy variations within the mantle. The PP precursors, as well as other seismic phases, have been used to establish the global presence of seismic discontinuities at 410 and 660 km depth. Intriguingly, in more than 80 per cent of PP precursor observations the seismic wave amplitudes are significantly weaker than the amplitudes predicted by seismic reference models. Even more perplexing is the observation that 1-5 per cent of all earthquakes (i.e. 20-25 per cent of earthquakes with clear PP waveforms) do not show any evidence for the PP precursors from the discontinuities, even in the presence of well-developed PP waveforms. Non-detections are found in six different data sets consisting of tens to hundreds of events. We use synthetic modelling to examine a suite of factors that could be responsible for the absence of the PP precursors. The take-off angles for PP and the precursors differ by only 1.2-1.5°; thus source-related complexity would affect PP and the precursors alike. A PP wave attenuated in the upper mantle would increase the relative amplitude of the PP precursors. Attenuation within the transition zone could reduce precursor amplitudes, but this would be a regional phenomenon restricted to particular source-receiver geometries. We also find little evidence for deviations from the theoretical travel path of seismic rays expected for scattered arrivals. Factors that do have a strong influence include the stacking procedures used in seismic array techniques in the presence of large, interfering phases, the presence of topography on the discontinuities on the order of tens of kilometres, and 3-D lateral heterogeneity in the velocity and density changes with depth across the transition zone. We also compare the observed precursor amplitudes with seismic models from calculations of phase equilibria and find that a seismic velocity model derived from a pyrolite composition reproduces the data better than the currently available 1-D Earth models. This is largely because the pyrolite models produce a stronger minimum in the reflection coefficient across the epicentral distances where the reduction in the amplitudes of the PP precursors is observed. To suppress the precursors entirely in a small subset of earthquakes, other effects, such as localized discontinuity topography and seismic signal processing effects, are required in addition to the changed velocity model.

  11. TEM PSHA2015 Reliability Assessment

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Wang, Y. J.; Chan, C. H.; Ma, K. F.

    2016-12-01

    The Taiwan Earthquake Model (TEM) developed a new probabilistic seismic hazard analysis (PSHA) for determining the probability of exceedance (PoE) of ground motion over a specified period in Taiwan. To investigate the adequacy of the seismic source parameters adopted in the 2015 PSHA of the TEM (TEM PSHA2015), we conducted several tests of the seismic source models. The observed maximum peak ground accelerations (PGA) of the ML > 4.0 mainshocks in the 23-year period 1993-2015 were used to test the PGA predicted by the PSHA from the areal and subduction zone sources under the time-independent Poisson assumption. This comparison excluded the observations from the 1999 Chi-Chi earthquake, as this was the only earthquake associated with an identified active fault in the past 23 years. We used tornado diagrams to analyze the sensitivities of the source parameters to the ground motion values of the PSHA. This study showed that the predicted PGA for a 63% PoE in the 23-year period corresponded to the empirical PGA, and that the predicted numbers of PGA exceedances of a threshold value of 0.1 g were close to the observed numbers, confirming the applicability of the parameters for the areal and subduction zone sources. We adopted disaggregation analysis of the hazard map to determine the contribution of individual seismic sources to the hazard for six metropolitan cities in Taiwan. The sensitivity tests of the seismogenic structure parameters indicated that slip rate and maximum magnitude are the dominant factors for TEM PSHA2015. For the densely populated faults in SW Taiwan, maximum magnitude is more sensitive than slip rate, raising concern about possible multiple-segment ruptures with larger magnitudes in this area, which were not yet considered in TEM PSHA2015. The source category disaggregation also suggested that special attention to subduction zone earthquakes is necessary for long-period seismic hazard in northern Taiwan.
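
    The 63% PoE over 23 years quoted above follows directly from the time-independent Poisson assumption: a ground-motion level whose annual exceedance rate equals 1/23 per year has PoE = 1 - exp(-rate x t) ≈ 0.63 over t = 23 yr. A minimal check (values purely illustrative apart from the 23-yr window):

        import math

        t_years = 23.0                     # observation window used in the test
        annual_rate = 1.0 / t_years        # exceedance rate whose return period equals the window
        poe = 1.0 - math.exp(-annual_rate * t_years)
        print(f"PoE over {t_years:.0f} yr at a 23-yr return period: {poe:.2f}")   # ~0.63

        # The same relation gives the familiar design levels, e.g. 10% in 50 yr:
        rate_10_in_50 = -math.log(1.0 - 0.10) / 50.0
        print(f"10% in 50 yr -> annual rate {rate_10_in_50:.5f} (return period ~{1/rate_10_in_50:.0f} yr)")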

  12. The analysis and interpretation of very-long-period seismic signals on volcanoes

    NASA Astrophysics Data System (ADS)

    Sindija, Dinko; Neuberg, Jurgen; Smith, Patrick

    2017-04-01

    The study of very long period (VLP) seismic signals became possible with the widespread use of broadband instruments. VLP seismic signals are caused by pressure transients in the volcanic edifice and have periods ranging from several seconds to several minutes. For the VLP events recorded in March 2012 and 2014 at Soufriere Hills Volcano, Montserrat, we model the ground displacement using several source time functions: a step function based on the Richards growth equation, a Küpper wavelet, and a damped sine wave, to which an instrument response is then applied. In this way we obtain a synthetic velocity seismogram that is directly comparable to the data. After the full vector field of ground displacement is determined, we model the source mechanism to determine the relationship between the source mechanism and the observed VLP waveforms. The emphasis of the research is on how different VLP waveforms are related to the volcanic environment and the instrumentation used, and on the processing steps needed in this low-frequency band to get the most out of broadband instruments.
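
    Purely for illustration (the parameters, and the omission of the instrument response, are assumptions), two of the displacement source-time functions named above can be generated and differentiated to a velocity trace as follows; the Richards (generalized logistic) curve is used as the step-like function.

        import numpy as np

        dt = 0.1
        t = np.arange(0.0, 120.0, dt)

        # Step-like displacement from a Richards (generalized logistic) growth curve.
        A, k, t0, nu = 1.0, 0.2, 40.0, 1.5     # illustrative amplitude, growth rate, onset, shape
        step = A / (1.0 + np.exp(-k * (t - t0))) ** (1.0 / nu)

        # Damped sinusoid, another candidate source-time function from the abstract.
        f0, tau = 1.0 / 30.0, 25.0
        damped = np.exp(-(t - t0) / tau) * np.sin(2 * np.pi * f0 * (t - t0))
        damped[t < t0] = 0.0

        # Ground velocity (what a broadband sensor records, to first order) is the
        # time derivative of displacement; the instrument response is omitted here.
        vel_step = np.gradient(step, dt)
        vel_damped = np.gradient(damped, dt)
        print(vel_step.max(), vel_damped.max())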

  13. Shallow Refraction and Rg Analysis at the Source Physics Experiment Site

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Carmichael, J. D.; Patton, H. J.; Snelson, C. M.; Coblentz, D. D.; Larmat, C. S.; Yang, X.

    2014-12-01

    We present analyses of the two-dimensional (2D) seismic structure beneath the Source Physics Experiments (SPE) geophone lines, which extended 100 to 2000 m from the source borehole with 100 m spacing. With seismic sources provided only at one end of the geophone lines, standard refraction profiling methods are unable to resolve the seismic velocity structures unambiguously. In previous work we showed overall agreement between body-wave refraction modeling and Rg dispersion curves for the least complex of the five lines, Line 2, leading us to offer a simplified 1D model for this line. A more detailed inspection of Line 2 supports a 2D re-interpretation of the structure along this line. We observe variation along the length of the line, as evidenced by abrupt and consistent changes in the behavior of surface waves at higher frequencies. We interpret this as a manifestation of significant material or structural heterogeneity in the shallowest strata. This interpretation is consistent with P-wave and Rg attenuation observations. Planned additional sources, both at the distal ends of the profiles and at intermediate points along them, will significantly enhance our ability to resolve this complicated shallow structure.

  14. Computing the Sensitivity Kernels for 2.5-D Seismic Waveform Inversion in Heterogeneous, Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-10-01

    2.5-D modeling and inversion techniques are much closer to reality than simple, traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives under a constant-block model parameterization, and that they apply to both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to directed unit point forces located at the source and geophone positions, respectively; they can generally be obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, depending on the class of medium anisotropy. Explicit expressions are given for two special cases: isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.
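
    In generic Born/adjoint form, a Fréchet derivative built from two Green's functions, their spatial gradients, and the derivative of the elastic tensor can be written as below; this is a schematic expression consistent with the description above, not necessarily the paper's exact formula or normalization.

        % Schematic Born/adjoint form of the waveform Frechet derivative.
        % G^s, G^r: Green's function vectors for unit forces at the source and
        % geophone; c: elastic modulus tensor; m_alpha: one model parameter.
        \frac{\partial u(\mathbf{x}_r,\mathbf{x}_s,\omega)}{\partial m_\alpha}
          = -\int_V \nabla\mathbf{G}^{r}(\mathbf{x},\mathbf{x}_r,\omega) :
            \frac{\partial \mathbf{c}(\mathbf{x})}{\partial m_\alpha} :
            \nabla\mathbf{G}^{s}(\mathbf{x},\mathbf{x}_s,\omega)\,\mathrm{d}V(\mathbf{x})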

  15. Centroid moment tensor catalogue using a 3-D continental scale Earth model: Application to earthquakes in Papua New Guinea and the Solomon Islands

    NASA Astrophysics Data System (ADS)

    Hejrani, Babak; Tkalčić, Hrvoje; Fichtner, Andreas

    2017-07-01

    Although both the earthquake mechanism and 3-D Earth structure contribute to the seismic wavefield, the latter is usually assumed to be layered in source studies, which may limit the quality of the source estimate. To overcome this limitation, we implement a method that takes advantage of a 3-D heterogeneous Earth model recently developed for the Australasian region. We calculate centroid moment tensors (CMTs) for earthquakes in Papua New Guinea (PNG) and the Solomon Islands. Our method is based on a library of Green's functions for each source-station pair for selected Geoscience Australia and Global Seismic Network stations in the region, distributed on a 3-D grid covering the seismicity down to 50 km depth. For the calculation of Green's functions, we utilize a spectral-element method for the solution of the seismic wave equation. Seismic moment tensors were calculated using least squares inversion, and the 3-D location of the centroid is found by grid search. Through several synthetic tests, we confirm a trade-off between the location and the recovered moment tensor components when a 1-D Earth model is used to invert synthetics produced in a 3-D heterogeneous Earth. Our CMT catalogue for PNG, in comparison to the Global CMT catalogue, shows a meaningful increase in the double-couple percentage (up to 70%). Another significant difference is in the mechanisms of events shallower than 15 km with Mw < 6, which contributes to a more accurate tectonic interpretation of the region.
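
    Schematically, a linear CMT inversion of the type described solves d = G m for the six independent moment tensor components at each trial centroid location and keeps the location with the best fit; the sketch below uses random numbers in place of real waveforms and of a precomputed Green's function library.

        import numpy as np

        rng = np.random.default_rng(3)
        n_samples = 500            # concatenated waveform samples from all stations/components
        n_trial_locations = 20     # grid of candidate 3-D centroid positions

        # Stand-in "library": Green's functions for the 6 independent MT components at
        # each trial location (in practice precomputed with a spectral-element solver).
        G_lib = rng.standard_normal((n_trial_locations, n_samples, 6))

        # Fake observed data generated from a known mechanism at trial location 7.
        m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.1, -0.2])
        d_obs = G_lib[7] @ m_true + 0.05 * rng.standard_normal(n_samples)

        best = None
        for i in range(n_trial_locations):
            G = G_lib[i]
            m_hat, *_ = np.linalg.lstsq(G, d_obs, rcond=None)      # least-squares MT
            resid = d_obs - G @ m_hat
            vr = 1.0 - np.sum(resid**2) / np.sum(d_obs**2)         # variance reduction
            if best is None or vr > best[0]:
                best = (vr, i, m_hat)

        vr, loc, m_hat = best
        print(f"best location index: {loc}, variance reduction: {vr:.3f}")
        print("recovered moment tensor components:", np.round(m_hat, 3))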

  16. Detailed investigation of Long-Period activity at Campi Flegrei by Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.

    2016-04-01

    This work is devoted to the analysis of seismic signals continuously recorded at the Campi Flegrei caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period (LP) energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to provide automatic procedures for detecting seismic events often buried in high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, showing that the seismic activity accompanying the mini-uplift crisis of 2006, which climaxed over the three days 26-28 October, had already started at the beginning of October and lasted until mid-November. A more complete seismic catalog is thus provided, which can be used to properly quantify the seismic energy release. To strengthen our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher-order statistics; second, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the extracted signals directly to their sources. We take advantage of the fact that Convolutive Independent Component Analysis provides basic signals along the three directions of motion, so that a direct polarization analysis can be performed without additional filtering procedures. We show that the extracted signals are composed mainly of P waves with radial polarization pointing to the seismic source of the main LP swarm, a small area in the Solfatara, also in the case of the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with the vibrations of a shallow hydrothermal system. Our results move towards a full description of the complexity of the source, which can be used, by means of the small-intensity precursors, for hazard-model development and forecast-model testing, and they provide an illustrative example of the applicability of the CICA method to regions of low seismicity and high ambient noise.
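
    As a heavily simplified stand-in for the blind source separation step (instantaneous FastICA rather than the convolutive ICA used in the study, and synthetic channels rather than real three-component data), the idea of unmixing a transient buried in noise can be sketched as:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(4)
        t = np.linspace(0, 60, 6000)

        s1 = np.sin(2 * np.pi * 0.5 * t) * np.exp(-((t - 30) / 8) ** 2)   # LP-like transient
        s2 = rng.standard_normal(t.size)                                   # ambient noise
        S = np.c_[s1, s2]

        A = np.array([[1.0, 0.6],
                      [0.4, 1.0],
                      [0.8, 0.3]])          # arbitrary mixing onto three "components of motion"
        X = S @ A.T

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)        # recovered independent components (up to scale/order)
        print(S_est.shape)                  # (6000, 2)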

  17. The 26 December 2004 tsunami source estimated from satellite radar altimetry and seismic waves

    NASA Technical Reports Server (NTRS)

    Song, Tony Y.; Ji, Chen; Fu, L. -L.; Zlotnicki, Victor; Shum, C. K.; Yi, Yuchan; Hjorleifsdottir, Vala

    2005-01-01

    The 26 December 2004 Indian Ocean tsunami was the first earthquake tsunami of its magnitude to occur since the advent of both digital seismometry and satellite radar altimetry, and each recorded the event independently from a different physical perspective. The seismic data were used to estimate the earthquake fault parameters, and a three-dimensional ocean general circulation model (OGCM) coupled with the fault information was used to simulate the satellite-observed tsunami waves. Here we show that these two datasets consistently constrain the tsunami source through the independent methodologies of seismic waveform inversion and ocean modeling. Cross-examining the two independent results confirms that the slip function is the most important factor controlling the tsunami strength, while the geometry and the rupture velocity of the tectonic plane determine the spatial patterns of the tsunami.

  18. Method for inverting reflection trace data from 3-D and 4-D seismic surveys and identifying subsurface fluid and pathways in and among hydrocarbon reservoirs based on impedance models

    DOEpatents

    He, W.; Anderson, R.N.

    1998-08-25

    A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.
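
    As background to the impedance models mentioned above (not the patented inversion scheme itself), acoustic impedance can be recovered recursively from a reflection-coefficient series; a minimal sketch with synthetic reflectivity and an assumed starting impedance:

        import numpy as np

        # Recursive impedance estimation from a reflection-coefficient series:
        #   r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i)  =>  Z_{i+1} = Z_i * (1 + r_i) / (1 - r_i)
        # This is the textbook recursive ("trace integration") form, shown only as
        # background; the patent describes a constrained, source-dependent scheme.
        rng = np.random.default_rng(5)
        r = 0.05 * rng.standard_normal(200)          # synthetic reflectivity series
        z0 = 2.0e6                                   # assumed impedance of the first layer (kg/m^2/s)

        z = np.empty(r.size + 1)
        z[0] = z0
        for i, ri in enumerate(r):
            z[i + 1] = z[i] * (1.0 + ri) / (1.0 - ri)

        print(z.min(), z.max())                      # impedance profile with depth/time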

  19. Method for inverting reflection trace data from 3-D and 4-D seismic surveys and identifying subsurface fluid and pathways in and among hydrocarbon reservoirs based on impedance models

    DOEpatents

    He, Wei; Anderson, Roger N.

    1998-01-01

    A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management.

  20. Array seismological investigation of the South Atlantic 'Superplume'

    NASA Astrophysics Data System (ADS)

    Hempel, Stefanie; Gassmöller, Rene; Thomas, Christine

    2015-04-01

    We apply the axisymmetric, spherical-Earth spectral element code AxiSEM to model seismic compressional waves that sample complex `superplume' structures in the lower mantle. High-resolution array seismological stacking techniques are evaluated regarding their capability to resolve large-scale high-density, low-velocity bodies, including interior structure such as inner upwellings, high-density lenses, ultra-low velocity zones (ULVZs), neighboring remnant slabs and adjacent small-scale upwellings. Synthetic seismograms are also computed and processed for models of the Earth resulting from geodynamic modelling of the South Atlantic mantle, including plate reconstruction. We discuss the interference and suppression of the resulting seismic signals and the implications for a seismic data study in terms of the visibility of the South Atlantic `superplume' structure. This knowledge is used to process, invert and interpret our data set of seismic sources from the Andes and the South Sandwich Islands recorded at seismic arrays spanning from Ethiopia through Cameroon to South Africa, mapping the South Atlantic `superplume' and its interior structure. In order to present the model of the South Atlantic `superplume' structure that best fits the seismic data set, we iteratively compute synthetic seismograms while adjusting the model according to the dependencies found in the parameter study.

  1. Seismic structure off the Kii Peninsula, Japan, deduced from passive- and active-source seismographic data

    NASA Astrophysics Data System (ADS)

    Yamamoto, Yojiro; Takahashi, Tsutomu; Kaiho, Yuka; Obana, Koichiro; Nakanishi, Ayako; Kodaira, Shuichi; Kaneda, Yoshiyuki

    2017-03-01

    We conduct seismic tomography off the Kii Peninsula, southwestern Japan, to image structural heterogeneity and relocate the seismicity recorded between 2010 and 2012, and we investigate their relationships with segmentation of the Nankai and Tonankai seismogenic zones of the Nankai Trough. In order to constrain both the shallow and the deep structure of the offshore seismogenic segments, we use active- and passive-source data recorded by both ocean-bottom seismometers and land seismic stations. The relocated microearthquakes indicate a lack of seismic activity in the Tonankai seismogenic segment off Kumano, whereas intraslab seismicity was active in the Kii Channel area of the Nankai seismogenic segment. Based on comparisons among the distribution of seismicity, the age and spreading rate of the subducting Philippine Sea plate, and the slip-deficit distribution, we conclude that the seismicity in the subducting slab under the Kii Channel region nucleated on structures in the Philippine Sea slab that pre-date subduction, and that fluids released by dehydration are related to decreased interplate coupling in the area of these intraslab earthquakes. Our velocity model clearly shows the areal extent of two key structures reported in previous 2-D active-source surveys: a high-velocity zone beneath Cape Shionomisaki and a subducted seamount off Cape Muroto, both of which are roughly circular with radii of 15-20 km. The epicenters of the 1944 Tonankai and 1946 Nankai earthquakes are near the edge of the high-velocity body beneath Cape Shionomisaki, suggesting that this anomalous structure is related to the nucleation of these two earthquakes. We identify several other high- and low-velocity zones immediately above the plate boundary in the Tonankai and Nankai seismogenic segments. In comparison with the slip-deficit model, some of the low-velocity zones appear to correspond to areas of strong coupling. Our observations suggest that, unlike the Japan Trench subduction zone, in our study area there is not a simple correspondence between areas of large coseismic slip or strong interplate coupling and areas of high velocity in the overriding plate.

  2. Sources of high frequency seismic noise: insights from a dense network of ~250 stations in northern Alsace (France)

    NASA Astrophysics Data System (ADS)

    Vergne, Jerome; Blachet, Antoine; Lehujeur, Maximilien

    2015-04-01

    Monitoring local or regional seismic activity requires stations with a low level of background seismic noise at frequencies higher than a few tenths of a hertz. Network operators are well aware that the seismic quality of a site depends on several factors, among them its geological setting and the proximity of roads, railways, industries or trees. Often, the impact of each noise source is only qualitatively known, which precludes estimating the quality of potential future sites before they are tested or installed. Here, we take advantage of a very dense temporary network deployed in northern Alsace (France) to assess the effect of various kinds of potential sources on the level of seismic noise observed in the frequency range 0.2-50 Hz. In September 2014, more than 250 seismic stations (FairfieldNodal ZLand nodes with 10 Hz vertical geophones) were installed every 1.5 km over a ~25 km diameter disc centred on the deep geothermal sites of Soultz-sous-Forêts and Rittershoffen. This region exhibits variable degrees of human imprint, from quite remote areas to sectors with high-traffic roads and large villages. It also encompasses both the deep sedimentary basin of the Rhine graben and the piedmont of the Vosges massif with exposed bedrock. For each site we processed the continuous data to estimate probability density functions of the power spectral densities. At frequencies higher than 1 Hz most sites show a clear temporal modulation of seismic noise related to human activity, with the well-known variations between day and night and between weekdays and weekends. Moreover, we observe a clear evolution of the spatial distribution of seismic noise levels with frequency. Between 0.5 and 4 Hz the geological setting modulates the level of seismic noise; at higher frequencies, the amplitude of seismic noise appears mostly related to the distance to nearby roads. Based on road maps and traffic estimates, a forward approach is used to model the induced seismic noise. Effects of other types of seismic sources, such as industries or wind, are also observed but usually have a more limited spatial extent and a specific signature in the spectrograms.
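
    A minimal sketch of the noise characterization step described above, assuming synthetic data: power spectral densities of successive hour-long windows are computed with Welch's method and aggregated into an empirical probability density per frequency bin (a simplified stand-in for the standard PPSD approach).

        import numpy as np
        from scipy.signal import welch

        fs = 250.0                                        # assumed sampling rate (Hz)
        rng = np.random.default_rng(6)
        hours = 24
        psd_list = []
        for _ in range(hours):
            x = rng.standard_normal(int(fs * 3600))       # stand-in for one hour of ground velocity
            f, pxx = welch(x, fs=fs, nperseg=int(fs * 60))  # 60-s segments
            psd_list.append(10 * np.log10(pxx))           # dB

        psd = np.array(psd_list)                          # shape (hours, n_freq)

        # Empirical probability density of PSD values in each frequency bin.
        db_bins = np.arange(-40, 20, 1.0)
        pdf = np.array([np.histogram(psd[:, i], bins=db_bins, density=True)[0]
                        for i in range(psd.shape[1])])
        print(f.shape, pdf.shape)                         # frequencies x dB-bin histogram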

  3. Ground-motion signature of dynamic ruptures on rough faults

    NASA Astrophysics Data System (ADS)

    Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.

    2016-04-01

    Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with a given segmentation, the question arises as to what conditions produce large-magnitude multi-segment ruptures as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. We therefore examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and the associated high-frequency radiation into broadband ground-motion computation for simulation-based seismic hazard assessment.
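
    As an illustration of the kind of geometrical complexity discussed above (not the authors' fault models), a self-affine rough fault profile can be synthesized by filtering white noise with a power-law wavenumber spectrum; the Hurst exponent, sample spacing, and roughness-to-length ratio below are assumptions.

        import numpy as np

        # Spectral synthesis of a 1-D self-affine fault profile h(x).
        n = 4096
        dx = 10.0                                  # m, along-strike sample spacing (assumed)
        hurst = 0.8                                # assumed Hurst exponent
        alpha = 1e-2                               # assumed RMS roughness-to-length ratio

        rng = np.random.default_rng(7)
        k = np.fft.rfftfreq(n, d=dx)               # wavenumbers (1/m)
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** (-(0.5 + hurst))        # power-law amplitude spectrum: P(k) ~ k^-(1+2H)
        phase = np.exp(1j * 2 * np.pi * rng.random(k.size))
        h = np.fft.irfft(amp * phase, n=n)

        # Scale so that the RMS deviation is alpha times the profile length.
        h *= alpha * (n * dx) / np.sqrt(np.mean(h ** 2))
        print(f"profile length {n*dx/1e3:.1f} km, RMS roughness {np.sqrt(np.mean(h**2)):.1f} m")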

  4. Comparison of Seismic Sources and Frequencies in West Texas

    NASA Astrophysics Data System (ADS)

    Kaip, G.; Harder, S. H.; Karplus, M. S.

    2017-12-01

    During October 2017, the Seismic Source Facility (SSF) of the University of Texas at El Paso (UTEP) Department of Geological Sciences collected seismic data at the SSF test facility near Fabens, TX. The project objective was to compare the source amplitudes and frequencies of various seismic sources available through the SSF. Selecting the appropriate seismic source is important for reaching geological objectives. We compare explosive sources (pentolite and shotgun) with mechanical sources (accelerated weight drop and hammer on plate), focusing on amplitude and frequency. All sources were tested in the same geologic environment. Although this is not an ideal geologic formation for source coupling, it does allow an "apples to apples" comparison. Twenty Reftek RT125A seismic recorders with 4.5 Hz geophones were laid out in a line with 3 m station separation. Mechanical sources were tested first, to minimize changes in the subsurface related to the explosive sources. Explosive sources, while yielding higher amplitudes, have lower frequency content. The explosions exhibit a higher signal-to-noise ratio, allowing us to recognize seismic energy deeper and farther from the source. Mechanical sources yield higher frequencies, allowing better resolution at shallower depths, but have a lower signal-to-noise ratio and lower amplitudes, even with source stacking. We analyze the details of the shot spectra from the different source types. A combination of source types can improve data resolution and amplitude, thereby improving imaging potential. However, cost, logistics, and complexity also have a large influence on source selection.

  5. Analysis of Seismic Moment Tensor and Finite-Source Scaling During EGS Resource Development at The Geysers, CA

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Dreger, D. S.; Gritto, R.

    2015-12-01

    Enhanced Geothermal Systems (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. We investigate seismicity in the vicinity of the EGS development at The Geysers Prati-32 injection well to determine moment magnitudes, focal mechanisms, and kinematic finite-source models, with the goal of developing a rupture-area scaling relationship for The Geysers and specifically for the Prati-32 EGS injection experiment. Thus far we have analyzed moment tensors of M ≥ 2 events, and we are developing the capability to analyze the large numbers of events occurring as a result of the fluid injection and to push the analysis to smaller magnitude earthquakes. We have also determined finite-source models for five events ranging in magnitude from M 3.7 to 4.5. The scaling relationship between rupture area and moment magnitude of these events resembles a published empirical relationship derived for events from M 4.5 to 8.3. We plan to develop a scaling relationship in which moment magnitude and corner frequency are predictor variables for source rupture area, constrained by the finite-source modeling. Inclusion of corner frequency in the empirical scaling relationship is proposed to account for possible variations in stress drop. If successful, we will use this relationship to extrapolate to the large numbers of events in the EGS seismicity cloud to estimate the coseismic fracture density. We will present the moment tensor and corner frequency results for the microearthquakes and, for select events, the finite-source models. Stress drops inferred from corner frequencies and from finite-source modeling will be compared.

  6. Expected Seismicity and the Seismic Noise Environment of Europa

    NASA Astrophysics Data System (ADS)

    Panning, Mark P.; Stähler, Simon C.; Huang, Hsin-Hua; Vance, Steven D.; Kedar, Sharon; Tsai, Victor C.; Pike, William T.; Lorenz, Ralph D.

    2018-01-01

    Seismic data will be a vital geophysical constraint on the internal structure of Europa if we land instruments on the surface. Quantifying expected seismic activity on Europa, both in terms of large, recognizable signals and ambient background noise, is important for understanding the dynamics of the moon, as well as for the interpretation of potential future data. Seismic energy sources will likely include cracking in the ice shell and turbulent motion in the oceans. We define a range of models of seismic activity in Europa's ice shell by assuming each model follows a Gutenberg-Richter relationship with varying parameters. A range of cumulative seismic moment release between 10^16 and 10^18 N m/yr is defined by scaling tidal dissipation energy to tectonic events on the Earth's moon. Random catalogs are generated and used to create synthetic continuous noise records through numerical wave propagation in thermodynamically self-consistent models of the interior structure of Europa. Spectral characteristics of the noise are calculated by determining probabilistic power spectral densities of the synthetic records. While the range of seismicity models predicts noise levels that vary by 80 dB, we show that most noise estimates are below the self-noise floor of high-frequency geophones but may be recorded by more sensitive instruments. The largest expected signals exceed background noise by ~50 dB. Noise records may allow for constraints on interior structure through autocorrelation. Models of seismic noise generated by pressure variations at the base of the ice shell due to turbulent motions in the subsurface ocean may also generate observable seismic noise.
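
    A minimal sketch of generating a random catalog consistent with a Gutenberg-Richter magnitude distribution and a prescribed annual moment budget, in the spirit of the approach described above; the b-value, magnitude bounds, and the 10^17 N m/yr budget are placeholders within the quoted range.

        import numpy as np

        rng = np.random.default_rng(8)
        b = 1.0                       # assumed b-value
        m_min, m_max = 2.0, 5.0       # assumed magnitude bounds for ice-shell events
        target_moment = 1e17          # N m/yr, within the 1e16-1e18 range quoted above

        def sample_magnitude():
            # Inverse-CDF sampling of a truncated Gutenberg-Richter distribution.
            u = rng.random()
            return -np.log10(10**(-b*m_min) - u*(10**(-b*m_min) - 10**(-b*m_max))) / b

        events, total = [], 0.0
        while total < target_moment:
            m = sample_magnitude()
            m0 = 10 ** (1.5 * m + 9.1)      # scalar moment in N m (Hanks & Kanamori)
            events.append((m, m0))
            total += m0

        print(f"{len(events)} events/yr, cumulative moment {total:.2e} N m")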

  7. SEISRISK II; a computer program for seismic hazard estimation

    USGS Publications Warehouse

    Bender, Bernice; Perkins, D.M.

    1982-01-01

    The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
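
    A heavily simplified sketch of the kind of calculation SEISRISK II performs: for one site and one source zone, annual exceedance rates are summed over magnitude-distance bins with a ground-motion relation, then converted to a Poisson probability of exceedance. All numerical values and the attenuation form below are illustrative assumptions, not the program's actual relations.

        import math
        import numpy as np

        mags = np.arange(5.0, 7.5, 0.1)
        # Incremental Gutenberg-Richter rates per magnitude bin (placeholder a=4, b=1).
        rates = 10 ** (4.0 - mags) - 10 ** (4.0 - (mags + 0.1))
        dists_km = np.linspace(10.0, 100.0, 10)
        w_dist = 1.0 / dists_km.size                      # crude equal-weight distance bins

        def median_pga_g(m, r_km):
            # Placeholder attenuation relation, NOT the one built into SEISRISK II.
            return math.exp(-3.5 + 0.9 * m - 1.2 * math.log(r_km + 10.0))

        sigma_ln, a_star, t_years = 0.6, 0.1, 50.0
        lam = 0.0
        for m, nu in zip(mags, rates):
            for r in dists_km:
                z = (math.log(a_star) - math.log(median_pga_g(m, r))) / sigma_ln
                p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))   # P(PGA > a* | m, r), lognormal scatter
                lam += nu * w_dist * p_exceed

        poe = 1.0 - math.exp(-lam * t_years)              # Poisson probability over t_years
        print(f"annual exceedance rate {lam:.4e}, {t_years:.0f}-yr PoE {poe:.3f}")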

  8. Homogenized moment tensor and the effect of near-field heterogeneities on nonisotropic radiation in nuclear explosion

    NASA Astrophysics Data System (ADS)

    Burgos, Gaël.; Capdeville, Yann; Guillot, Laurent

    2016-06-01

    We investigate the effect of small-scale heterogeneities close to a seismic explosive source, at intermediate periods (20-50 s), with an emphasis on the resulting nonisotropic far-field radiation. First, using a direct numerical approach, we show that small-scale elastic heterogeneities located in the near field of an explosive source generate unexpected phases (i.e., long-period S waves). We then demonstrate that the nonperiodic homogenization theory, applied to 2-D and 3-D elastic models with various patterns of small-scale heterogeneities near the source, leads to accurate waveforms at a reduced computational cost compared to direct modeling. Further, it provides an interpretation of how nearby small-scale features interact with the source at low frequencies, through an explicit correction to the seismic moment tensor. In 2-D simulations, we find a deviatoric contribution to the moment tensor as high as 21% for near-source heterogeneities with a 25% contrast in elastic parameters (relative to a homogeneous background medium). In 3-D this nonisotropic contribution reaches 27%. Second, we analyze intermediate-period regional seismic waveforms associated with underground nuclear explosions conducted at the Nevada National Security Site and invert for the full moment tensor, in order to quantify the relative contributions of the isotropic and deviatoric components of the tensor. The average value of the deviatoric part is about 35%. We conclude that the interactions between an explosive source and small-scale local heterogeneities of moderate amplitude may lead to a deviatoric contribution to the seismic moment close to what is observed using regional data from nuclear test explosions.
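
    For reference, the isotropic/deviatoric split quoted above follows the standard decomposition M = M_iso + M_dev with M_iso = (tr M / 3) I; a minimal sketch with an arbitrary example tensor (not one of the study's solutions):

        import numpy as np

        # Arbitrary example moment tensor (N m); NOT one of the inverted solutions.
        M = np.array([[1.00, 0.10, 0.05],
                      [0.10, 0.90, 0.02],
                      [0.05, 0.02, 1.10]]) * 1e15

        M_iso = (np.trace(M) / 3.0) * np.eye(3)
        M_dev = M - M_iso

        # One common size measure: ratio of the norms of the deviatoric and full tensors.
        pct_dev = 100.0 * np.linalg.norm(M_dev) / np.linalg.norm(M)
        print(f"deviatoric contribution ~{pct_dev:.1f}% (by Frobenius norm)")

    Conventions for quoting such percentages differ (norm ratios versus eigenvalue-based measures), so the number printed here is only illustrative of the bookkeeping, not of the values reported in the study.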

  9. Challenges Ahead for Nuclear Facility Site-Specific Seismic Hazard Assessment in France: The Alternative Energies and the Atomic Energy Commission (CEA) Vision

    NASA Astrophysics Data System (ADS)

    Berge-Thierry, C.; Hollender, F.; Guyonnet-Benaize, C.; Baumont, D.; Ameri, G.; Bollinger, L.

    2017-09-01

    Seismic analysis in the context of nuclear safety in France is currently guided by a purely deterministic approach based on Basic Safety Rule (Règle Fondamentale de Sûreté) RFS 2001-01 for seismic hazard assessment, and on the ASN/2/01 Guide, which provides design rules for nuclear civil engineering structures. After the 2011 Tohoku earthquake, nuclear operators worldwide were asked to estimate the ability of their facilities to sustain extreme seismic loads. The French licensees then defined `hard core seismic levels', which are higher than those considered for the design or safety re-assessment of a facility. These were initially established on a deterministic basis and were finally justified through state-of-the-art probabilistic seismic hazard assessments. The appreciation and propagation of uncertainties when assessing seismic hazard in France have changed considerably over the past 15 years. This evolution provided the motivation for the present article, whose objectives are threefold: (1) to describe current practice in France for assessing seismic hazard in the context of nuclear safety; (2) to discuss and highlight the sources of uncertainty and their treatment; and (3) to use a specific case study to illustrate how extended-source modeling can help to constrain the key assumptions or parameters that affect seismic hazard assessment. This article discusses in particular seismic source characterization, strong ground motion prediction, and maximum magnitude constraints, according to the practice of the French Atomic Energy Commission. Owing to the growth of strong-motion databases, in terms of the number and quality of records, their metadata, and uncertainty characterization, several recently published empirical ground motion prediction models are eligible for seismic hazard assessment in France. We show that propagation of epistemic and aleatory uncertainties is feasible in a deterministic approach as well as in a probabilistic one. Assessment of seismic hazard in France in the framework of nuclear facility safety should take these recent advances into account. In this sense, opening discussions with all of the stakeholders in France to update the reference documents (i.e., RFS 2001-01 and the ASN/2/01 Guide) appears appropriate in the short term.

  10. The 2014 update to the National Seismic Hazard Model in California

    USGS Publications Warehouse

    Powers, Peter; Field, Edward H.

    2015-01-01

    The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  11. Three-dimensional ground-motion simulations of earthquakes for the Hanford area, Washington

    USGS Publications Warehouse

    Frankel, Arthur; Thorne, Paul; Rohay, Alan

    2014-01-01

    This report describes the results of ground-motion simulations of earthquakes using three-dimensional (3D) and one-dimensional (1D) crustal models conducted for the probabilistic seismic hazard assessment (PSHA) of the Hanford facility, Washington, under the Senior Seismic Hazard Analysis Committee (SSHAC) guidelines. The first portion of this report demonstrates that the 3D seismic velocity model for the area produces synthetic seismograms with characteristics (spectral response values, duration) that better match those of the observed recordings of local earthquakes, compared to a 1D model with horizontal layers. The second part of the report compares the response spectra of synthetics from 3D and 1D models for moment magnitude (M) 6.6–6.8 earthquakes on three nearby faults and for a dipping plane-wave source meant to approximate regional S waves from a Cascadia great earthquake. The 1D models are specific to each site used for the PSHA. The use of the 3D model produces spectral response accelerations at periods of 0.5–2.0 seconds that are as much as a factor of 4.5 greater than those from the 1D models for the crustal fault sources. The spectral accelerations of the 3D synthetics for the Cascadia plane-wave source are as much as a factor of 9 greater than those from the 1D models. The differences between the spectral accelerations for the 3D and 1D models are most pronounced for sites with thicker supra-basalt sediments, for earthquakes on the Rattlesnake Hills fault, and for the Cascadia plane-wave source.

  12. Coda Q Attenuation and Source Parameters Analysis in North East India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, W. K.; Earthquake Seismology

    2010-12-01

    In the present study, the quality factor of coda waves (Qc) and the source parameters have been estimated for northeastern India, using digital data from ten local earthquakes recorded between April 2001 and November 2002. Earthquakes with magnitudes ranging from 3.8 to 4.9 have been taken into account. The time-domain coda decay method of a single backscattering model is used to calculate frequency-dependent values of coda Q (Qc), whereas the source parameters, such as seismic moment (Mo), stress drop, source radius (r), radiant energy (Wo), and strain drop, are estimated from the displacement amplitude spectra of body waves using Brune's model. Qc is estimated at six central frequencies: 1.5, 3.0, 6.0, 9.0, 12.0, and 18.0 Hz. The Qc values of local earthquakes are estimated to understand the attenuation characteristics, source parameters, and tectonic activity of the region. Based on the homogeneity of the geological characteristics and the constraints imposed by the distribution of available events, the study region has been divided into three zones: the Tibetan Plateau Zone (TPZ), the Bengal Alluvium and Arakan-Yuma Zone (BAZ), and the Shillong Plateau Zone (SPZ). Qc follows the power law Qc = Q0 (f/f0)^n, where Q0 is the quality factor at the reference frequency f0 (1 Hz) and n is the frequency parameter, which varies from region to region. The mean values of Qc show a dependence on frequency, varying from 292.9 at 1.5 Hz to 4880.1 at 18 Hz. The average frequency-dependent relationship obtained for northeastern India is Qc = 198 f^1.035, while this relationship varies from region to region: TPZ, Qc = 226 f^1.11; BAZ, Qc = 301 f^0.87; SPZ, Qc = 126 f^0.85. This indicates that northeastern India as a whole is seismically active, and among the three zones the Shillong Plateau Zone (Qc = 126 f^0.85) is the most active, the Bengal Alluvium and Arakan-Yuma Zone the least active, and the Tibetan Plateau Zone intermediate. This study may be useful for seismic hazard assessment. The estimated seismic moments (Mo) range from 5.98×10^20 to 3.88×10^23 dyne-cm. The source radii (r) lie between 152 and 1750 m, the stress drops range between 0.0003×10^3 bar and 1.04×10^3 bar, the average radiant energy is 82.57×10^18 ergs, and the strain drops range from 0.00602×10^-9 to 2.48×10^-9. The estimated stress drop values for NE India are scattered for larger seismic moments, whereas they show a more systematic behaviour for smaller seismic moments. The estimated source parameters are in agreement with previous work in this type of tectonic setting. Key words: coda wave, seismic source parameters, lapse time, single backscattering model, Brune's model, stress drop, northeast India.
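
    In the single backscattering model, the band-passed coda envelope decays as A(t) ≈ C t^-1 exp(-π f t / Qc), so ln[A(t) t] is linear in lapse time t with slope -π f / Qc. A minimal sketch of recovering Qc from a synthetic envelope (all values illustrative, not the study's data):

        import numpy as np

        f = 6.0                                # central frequency of the band (Hz)
        qc_true = 600.0                        # synthetic "true" coda Q at this frequency
        t = np.arange(20.0, 60.0, 0.5)         # lapse times after origin (s)

        rng = np.random.default_rng(9)
        A = 1e3 * t**-1 * np.exp(-np.pi * f * t / qc_true)
        A *= np.exp(0.05 * rng.standard_normal(t.size))    # multiplicative noise

        # Linear fit of ln(A * t) versus t; the slope equals -pi * f / Qc.
        slope, intercept = np.polyfit(t, np.log(A * t), 1)
        qc_est = -np.pi * f / slope
        print(f"estimated Qc at {f:.1f} Hz: {qc_est:.0f} (true {qc_true:.0f})")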

  13. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry explains the observed scaling better than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (the ratio of radiated energy to seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
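
    A schematic double-corner source spectrum with an intermediate f^-1 falloff between the two corners can be written as Ω(f) = M0 / [(1 + f/f1)(1 + f/f2)]; this is a generic form for illustration, not necessarily the exact parameterization of the study. The sketch below checks the asymptotic falloff rates.

        import numpy as np

        M0 = 1e20                    # N m, illustrative moment
        f1, f2 = 0.02, 0.2           # Hz, illustrative corner frequencies (f1 < f2)

        f = np.logspace(-3, 1, 400)
        omega = M0 / ((1.0 + f / f1) * (1.0 + f / f2))   # schematic double-corner spectrum

        # Local log-log slope: ~0 below f1, ~ -1 between f1 and f2, ~ -2 above f2.
        slope = np.gradient(np.log(omega), np.log(f))
        for target in (f1 / 10, np.sqrt(f1 * f2), f2 * 10):
            i = np.argmin(np.abs(f - target))
            print(f"f = {f[i]:.3g} Hz, local falloff exponent ~ {slope[i]:.2f}")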

  14. Berkeley Seismological Laboratory Seismic Moment Tensor Report for the August 6, 2007 M3.9 Seismic event in central Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S; Dreger, D; Hellweg, P

    2007-08-08

    We have performed a complete moment tensor analysis of the seismic event, which occurred on Monday August 6, 2007 at 08:48:40 UTC, 21 km from Mt. Pleasant, Utah. In our analysis we utilized complete three-component seismic records recorded by the USArray, University of Utah, and EarthScope seismic arrays. The seismic waveform data were integrated to displacement and filtered between 0.02 and 0.10 Hz following instrument removal. We used the Song et al. (1996) velocity model to compute the Green's functions used in the moment tensor inversion. A map of the stations we used and the location of the event is shown in Figure 1. In our moment tensor analysis we assumed a shallow source depth of 1 km, consistent with the shallow depth reported for this event. As shown in Figure 2, the results point to a source mechanism with negligible double-couple radiation, composed of dominant CLVD and implosive isotropic components. The total scalar seismic moment is 2.12e22 dyne cm, corresponding to a moment magnitude (Mw) of 4.2. The long-period records are very well matched by the model (Figure 2), with a variance reduction of 73.4%. An all-dilational (down) first-motion radiation pattern is predicted by the moment tensor solution, and observations of first motions are in agreement.
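
    The quoted Mw of 4.2 can be checked with the standard Hanks & Kanamori relation Mw = (2/3) log10(M0) - 10.7 for M0 in dyne-cm:

        import math

        M0_dyne_cm = 2.12e22                               # scalar moment reported above
        Mw = (2.0 / 3.0) * math.log10(M0_dyne_cm) - 10.7   # Hanks & Kanamori (1979)
        print(f"Mw = {Mw:.2f}")                            # ~4.2, matching the reported value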

  15. Neo-Deterministic Seismic Hazard Assessment at Watts Bar Nuclear Power Plant Site, Tennessee, USA

    NASA Astrophysics Data System (ADS)

    Brandmayr, E.; Cameron, C.; Vaccari, F.; Fasan, M.; Romanelli, F.; Magrin, A.; Vlahovic, G.

    2017-12-01

    The Watts Bar Nuclear Power Plant (WBNPP) is located within the Eastern Tennessee Seismic Zone (ETSZ), the second most naturally active seismic zone in the US east of the Rocky Mountains. The largest instrumental earthquakes in the ETSZ are M 4.6, although paleoseismic evidence supports events of M ≥ 6.5. Events are mainly strike-slip and occur on steeply dipping planes at an average depth of 13 km. In this work, we apply neo-deterministic seismic hazard assessment to estimate the potential seismic input at the plant site, which has recently been targeted by the Nuclear Regulatory Commission for a seismic hazard re-evaluation. First, we perform a parametric test of seismic source characteristics (distance, depth, strike, dip and rake) using a one-dimensional regional bedrock model to define the most conservative scenario earthquakes. Then, for the selected scenario earthquakes, the estimate of the ground motion input at WBNPP is refined using a two-dimensional local structural model (based on the plant operator's documentation) with topography, thus examining site amplification and different possible rupture processes at the source. WBNPP features a safe shutdown earthquake (SSE) design with a PGA of 0.18 g and a maximum spectral acceleration (SA, 5% damped) of 0.46 g at periods between 0.15 and 0.5 s. Our results suggest that, although the PGA is relatively low for most of the considered scenarios, the SSE values can be reached and exceeded for the most conservative scenario earthquakes.

  16. Impact of Topography on Seismic Amplification During the 2005 Kashmir Earthquake

    NASA Astrophysics Data System (ADS)

    Khan, S.; van der Meijde, M.; van der Werff, H.; Shafique, M.

    2016-12-01

    This study assesses the topographic amplification of seismic response during the 2005 Kashmir earthquake in northern Pakistan. Topography scatters seismic waves, which causes variations in seismic response at the Earth's surface. During the Kashmir earthquake, topographically induced amplification was suspected to have had a major influence on the damage to infrastructure. We performed a 3-dimensional simulation of the event using the SPECFEM3D software. We first analyzed the impact of data resolution (mesh and digital elevation model) on the derived seismic response. ASTER GDEM elevation data were used to build a 3D finite element mesh, and the parameters (latitude, longitude, depth, moment tensor) of the Kashmir earthquake were used to simulate the event. Our results show amplification of seismic response on ridges and de-amplification in valleys. We also found that slopes facing away from the source receive an amplified seismic response compared to slopes facing towards the source. The peak ground displacement (PGD) generally falls within the range 0.23-5.8 m. Topographic amplification causes local changes in the range of -2.50 to +3.50 m, so that the PGD falls in the range 0.36-7.85 m.

  17. New approach to analysis of strongest earthquakes with upper-value magnitude in subduction zones and induced by them catastrophic tsunamis on examples of catastrophic events in 21 century

    NASA Astrophysics Data System (ADS)

    Garagash, I. A.; Lobkovsky, L. I.; Mazova, R. Kh.

    2012-04-01

    The generation of the strongest earthquakes, with magnitudes near or above 9, and of the catastrophic tsunamis they induce is studied on the basis of a new approach to the generation process occurring in subduction zones during an earthquake. The need for such studies is underscored by the recent catastrophic underwater earthquake of 11 March 2011 close to the northeast coastline of Japan and the catastrophic tsunami that followed it, which led to vast loss of life and colossal damage for Japan. Of essential importance in this case is that the strength of the earthquake (magnitude M = 9), which induced an extremely strong tsunami with runup heights on the beach of up to 10 meters, was unexpected by all specialists. The model we have elaborated of the interaction of the oceanic lithosphere with island-arc blocks in subduction zones, taking into account incomplete stress release during the seismic process and the further accumulation of elastic energy, makes it possible to explain the occurrence of the strongest mega-earthquakes, such as the catastrophic earthquake of March 2011 with its source in the Japan deep-sea trench. In our approach, wide possibilities for the numerical simulation of the dynamic behaviour of an underwater seismic source are provided by a kinematic model of the seismic source as well as by a numerical program elaborated by the authors for calculating tsunami wave generation by dynamic and kinematic seismic sources. The method makes it possible to take into account the contribution of residual tectonic stress in the lithospheric plates, which increases the earthquake energy and is usually not taken into account.

  18. Using Seismic Interferometry to Investigate Seismic Swarms

    NASA Astrophysics Data System (ADS)

    Matzel, E.; Morency, C.; Templeton, D. C.

    2017-12-01

    Seismicity provides a direct means of measuring the physical characteristics of active tectonic features such as fault zones. Hundreds of small earthquakes often occur along a fault during a seismic swarm. This seismicity helps define the tectonically active region. When processed using novel geophysical techniques, we can isolate the energy sensitive to the fault itself. Here we focus on two methods of seismic interferometry, ambient noise correlation (ANC) and the virtual seismometer method (VSM). ANC is based on the observation that the Earth's background noise includes coherent energy, which can be recovered by observing over long time periods and allowing the incoherent energy to cancel out. The cross correlation of ambient noise between a pair of stations results in a waveform that is identical to the seismogram that would result if an impulsive source located at one of the stations were recorded at the other, the Green's function (GF). The calculation of the GF is often stable after a few weeks of continuous data correlation; any perturbations to the GF after that point are directly related to changes in the subsurface and can be used for 4D monitoring. VSM is a style of seismic interferometry that provides fast, precise, high-frequency estimates of the GF between earthquakes. VSM illuminates the subsurface precisely where the pressures are changing and has the potential to image the evolution of seismicity over time, including changes in the style of faulting. With hundreds of earthquakes, we can calculate thousands of waveforms. At the same time, VSM collapses the computational domain, often by 2-3 orders of magnitude, which allows us to do high-frequency 3D modeling in the fault region. Using data from a swarm of earthquakes near the Salton Sea, we demonstrate the power of these techniques, illustrating our ability to scale from the far field, where sources are well separated, to the near field, where their locations fall within each other's uncertainty ellipses. We use ANC to create a 3D model of the crust in the region. VSM provides better illumination of the active fault zone. Measures of amplitude and shape are used to refine source properties and locations in space, and waveform modeling allows us to estimate near-fault seismic structure.
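    As a minimal illustration of the ANC step described above, the sketch below stacks windowed cross-correlations of two continuous records to approximate the inter-station Green's function. The function name and the assumed pre-processing (detrending, band-passing, amplitude normalization) are illustrative choices, not details taken from the study.

    ```python
    import numpy as np

    def noise_cross_correlation(trace_a, trace_b, win_len, max_lag):
        """Stack windowed cross-correlations of two pre-processed noise records.

        trace_a, trace_b : identically sampled 1-D arrays (assumed detrended,
                           band-passed and amplitude-normalized beforehand).
        win_len          : correlation window length in samples.
        max_lag          : maximum lag in samples kept on each side.
        The stacked correlation approximates the inter-station Green's function.
        """
        n_win = min(len(trace_a), len(trace_b)) // win_len
        stack = np.zeros(2 * max_lag + 1)
        for k in range(n_win):
            a = trace_a[k * win_len:(k + 1) * win_len]
            b = trace_b[k * win_len:(k + 1) * win_len]
            full = np.correlate(a, b, mode="full")          # lags -N+1 .. N-1
            mid = len(full) // 2                            # zero-lag index
            stack += full[mid - max_lag:mid + max_lag + 1]  # keep +/- max_lag
        return stack / n_win
    ```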

  19. The 2008 U.S. Geological Survey national seismic hazard models and maps for the central and eastern United States

    USGS Publications Warehouse

    Petersen, Mark D.; Frankel, Arthur D.; Harmsen, Stephen C.; Mueller, Charles S.; Boyd, Oliver S.; Luco, Nicolas; Wheeler, Russell L.; Rukstales, Kenneth S.; Haller, Kathleen M.

    2012-01-01

    In this paper, we describe the scientific basis for the source and ground-motion models applied in the 2008 National Seismic Hazard Maps, the development of new products that are used for building design and risk analyses, relationships between the hazard maps and design maps used in building codes, and potential future improvements to the hazard maps.

  20. Evaluation of the Seismic Hazard in Venezuela with a revised seismic catalog that seeks for harmonization along the country borders

    NASA Astrophysics Data System (ADS)

    Rendon, H.; Alvarado, L.; Paolini, M.; Olbrich, F.; González, J.; Ascanio, W.

    2013-05-01

    Probabilistic Seismic Hazard Assessment is a complex endeavor that relies on the quality of the information that comes from different sources: the seismic catalog, active fault parameters, strain rates, etc. With this in mind, during the last several months the FUNVISIS seismic hazard group has been working on a review and update of the local database that forms the basis for a reliable PSHA calculation. In particular, the seismic catalog, which provides the information needed to evaluate the critical b-value, which controls how seismic occurrence is distributed with magnitude, has received particular attention. The seismic catalog is the result of the effort of several generations of researchers over the years; therefore, the catalog necessarily suffers from a lack of consistency, homogeneity and completeness across all magnitude ranges over any seismic study area. Merging the FUNVISIS instrumental catalog with those obtained from international agencies, we present the work we have been doing to produce a consistent seismic catalog that covers Venezuela entirely, with seismic events from 1910 to 2012, and report the magnitude of completeness for the different periods. We also present preliminary results of the seismic hazard evaluation that takes into account this instrumental catalog, the historical catalog, updated known fault geometries and their corresponding parameters, and the new seismic sources that have been defined accordingly. Within the spirit of the Global Earthquake Model (GEM), all these efforts seek possible bridges with neighboring countries to establish consistent hazard maps across the borders.
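    Where the abstract refers to the b-value that controls how occurrence is distributed with magnitude, a common estimator is the Aki (1965) maximum-likelihood formula with Utsu's correction for magnitude binning. The sketch below uses a hypothetical catalog slice, not FUNVISIS data.

    ```python
    import math

    def b_value_aki(magnitudes, mc, dm=0.1):
        """Maximum-likelihood b-value (Aki, 1965) with Utsu's binning correction.
        Only events at or above the completeness magnitude mc are used."""
        m = [x for x in magnitudes if x >= mc]
        mean_m = sum(m) / len(m)
        return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

    catalog = [3.1, 3.4, 3.2, 4.0, 3.6, 3.3, 5.1, 3.8, 3.2, 3.5]  # hypothetical
    print(round(b_value_aki(catalog, mc=3.0), 2))
    ```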

  1. The 2012 Ferrara seismic sequence: Regional crustal structure, earthquake sources, and seismic hazard

    NASA Astrophysics Data System (ADS)

    Malagnini, Luca; Herrmann, Robert B.; Munafò, Irene; Buttinelli, Mauro; Anselmi, Mario; Akinci, Aybige; Boschi, E.

    2012-10-01

    Inadequate seismic design codes can be dangerous, particularly when they underestimate the true hazard. In this study we use data from a sequence of moderate-sized earthquakes in northeast Italy to validate and test a regional wave propagation model which, in turn, is used to understand some weaknesses of the current design spectra. Our velocity model, while regionalized and somewhat ad hoc, is consistent with geophysical observations and the local geology. In the 0.02-0.1 Hz band, this model is validated by using it to calculate moment tensor solutions of 20 earthquakes (5.6 ≥ MW ≥ 3.2) in the 2012 Ferrara, Italy, seismic sequence. The seismic spectra observed for the relatively small main shock significantly exceeded the design spectra to be used in the area for critical structures. Observations and synthetics reveal that the ground motions are dominated by long-duration surface waves, which, apparently, the design codes do not adequately anticipate. In light of our results, the present seismic hazard assessment in the entire Pianura Padana, including the city of Milan, needs to be re-evaluated.

  2. Radiated Seismic Energy of Earthquakes in the South-Central Region of the Gulf of California, Mexico

    NASA Astrophysics Data System (ADS)

    Castro, Raúl R.; Mendoza-Camberos, Antonio; Pérez-Vertti, Arturo

    2018-05-01

    We estimated the radiated seismic energy (ES) of 65 earthquakes located in the south-central region of the Gulf of California. Most of these events occurred along active transform faults that define the Pacific-North America plate boundary and have magnitudes between M3.3 and M5.9. We corrected the spectral records for attenuation using nonparametric S-wave attenuation functions determined with the whole data set. The path effects were isolated from the seismic source using a spectral inversion. We computed the radiated seismic energy of the earthquakes by integrating the squared velocity source spectrum and estimated their apparent stresses. We found that most events have apparent stress between 3 × 10^-4 and 3 MPa. Model-independent estimates of the ratio between seismic energy and moment (ES/M0) indicate that this ratio is independent of earthquake size. We conclude that in general the apparent stress is low (σa < 3 MPa) in the south-central and southern Gulf of California.
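    The apparent stress mentioned above is commonly defined as sigma_a = mu * Es / M0 (Wyss and Brune, 1968). The sketch below assumes a shear modulus of 30 GPa and uses purely illustrative Es and M0 values, not numbers from the study.

    ```python
    def apparent_stress(es_joule, m0_newton_m, mu_pa=3.0e10):
        """Apparent stress in Pa: sigma_a = mu * Es / M0, with an assumed shear
        modulus mu (here ~30 GPa, a typical crustal value)."""
        return mu_pa * es_joule / m0_newton_m

    # Illustrative values: an Es/M0 ratio of 1e-5 gives 0.3 MPa with mu = 30 GPa
    print(apparent_stress(es_joule=1.0e11, m0_newton_m=1.0e16) / 1e6, "MPa")
    ```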

  3. Seismic shaking in the North China Basin expected from ruptures of a possible seismic gap

    NASA Astrophysics Data System (ADS)

    Duan, Benchun; Liu, Dunyu; Yin, An

    2017-05-01

    A 160 km long seismic gap, which has not ruptured in over 8000 years, was identified recently in North China. In this study, we use a dynamic source model and a newly available high-resolution 3-D velocity structure to simulate long-period ground motion (up to 0.5 Hz) from possible worst-case rupture scenarios of the seismic gap. We find that the characteristics of the earthquake source and the local geologic structure play a critical role in controlling the amplitude and distribution of the simulated strong ground shaking. Rupture directivity and slip asperities can result in large-amplitude (i.e., >1 m/s) ground shaking near the fault, whereas long-duration shaking may occur within sedimentary basins. In particular, a deep and closed Quaternary basin between Beijing and Tianjin can lead to ground shaking of several tens of cm/s for more than 1 min. These results may provide a sound basis for seismic mitigation in one of the most populated regions in the world.

  4. USGS National Seismic Hazard Maps

    USGS Publications Warehouse

    Frankel, A.D.; Mueller, C.S.; Barnhard, T.P.; Leyendecker, E.V.; Wesson, R.L.; Harmsen, S.C.; Klein, F.W.; Perkins, D.M.; Dickman, N.C.; Hanson, S.L.; Hopper, M.G.

    2000-01-01

    The U.S. Geological Survey (USGS) recently completed new probabilistic seismic hazard maps for the United States, including Alaska and Hawaii. These hazard maps form the basis of the probabilistic component of the design maps used in the 1997 edition of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, prepared by the Building Seismic Safety Council and published by FEMA. The hazard maps depict peak horizontal ground acceleration and spectral response at 0.2, 0.3, and 1.0 sec periods, with 10%, 5%, and 2% probabilities of exceedance in 50 years, corresponding to return times of about 500, 1000, and 2500 years, respectively. In this paper we outline the methodology used to construct the hazard maps. There are three basic components to the maps. First, we use spatially smoothed historic seismicity as one portion of the hazard calculation. In this model, we apply the general observation that moderate and large earthquakes tend to occur near areas of previous small or moderate events, with some notable exceptions. Second, we consider large background source zones based on broad geologic criteria to quantify hazard in areas with little or no historic seismicity, but with the potential for generating large events. Third, we include the hazard from specific fault sources. We use about 450 faults in the western United States (WUS) and derive recurrence times from either geologic slip rates or the dating of prehistoric earthquakes from trenching of faults or other paleoseismic methods. Recurrence estimates for large earthquakes in New Madrid and Charleston, South Carolina, were taken from recent paleoliquefaction studies. We used logic trees to incorporate different seismicity models, fault recurrence models, Cascadia great earthquake scenarios, and ground-motion attenuation relations. We present disaggregation plots showing the contribution to hazard at four cities from potential earthquakes with various magnitudes and distances.
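    The quoted correspondence between exceedance probabilities and return times follows from the Poisson assumption, T = -t / ln(1 - P). A short check:

    ```python
    import math

    def return_period(prob_exceedance, exposure_years):
        """Mean return period implied by a Poisson probability of exceedance
        over a given exposure time: T = -t / ln(1 - P)."""
        return -exposure_years / math.log(1.0 - prob_exceedance)

    for p in (0.10, 0.05, 0.02):
        print(f"{p:.0%} in 50 yr -> ~{return_period(p, 50):.0f} yr")
    # ~475, ~975 and ~2475 yr, i.e. the ~500/1000/2500-yr levels quoted above
    ```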

  5. Numerical broadband modelling of ocean waves, from 1 to 300 s: implications for seismic wave sources and wave climate studies

    NASA Astrophysics Data System (ADS)

    Ardhuin, F.; Stutzmann, E.; Gualtieri, L.

    2014-12-01

    Ocean waves provide most of the energy that feeds the continuous vertical oscillations of the solid Earth. Three period bands are usually identified. The hum contains periods longer than 30 s, and the primary and secondary peaks are usually centered around 15 and 5 s, respectively. Motions in all three bands are recorded everywhere on our planet and can provide information on both the solid Earth structure and the ocean wave climate over the past century. Here we describe recent efforts to extend the range of validity of ocean wave models to cover periods from 1 to 300 s (Ardhuin et al., Ocean Modelling 2014), and the resulting public database of ocean wave spectra (http://tinyurl.com/iowagaftp/HINDCAST/ ). We particularly discuss the sources of uncertainty in building a numerical model of acoustic and seismic noise based on this knowledge of ocean wave spectra. For acoustic periods shorter than 3 seconds, the main uncertainties are the directional distributions of wave energy (Ardhuin et al., J. Acoust. Soc. Amer. 2013). For intermediate periods (3 to 25 s), the propagation properties of seismic waves are probably the main source of error when producing synthetic spectra of Rayleigh waves (Ardhuin et al. JGR 2011, Stutzmann et al. GJI 2012). For the longer periods (25 to 300 s), the poor knowledge of the bottom topography details may be the limiting factor for estimating hum spectra or inverting hum measurements into properties of the infragravity wave field. All in all, the space and time variability of recorded seismic and acoustic spectra is generally well reproduced in the band 3 to 300 s, and work on shorter periods is under way. This direct model can be used to search for missing noise sources, such as wave scattering in the marginal ice zone, find events relevant for solid earth studies (e.g. Obrebski et al. JGR 2013), or invert wave climate properties from microseismic records. The figure shows measured spectra of the vertical ground acceleration and modeled results for the primary and secondary mechanisms using our numerical wave model: (a) median ground acceleration power spectra (LHZ channel) at the SSB seismic station (Geoscope Network) for the month of January 2008, (b) spectrogram of modeled ground displacement, and (c) measured spectrogram.

  6. W17_geonuc “Application of the Spectral Element Method to improvement of Ground-based Nuclear Explosion Monitoring”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Rougier, Esteban; Lei, Zhou

    This project is in support of the Source Physics Experiment (SPE) (Snelson et al. 2013), which aims to develop new seismic source models of explosions. One priority of this program is first-principles numerical modeling to validate and extend current empirical models.

  7. Source Analysis of the Crandall Canyon, Utah, Mine Collapse

    DOE PAGES

    Dreger, D. S.; Ford, S. R.; Walter, W. R.

    2008-07-11

    Analysis of seismograms from a magnitude 3.9 seismic event on August 6, 2007 in central Utah reveals an anomalous radiation pattern that is contrary to that expected for a tectonic earthquake, and which is dominated by an implosive component. The results show the seismic event is best modeled as a shallow underground collapse. Interestingly, large transverse surface waves require a smaller additional non-collapse source component that represents either faulting in the rocks above the mine workings or deformation of the medium surrounding the mine.

  8. Probabilistic Seismic Hazard Assessment for Himalayan-Tibetan Region from Historical and Instrumental Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui

    2018-02-01

    The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating the incomplete historical earthquake records along with the instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records reaching back more than 1000 years and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude M max, are estimated using a maximum likelihood algorithm that accounts for the incompleteness of the catalog. To compute the hazard values, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree that accounts for the epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years assuming bedrock conditions. The resulting PGA and SA maps show a significant spatio-temporal variation in the hazard values. In general, the hazard is found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of the historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
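    In its simplest mean-hazard form, the logic-tree combination described above reduces to a weighted average of the hazard curves from the individual (source model, ground motion model) branches. The curves and weights below are hypothetical, not those of the study.

    ```python
    import numpy as np

    # Hypothetical annual exceedance probabilities for a few PGA levels,
    # one row per logic-tree branch (source model x ground motion model).
    pga_levels = np.array([0.05, 0.1, 0.2, 0.4, 0.8])  # g
    branch_curves = np.array([
        [2e-2, 8e-3, 2e-3, 4e-4, 5e-5],
        [3e-2, 1e-2, 3e-3, 6e-4, 8e-5],
        [1e-2, 5e-3, 1e-3, 2e-4, 2e-5],
    ])
    branch_weights = np.array([0.5, 0.3, 0.2])  # epistemic weights, sum to 1

    mean_curve = branch_weights @ branch_curves  # weighted-mean hazard curve
    print(dict(zip(pga_levels.tolist(), mean_curve.round(6).tolist())))
    ```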

  9. Calibration of Seismic Sources during a Test Cruise with the new RV SONNE

    NASA Astrophysics Data System (ADS)

    Engels, M.; Schnabel, M.; Damm, V.

    2015-12-01

    During autumn 2014, several test cruises of the brand new German research vessel SONNE were carried out before the first official scientific cruise started in December. In September 2014, BGR conducted a seismic test cruise in the British North Sea. RV SONNE is a multipurpose research vessel and was also designed for the mobile BGR 3D seismic equipment, which was tested successfully during the cruise. We spent two days calibrating the following BGR seismic sources: a G-gun array (50 l @ 150 bar), a G-gun array (50 l @ 207 bar), and a single GI-gun (3.4 l @ 150 bar). For this experiment two hydrophones (TC4042 from Reson Teledyne) sampling up to 48 kHz were fixed below a drifting buoy at 20 m and 60 m water depth; the sea bottom was at 80 m depth. The vessel with the seismic sources sailed several profiles of up to 7 km length around the buoy in order to cover many different azimuths and distances. We aimed to measure sound pressure level (SPL) and sound exposure level (SEL) under the conditions of the shallow North Sea. Total reflections and refracted waves dominate the recorded wave field, enhance the noise level and partly screen the direct wave, in contrast to 'true' deep water calibration based solely on the direct wave. Presented are SPL and RMS power results in the time domain, the decay with distance along profiles, and the somewhat complicated 2D sound radiation pattern modulated by topography. The shading effect of the vessel's hull is significant. In the frequency domain we consider 1/3 octave levels and estimate the amount of energy in frequency ranges not used for reflection seismic processing. Results are presented for the three different sources listed above. We compare the measured SPL decay with distance during this experiment with deep water modeling of seismic sources (Gundalf software) and with published results from calibrations of other marine seismic sources under different conditions, e.g. Breitzke et al. (2008, 2010) with RV Polarstern, Tolstoy et al. (2004) with RV Ewing, Tolstoy et al. (2009) with RV Langseth, and Crone et al. (2014) with RV Langseth.
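    For reference, RMS sound pressure level and sound exposure level can be computed from a calibrated pressure record roughly as sketched below (reference pressure 1 µPa, the underwater convention). This is a generic sketch, not the actual processing applied to the TC4042 recordings.

    ```python
    import numpy as np

    def spl_sel(pressure_pa, fs, p_ref=1e-6):
        """RMS sound pressure level and sound exposure level of a pressure record.

        pressure_pa : pressure time series in Pa
        fs          : sampling rate in Hz
        Returns (SPL_rms in dB re 1 uPa, SEL in dB re 1 uPa^2 s).
        """
        p = np.asarray(pressure_pa, dtype=float)
        spl = 20.0 * np.log10(np.sqrt(np.mean(p ** 2)) / p_ref)
        sel = 10.0 * np.log10(np.sum(p ** 2) / fs / (p_ref ** 2 * 1.0))
        return spl, sel
    ```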

  10. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
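    Likelihood testing of a gridded rate forecast of this kind typically scores the observed earthquake counts against the forecast rates with a joint Poisson log-likelihood. A minimal sketch with hypothetical numbers:

    ```python
    import numpy as np
    from scipy.special import gammaln

    def poisson_log_likelihood(forecast_rates, observed_counts):
        """Joint Poisson log-likelihood of observed counts given per-cell
        forecast rates (the basic score used in likelihood tests of forecasts)."""
        lam = np.asarray(forecast_rates, dtype=float)
        n = np.asarray(observed_counts, dtype=float)
        return float(np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0)))

    # Hypothetical 4-cell forecast versus observed counts
    print(poisson_log_likelihood([0.5, 2.0, 0.1, 1.4], [1, 3, 0, 1]))
    ```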

  11. Parameter Prediction of Hydraulic Fracture for Tight Reservoir Based on Micro-Seismic and History Matching

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Ma, Xiaopeng; Li, Yanlai; Wu, Haiyang; Cui, Chenyu; Zhang, Xiaoming; Zhang, Hao; Yao, Jun

    Hydraulic fracturing is an important measure for the development of tight reservoirs. In order to describe the distribution of hydraulic fractures, micro-seismic monitoring was introduced into the petroleum field. Micro-seismic events can reveal important information about the static characteristics of hydraulic fracturing. However, this method only delineates the area covered by the hydraulic fractures and fails to provide specific parameters. Therefore, in this paper micro-seismic technology is integrated with history matching to predict the hydraulic fracture parameters. Micro-seismic source locations are used to describe the basic shape of the hydraulic fractures. After that, secondary modeling is used to calibrate the parameter information of the hydraulic fractures using a discrete fracture model (DFM) and history matching. In consideration of the fractal character of hydraulic fractures, a fractal fracture network model is established to evaluate this method in a numerical experiment. The results clearly show the effectiveness of the proposed approach for estimating the parameters of hydraulic fractures.

  12. The source of infrasound associated with long-period events at mount St. Helens

    USGS Publications Warehouse

    Matoza, R.S.; Garces, M.A.; Chouet, B.A.; D'Auria, L.; Hedlin, M.A.H.; De Groot-Hedlin, C.; Waite, G.P.

    2009-01-01

    During the early stages of the 2004-2008 Mount St. Helens eruption, the source process that produced a sustained sequence of repetitive long-period (LP) seismic events also produced impulsive broadband infrasonic signals in the atmosphere. To assess whether the signals could be generated simply by seismic-acoustic coupling from the shallow LP events, we perform finite difference simulation of the seismo-acoustic wavefield using a single numerical scheme for the elastic ground and atmosphere. The effects of topography, velocity structure, wind, and source configuration are considered. The simulations show that a shallow source buried in a homogeneous elastic solid produces a complex wave train in the atmosphere consisting of P/SV and Rayleigh wave energy converted locally along the propagation path, and acoustic energy originating from the source epicenter. Although the horizontal acoustic velocity of the latter is consistent with our data, the modeled amplitude ratios of pressure to vertical seismic velocity are too low in comparison with observations, and the characteristic differences in seismic and acoustic waveforms and spectra cannot be reproduced from a common point source. The observations therefore require a more complex source process in which the infrasonic signals are a record of only the broadband pressure excitation mechanism of the seismic LP events. The observations and numerical results can be explained by a model involving the repeated rapid pressure loss from a hydrothermal crack by venting into a shallow layer of loosely consolidated, highly permeable material. Heating by magmatic activity causes pressure to rise, periodically reaching the pressure threshold for rupture of the "valve" sealing the crack. Sudden opening of the valve generates the broadband infrasonic signal and simultaneously triggers the collapse of the crack, initiating resonance of the remaining fluid. Subtle waveform and amplitude variability of the infrasonic signals as recorded at an array 13.4 km to the NW of the volcano are attributed primarily to atmospheric boundary layer propagation effects, superimposed upon amplitude changes at the source. Copyright 2009 by the American Geophysical Union.

  13. GRACE gravity data help constraining seismic models of the 2004 Sumatran earthquake

    NASA Astrophysics Data System (ADS)

    Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.

    2011-10-01

    The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Using compressible self-gravitating Earth models that include sea level feedback in a new self-consistent way and are designed to compute gravitational perturbations due to volume changes separately, we are able to show that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, the gravitational effect of which is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, consisting of the peak-to-peak gravity anomaly and an asymmetry coefficient, defined as the ratio of the negative gravity anomaly to the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid-moment-tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle. Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided by GPS, complement each other for a robust inference of the seismic source of large earthquakes. Particular care is devoted to the spatial filtering of the gravity anomalies estimated both from observations and from models to make their comparison meaningful.

  14. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its restricted ability to image the three-dimensional structure of the mantle in detail or to determine the strengths of anomalies with certainty. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have become highlighted in recent years as a result of multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. Our knowledge of the size, distribution and location of protoliths, of the metasomatism of magma sources, of the nature of the partial-melting and melt-extraction process, of the mixing of disparate melts, and of the re-assimilation of crust and mantle lithosphere by rising melt is poor. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more when presenting interpretations so that the viability of the models can be assessed more reliably.

  15. Seismic processes and migration of magma during the Great Tolbachik Fissure Eruption of 1975-1976 and Tolbachik Fissure Eruption of 2012-2013, Kamchatka Peninsula

    NASA Astrophysics Data System (ADS)

    Fedotov, S. A.; Slavina, L. B.; Senyukov, S. L.; Kuchay, M. S.

    2015-12-01

    Seismic and volcanic processes in the area of the northern group of volcanoes (NGV) in the Kamchatka Peninsula that accompanied the Great Tolbachik Fissure Eruption (GTFE) of 1975-1976 and the Tolbachik Fissure Eruption (TFE, or "50 let IViS", named for the anniversary of the Institute of Volcanology and Seismology, Far East Branch, Russian Academy of Sciences) of 2012-2013, as well as the seismic activity between these events, are considered. The features of the evolution of seismic processes at the major NGV volcanoes (Ploskii Tolbachik, Klyuchevskoy, Bezymannyi, and Shiveluch) are revealed. The distribution of earthquakes with depth, their spatial and temporal migration, and the relation between seismic and volcanic activity are discussed. The major features of seismic activity during the preparation and evolution of the GTFE and the development of the earthquake series preceding the formation of the northern and southern breaks are described. The character of seismic activity between the GTFE and the TFE is shown. The major peculiarities of the evolution of seismic activity preceding and accompanying the TFE are described. The major magma sources and conduits of the NGV volcanoes are identified, as are a main conduit in the mantle and a common intermediate source for the entire NGV at a depth of 25-35 km according to seismic data. The depth of a neutral buoyancy layer below the NGV is 15-20 km, and the source of areal volcanism of magnesian basalts northeast of the Klyuchevskoy volcano is located at a depth of ~20 km. These data support the major properties of a 2010 geophysical model of the magmatic feeding system of the Klyuchevskoy group of volcanoes. The present paper covers a wider NGV area and is based on real experimental observations.

  16. Seismic source functions from free-field ground motions recorded on SPE: Implications for source models of small, shallow explosions

    NASA Astrophysics Data System (ADS)

    Rougier, Esteban; Patton, Howard J.

    2015-05-01

    Reduced displacement potentials (RDPs) for chemical explosions of the Source Physics Experiments (SPE) in granite at the Nevada Nuclear Security Site are estimated from free-field ground motion recordings. Far-field P wave source functions are proportional to the time derivative of RDPs. Frequency domain comparisons between measured source functions and model predictions show that high-frequency amplitudes roll off as ω^-2, but the models fail to predict the observed seismic moment, corner frequency, and spectral overshoot. All three features are fit satisfactorily for the SPE-2 test after the cavity radius Rc is reduced by 12%, the elastic radius is reduced by 58%, and the peak-to-static pressure ratio at the elastic radius is increased by 100%, all with respect to the Mueller-Murphy model modified with the Denny-Johnson Rc scaling law. A large discrepancy is found between the cavity volume inferred from RDPs and the volume estimated from laser scans of the emplacement hole. The measurements imply a scaled Rc of ~5 m/kt^1/3, more than a factor of 2 smaller than for nuclear explosions. Less than 25% of the seismic moment can be attributed to cavity formation. A breakdown of the incompressibility assumption due to shear dilatancy of the source medium around the cavity is the likely explanation. New formulas are developed for volume changes due to medium bulking (or compaction). A 0.04% decrease of average density inside the elastic radius accounts for the missing volumetric moment. Assuming incompressibility, established Rc scaling laws predicted the moment reasonably well, but this was only fortuitous because dilation of the source medium compensated for the small cavity volume.
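    To visualize the three spectral features discussed (long-period level, corner frequency and spectral overshoot, with an ω^-2 roll-off beyond the corner), the toy example below builds a schematic RDP as a damped-oscillator step response and inspects the spectrum of its time derivative. It is not the Mueller-Murphy or Denny-Johnson model, and all parameter values are arbitrary.

    ```python
    import numpy as np

    # Schematic reduced displacement potential: a damped-oscillator step response.
    # The far-field P source function is its time derivative; its amplitude
    # spectrum has a long-period level equal to the static RDP, a corner near
    # the oscillator frequency, overshoot near the corner, and omega^-2 decay.
    dt, n = 1e-3, 2 ** 14
    t = np.arange(n) * dt
    psi_inf, k, w = 1.0, 20.0, 2 * np.pi * 10.0   # arbitrary illustrative values
    psi = psi_inf * (1.0 - np.exp(-k * t) * (np.cos(w * t) + (k / w) * np.sin(w * t)))

    source_time_func = np.gradient(psi, dt)       # ~ far-field P source function
    freq = np.fft.rfftfreq(n, dt)
    spec = np.abs(np.fft.rfft(source_time_func)) * dt

    print("long-period level   ~", round(float(spec[1]), 3))         # ~ psi_inf
    print("overshoot (peak/LP) ~", round(float(spec.max() / spec[1]), 2))
    ```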

  17. Infrasound Generation from the HH Seismic Hammer.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Kyle Richard

    2014-10-01

    The HH Seismic hammer is a large, "weight-drop" source for active source seismic experiments. This system provides a repetitive source that can be stacked for subsurface imaging and exploration studies. Although the seismic hammer was designed for seismological studies it was surmised that it might produce energy in the infrasonic frequency range due to the ground motion generated by the 13 metric ton drop mass. This study demonstrates that the seismic hammer generates a consistent acoustic source that could be used for in-situ sensor characterization, array evaluation and surface-air coupling studies for source characterization.

  18. Iceberg capsize hydrodynamics and the source of glacial earthquakes

    NASA Astrophysics Data System (ADS)

    Kaluzienski, Lynn; Burton, Justin; Cathles, Mac

    2014-03-01

    Accelerated warming in the past few decades has led to an increase in dramatic, singular mass loss events from the Greenland and Antarctic ice sheets, such as the catastrophic collapse of ice shelves on the western Antarctic Peninsula and the calving and subsequent capsize of cubic-kilometer-scale icebergs in Greenland's outlet glaciers. The latter has been identified as the source of long-period seismic events classified as glacial earthquakes, which occur most frequently in Greenland's summer months. The ability to partially monitor polar mass loss through the Global Seismographic Network is quite attractive, yet this goal necessitates an accurate model of the source mechanism for glacial earthquakes. In addition, the detailed relationship between iceberg mass, geometry, and the measured seismic signal is complicated by inherent difficulties in collecting field data from remote, ice-choked fjords. To address this, we use a laboratory-scale model to measure aspects of the post-fracture calving process not observable in nature. Our results show that the combination of mechanical contact forces and hydrodynamic pressure forces generated by the capsize of an iceberg adjacent to a glacier's terminus produces a dipolar strain which is reminiscent of a single-couple seismic source.

  19. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps make it possible to account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the resulting error in peak values of ground motion is therefore a factor of two, intrinsic to MCS and other discrete scales. A simple test supports this hypothesis: an increase of 0.5 in the magnitude, i.e. one degree in epicentral MCS intensity, of all sources used in the national-scale seismic zoning produces a doubling of the maximum ground motion. The analysis of uncertainty in the ground motion maps, due to random catalogue errors in magnitude and location, shows a non-uniform distribution of ground shaking uncertainty. The available information from catalogues of past events, which is incomplete and may well not be representative of future earthquakes, can be substantially supplemented using independent indicators of the seismogenic potential of a given area, such as active faulting data and seismogenic nodes.

  20. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge of the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters far from the real locations, by distances even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they affect the decision to trigger a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2 and that its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depth, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
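    A minimal version of the source-specific station-correction idea is to take, for each station, the median travel-time residual of well-located reference events computed with the baseline model (e.g. IASPEI91) and subtract it from later observations for sources in the same region. A sketch with hypothetical residuals:

    ```python
    import numpy as np
    from collections import defaultdict

    def station_corrections(residuals):
        """Per-station corrections from well-located reference events.

        residuals : iterable of (station, observed_minus_predicted_time) pairs,
                    with predictions from the baseline 1-D travel-time model.
        Returns the median residual per station, to be subtracted from future
        observed travel times for sources in the same region.
        """
        by_station = defaultdict(list)
        for sta, res in residuals:
            by_station[sta].append(res)
        return {sta: float(np.median(r)) for sta, r in by_station.items()}

    # Hypothetical residuals (seconds) for two stations
    print(station_corrections([("ABC", 0.8), ("ABC", 0.6), ("XYZ", -1.2), ("XYZ", -0.9)]))
    ```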

  1. Updated Tomographic Seismic Imaging at Kilauea Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Okubo, P.; Johnson, J.; Felts, E. S.; Flores, N.

    2013-12-01

    Improved and more detailed geophysical, geological, and geochemical observations and measurements at Kilauea, along with prolonged eruptions at its summit caldera and east rift zone, are encouraging more ambitious interpretation and modeling of volcanic processes over a range of temporal and spatial scales. We are updating three-dimensional models of seismic wave-speed distributions within Kilauea using local earthquake arrival time tomography to support waveform-based modeling of seismic source mechanisms. We start from a tomographic model derived from a combination of permanent seismic stations comprising the Hawaiian Volcano Observatory (HVO) seismographic network and a dense deployment of temporary stations in the Kilauea caldera region in 1996. Using P- and S-wave arrival times measured from the HVO network for local earthquakes from 1997 through 2012, we compute velocity models with the finite difference tomographic seismic imaging technique implemented by Benz and others (1996), and applied to numerous volcanoes including Kilauea. Particular impetus to our current modeling was derived from a focused effort to review seismicity occurring in Kilauea's summit caldera and adjoining regions in 2012. Our results reveal clear P-wave low-velocity features at and slightly below sea level beneath Kilauea's summit caldera, lying between Halemaumau Crater and the north-facing scarps that mark the southern caldera boundary. The results are also suggestive of changes in seismic velocity distributions between 1996 and 2012. One example of such a change is an apparent decrease in the size and southeastward extent, compared to the earlier model, of the low VP feature imaged with the more recent data. However, we recognize the distinct possibility that these changes are reflective of differences in earthquake and seismic station distributions in the respective datasets, and we need to further populate the more recent HVO seismicity catalogs to possibly address this concern. We also look forward to more complete implementation at HVO of seismic imaging techniques that use ambient seismic noise retrieved from continuous seismic recordings, and to using earthquake arrival times and ambient seismic noise jointly to tomographically image Kilauea.

  2. Seismic behavior of an Italian Renaissance Sanctuary: Damage assessment by numerical modelling

    NASA Astrophysics Data System (ADS)

    Clementi, Francesco; Nespeca, Andrea; Lenci, Stefano

    2016-12-01

    The paper deals with modelling and analysis of architectural heritage through the discussion of an illustrative case study: the Medieval Sanctuary of Sant'Agostino (Offida, Italy). Using the finite element technique, a 3D numerical model of the sanctuary is built, and then used to identify the main sources of the damages. The work shows that advanced numerical analyses could offer significant information for the understanding of the causes of existing damage and, more generally, on the seismic vulnerability.

  3. Petascale computation of multi-physics seismic simulations

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.

    2017-04-01

    Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high-frequency ground motion. The simulations combine a multitude of representations of model complexity, such as non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our up to multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including: assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; usage of local time stepping; parallel input and output schemes; and direct interfaces to community-standard data formats. All these factors help to minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis. Lastly, we will conclude with an outlook on future exascale ADER-DG solvers for seismological applications.

  4. Lunar seismic profiling experiment natural activity study

    NASA Technical Reports Server (NTRS)

    Duennebier, F. K.

    1976-01-01

    The Lunar Seismic Profiling Experiment (LSPE) Natural Activity Study has provided a unique opportunity to study the high-frequency (4-20 Hz) portion of the seismic spectrum on the moon. The data obtained from the LSPE were studied to evaluate the origin and importance of the process that generates thermal moonquakes and the characteristics of the seismic scattering zone at the lunar surface. The detection of thermal moonquakes by the LSPE array made it possible to locate the sources of many events and to determine that they are definitely not generated by astronaut activities but are the result of a natural process on the moon. The propagation of seismic waves in the near-surface layers was studied in a qualitative manner. In the absence of an adequate theoretical model for the propagation of seismic waves in the moon, it is not possible to assign a depth to the scattering layer. The LSPE data do, however, define several parameters which must be satisfied by any model developed in the future.

  5. Interpretation of Microseismicity Observed From Surface and Borehole Seismic Arrays During Hydraulic Fracturing in Shale - Bedding Plane Slip Model

    NASA Astrophysics Data System (ADS)

    Stanek, F.; Jechumtalova, Z.; Eisner, L.

    2017-12-01

    We present a geomechanical model explaining microseismicity induced by hydraulic fracturing in shales, developed from many datasets acquired with the two most common types of seismic monitoring arrays: surface and dual-borehole arrays. The geomechanical model explains the observed source mechanisms and locations of induced events from two stimulated shale reservoirs. We observe shear dip-slip source mechanisms with nodal planes aligned with the location trends. We show that such seismicity can be explained as shearing along bedding planes caused by aseismic opening of vertical hydraulic fractures. The source mechanism inversion was applied only to selected high-quality events with sufficient signal-to-noise ratio. We inverted P- and S-wave arrival amplitudes for the full moment tensor and decomposed it into shear, volumetric and compensated linear vector dipole components. We also tested the effect of noise present in the data to evaluate the reliability of the non-shear components. The observed seismicity from both surface and downhole monitoring of shale stimulations is very similar. The locations of induced microseismic events are limited to narrow depth intervals and propagate along distinct trend(s), showing fracture propagation in the direction of maximum horizontal stress away from the injection well(s). The source mechanisms have a small non-shear component which can be partly explained as an effect of noise in the data, i.e. the events represent shearing on faults. We observe predominantly dip-slip events with the strike of the steeper (almost vertical) nodal plane parallel to the fracture propagation. Therefore the other possible nodal plane is almost horizontal. The rake angles of the observed mechanisms divide these dip-slips into two groups with opposite polarities, meaning that we observe opposite movements on nearly identically oriented faults. Recognizing the typical structural weakness of shale along horizontal planes, we interpret the observed microseismicity as the result of shearing along bedding planes caused by seismically silent (aseismic) vertical fracture opening.
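    A common way to obtain the shear/volumetric/CLVD split mentioned above is to remove the isotropic part of the moment tensor and measure the deviatoric remainder with the epsilon parameter. The sketch below follows one widely used convention (exact percentage definitions vary between authors) and is not the inversion code used in the study.

    ```python
    import numpy as np

    def decompose_moment_tensor(m):
        """Isotropic/deviatoric split plus DC and CLVD fractions of the deviator.

        Uses eps = -lam_min_abs / |lam_max_abs| of the deviatoric eigenvalues,
        so DC = 1 - 2|eps| and CLVD = 2|eps| (one common convention)."""
        m = np.asarray(m, dtype=float)
        iso = np.trace(m) / 3.0                     # volumetric part
        dev = m - iso * np.eye(3)
        lam = np.linalg.eigvalsh(dev)               # deviatoric eigenvalues
        lam = lam[np.argsort(np.abs(lam))]          # sort by absolute value
        eps = -lam[0] / abs(lam[2])
        return {"iso": iso, "dc": 1.0 - 2.0 * abs(eps), "clvd": 2.0 * abs(eps)}

    # Pure double couple: expect iso = 0, dc ~ 1, clvd ~ 0
    print(decompose_moment_tensor([[0, 1, 0], [1, 0, 0], [0, 0, 0]]))
    ```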

  6. Kinematic Seismic Rupture Parameters from a Doppler Analysis

    NASA Astrophysics Data System (ADS)

    Caldeira, Bento; Bezzeghoud, Mourad; Borges, José F.

    2010-05-01

    The radiation emitted from extended seismic sources, particularly when the rupture propagates in a preferred direction, presents spectral deviations that depend on the observation location. This effect, which is absent for point sources and is known as directivity, is manifested by an increase in the frequency and amplitude of seismic waves when the rupture propagates toward the seismic station and by a decrease in frequency and amplitude when it propagates in the opposite direction. The model of directivity that supports the method is a Doppler analysis based on a kinematic source model of rupture and wave propagation through a structural medium with spherical symmetry [1]. A unilateral rupture can be viewed as a sequence of shocks produced along certain paths on the fault. According to this model, the seismic record at any point on the Earth's surface contains a signature of the rupture process that originated the recorded waveform. Calculating the rupture direction and velocity from a general Doppler equation - the goal of this work - using a dataset of common time-delays read from waveforms recorded at different distances around the epicenter requires normalizing the measurements to a standard value of slowness. This normalization involves a non-linear inversion that we solve numerically using an iterative least-squares approach. The performance of the technique was evaluated through a set of synthetic and real applications. We present the application of the method to four real case studies, the following earthquakes: Arequipa, Peru (Mw = 8.4, June 23, 2001); Denali, AK, USA (Mw = 7.8, November 3, 2002); Zemmouri-Boumerdes, Algeria (Mw = 6.8, May 21, 2003); and Sumatra, Indonesia (Mw = 9.3, December 26, 2004). The results obtained from the dataset of the four earthquakes agree, in general, with the values presented by other authors using different methods and data. [1] Caldeira B., Bezzeghoud M., Borges J.F., 2009; DIRDOP: a directivity approach to determining the seismic rupture velocity vector. J. Seismology, DOI 10.1007/s10950-009-9183-x
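    The directivity effect described here is often summarized by the apparent-duration relation for a unilateral rupture, t_app = (L / vr) * (1 - (vr / c) * cos(theta)). The sketch below evaluates it for illustrative parameters, not values from the four case studies.

    ```python
    import numpy as np

    def apparent_duration(fault_length_km, rupture_vel_kms, phase_vel_kms, theta_deg):
        """Apparent source duration at angle theta from the rupture direction for
        a unilateral rupture: stations ahead of the rupture (theta ~ 0) see short,
        high-frequency pulses; stations behind it (theta ~ 180) see long ones."""
        theta = np.radians(theta_deg)
        return (fault_length_km / rupture_vel_kms) * (
            1.0 - (rupture_vel_kms / phase_vel_kms) * np.cos(theta))

    # Illustrative: 100 km rupture at 2.5 km/s, observed with a 4 km/s phase velocity
    for az in (0, 90, 180):
        print(az, "deg:", round(apparent_duration(100.0, 2.5, 4.0, az), 1), "s")
    ```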

  7. Spatial extent of a hydrothermal system at Kilauea Volcano, Hawaii, determined from array analyses of shallow long-period seismicity 2. Results

    USGS Publications Warehouse

    Almendros, J.; Chouet, B.; Dawson, P.

    2001-01-01

    Array data from a seismic experiment carried out at Kilauea Volcano, Hawaii, in February 1997, are analyzed by the frequency-slowness method. The slowness vectors are determined at each of three small-aperture seismic antennas for the first arrivals of 1129 long-period (LP) events and 147 samples of volcanic tremor. The source locations are determined by using a probabilistic method which compares the event azimuths and slownesses with a slowness vector model. The results show that all the LP seismicity, including both discrete LP events and tremor, was generated in the same source region along the east flank of the Halemaumau pit crater, demonstrating the strong relation that exists between the two types of activities. The dimensions of the source region are approximately 0.6 x 1.0 x 0.5 km. For LP events we are able to resolve at least three different clusters of events. The most active cluster is centered ~200 m northeast of Halemaumau at depths shallower than 200 m beneath the caldera floor. A second cluster is located beneath the northeast quadrant of Halemaumau at a depth of ~400 m. The third cluster is <200 m deep and extends southeastward from the northeast quadrant of Halemaumau. Only one source zone is resolved for tremor. This zone is coincident with the most active source zone of LP events, northeast of Halemaumau. The location, depth, and size of the source region suggest a hydrothermal origin for all the analyzed LP seismicity. Copyright 2001 by the American Geophysical Union.

  8. Very-long-period seismic signals at the Tatun Volcano Group, northern Taiwan

    NASA Astrophysics Data System (ADS)

    Lin, C. H.; Pu, H. C.

    2016-12-01

    Very-long-period (VLP) seismic events have been detected in the Tatun Volcano Group (TVG), located around the border between Taipei City and New Taipei City in northern Taiwan, an area with 7 million residents. By using both particle-motion analysis and travel-time delays, the source of one VLP volcanic earthquake is estimated to lie at a shallow depth (~800 m) beneath Mt. Chihsin, which is the highest and youngest volcano in the TVG. The significant variation of seismic energy with azimuth provides strong evidence to distinguish a crack source from other kinds of sources, such as a sphere, a vertical pipe or even a double-couple source. This is further confirmed by synthetic modeling of the seismograms recorded at two stations as well as by the compressional first motions at three seismic stations. Thus, a deeper plumbing system with high-pressure fluid is required to generate the VLP signals and other volcanic earthquakes in the TVG. Combining this result with those of previous studies, we conclude that the TVG might not be totally extinct and that further investigations must be carried out to improve understanding of the volcanic characteristics of the TVG.

  9. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
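
    A hypothetical sketch of the kind of rapid loss estimate described above, combining a PGA field with an exposure proxy through a lognormal mean-damage-ratio curve; all numbers are illustrative assumptions rather than the calibrated model of the abstract.

    ```python
    # Toy rapid-loss calculation: ground motion x exposure x damage-ratio curve.
    import numpy as np
    from scipy.stats import lognorm

    pga = np.array([0.05, 0.12, 0.30, 0.55])                  # g, at four populated cells
    exposed_value = np.array([2.0, 5.0, 1.5, 0.8]) * 1e9      # USD, exposure proxy

    # mean damage ratio modeled as a lognormal CDF of PGA (median 0.4 g, beta 0.6)
    mean_damage_ratio = lognorm.cdf(pga, s=0.6, scale=0.4)
    loss = mean_damage_ratio * exposed_value
    print(f"estimated total loss: {loss.sum() / 1e9:.2f} billion USD")
    ```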

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellors, R J

    The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveying, such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models is presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.
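
    A simple illustration of the synthetic-modeling idea (not the report's scripts): a 1-D convolutional synthetic seismogram for a layered model that contains a thin low-velocity cavity layer, using a Ricker wavelet. All model values are assumptions.

    ```python
    # 1-D convolutional synthetic: reflectivity from an assumed layered model
    # with an air-filled cavity layer, convolved with a 60-Hz Ricker wavelet.
    import numpy as np

    def ricker(f0, dt, length=0.128):
        t = np.arange(-length / 2, length / 2, dt)
        return (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

    dt = 0.001                                     # s
    vel = np.array([2000., 2500., 340., 2500.])    # m/s (air-filled cavity = 340 m/s)
    rho = np.array([2000., 2200., 1.2, 2200.])     # kg/m3
    thick = np.array([50., 30., 5., 100.])         # m

    # two-way times and reflection coefficients at each interface
    twt = 2 * np.cumsum(thick / vel)[:-1]
    z = vel * rho
    rc = (z[1:] - z[:-1]) / (z[1:] + z[:-1])

    trace = np.zeros(int(0.5 / dt))
    trace[(twt / dt).astype(int)] = rc
    synthetic = np.convolve(trace, ricker(60.0, dt), mode="same")
    print("peak reflection amplitude:", np.abs(synthetic).max())
    ```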

  11. Earthquake Source Parameter Estimates for the Charlevoix and Western Quebec Seismic Zones in Eastern Canada

    NASA Astrophysics Data System (ADS)

    Onwuemeka, J.; Liu, Y.; Harrington, R. M.; Peña-Castro, A. F.; Rodriguez Padilla, A. M.; Darbyshire, F. A.

    2017-12-01

    The Charlevoix Seismic Zone (CSZ), located in eastern Canada, experiences a high rate of intraplate earthquakes, having hosted more than six M > 6 events since the 17th century. The seismicity rate is similarly high in the Western Quebec seismic zone (WQSZ), where an MN 5.2 event was reported on May 17, 2013. A good understanding of seismicity and its relation to the St. Lawrence paleorift system requires information about event source properties, such as static stress drop and fault orientation (via focal mechanism solutions). In this study, we conduct a systematic estimate of event source parameters using 1) hypoDD to relocate event hypocenters, 2) spectral analysis to derive corner frequencies, magnitudes, and hence static stress drops, and 3) first-arrival polarities to derive focal mechanism solutions for selected events. We use a combined dataset of 817 earthquakes cataloged between June 2012 and May 2017 from the Canadian National Seismograph Network (CNSN) and from temporary deployments of the QM-III Earthscope FlexArray and McGill seismic networks. We first relocate 450 events using P- and S-wave differential travel times refined with waveform cross-correlation, and compute focal mechanism solutions for all events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for all events. We choose the final corner frequency and moment values for each event using the median estimate over all stations. We use the corner frequency and moment estimates to calculate moment magnitudes, static stress-drop values and rupture radii, assuming a circular rupture model. We also investigate scaling relationships between parameters and directivity, and compute apparent source dimensions and source time functions of 15 M 2.4+ events from second-degree moment estimates. To first order, source dimension estimates from both methods generally agree. We observe higher corner frequencies and higher stress drops (ranging from 20 to 70 MPa), typical of intraplate seismicity in comparison with interplate seismicity. We follow a similar approach to study 25 MN 3+ events reported in the WQSZ using data recorded by the CNSN and the USArray Transportable Array.
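
    A minimal sketch of the stress-drop step, assuming a circular Brune-type rupture so that the corner frequency and seismic moment give a source radius and a static stress drop; the shear-wave speed and the constant k are assumed values.

    ```python
    # Static stress drop for a circular rupture: r = k * beta / fc,
    # delta_sigma = 7 M0 / (16 r^3).  beta and k are assumed values.
    import numpy as np

    beta = 3600.0        # shear-wave speed near the source (m/s), assumed
    k = 0.37             # constant relating corner frequency to radius, assumed

    def stress_drop(moment_nm, corner_freq_hz):
        """Return source radius (m) and static stress drop (MPa)."""
        r = k * beta / corner_freq_hz
        dsigma = 7.0 * moment_nm / (16.0 * r ** 3)
        return r, dsigma / 1e6

    m0 = 10 ** (1.5 * 3.0 + 9.1)                  # moment (N m) for Mw 3.0
    r, dsig = stress_drop(m0, corner_freq_hz=8.0)
    print(f"radius ~ {r:.0f} m, stress drop ~ {dsig:.1f} MPa")
    ```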

  12. Documentation for the 2014 update of the United States national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter M.; Mueller, Charles S.; Haller, Kathleen M.; Frankel, Arthur D.; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen C.; Boyd, Oliver S.; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nico; Wheeler, Russell L.; Williams, Robert A.; Olsen, Anna H.

    2014-01-01

    The national seismic hazard maps for the conterminous United States have been updated to account for new methods, models, and data that have been obtained since the 2008 maps were released (Petersen and others, 2008). The input models are improved from those implemented in 2008 by using new ground motion models that have incorporated about twice as many earthquake strong ground shaking data and by incorporating many additional scientific studies that indicate broader ranges of earthquake source and ground motion models. These time-independent maps are shown for 2-percent and 10-percent probability of exceedance in 50 years for peak horizontal ground acceleration as well as 5-hertz and 1-hertz spectral accelerations with 5-percent damping on a uniform firm rock site condition (760 meters per second shear wave velocity in the upper 30 m, VS30). In this report, the 2014 updated maps are compared with the 2008 version of the maps and indicate changes of plus or minus 20 percent over wide areas, with larger changes locally, caused by the modifications to the seismic source and ground motion inputs.

  13. Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.

    2017-12-01

    Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimates, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches against observations and GMPEs is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and of seismic wave propagation in the 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and the resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates a realistic ω^-2 decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with the local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture reproduces well the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time histories for simulation-based PSHA.
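
    One ingredient named above, sketched under simple assumptions: a self-affine (fractal) fault-roughness profile generated by spectral synthesis. The Hurst exponent and roughness-to-length ratio are assumed values, not those of the study.

    ```python
    # Self-affine roughness profile: 1-D power spectrum P(k) ~ k^-(1+2H), built
    # from a power-law amplitude spectrum with random phases.
    import numpy as np

    rng = np.random.default_rng(0)
    n, dx = 2048, 25.0            # samples and spacing (m) -> ~51 km profile
    hurst = 0.8                   # assumed Hurst exponent
    alpha = 1e-2                  # assumed RMS roughness / profile length

    k = np.fft.rfftfreq(n, d=dx)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(0.5 + hurst))          # amplitude spectrum ~ k^-(0.5+H)
    phase = rng.uniform(0, 2 * np.pi, k.size)
    profile = np.fft.irfft(amp * np.exp(1j * phase), n=n)

    # rescale to the desired roughness-to-length ratio
    profile *= alpha * (n * dx) / profile.std()
    print("RMS roughness (m):", round(profile.std(), 1))
    ```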

  14. Utilizing the R/V Marcus G. Langseth's streamer to measure the acoustic radiation of its seismic source in the shallow waters of New Jersey's continental shelf.

    PubMed

    Crone, Timothy J; Tolstoy, Maya; Gibson, James C; Mountain, Gregory

    2017-01-01

    Shallow water marine seismic surveys are necessary to understand a range of Earth processes in coastal environments, including those that represent major hazards to society such as earthquakes, tsunamis, and sea-level rise. Predicting the acoustic radiation of seismic sources in shallow water, which is required for compliance with regulations designed to limit impacts on protected marine species, is a significant challenge in this environment because of variable reflectivity due to local geology and the susceptibility of relatively small bathymetric features to focus or shadow acoustic energy. We use data from the R/V Marcus G. Langseth's towed hydrophone streamer to estimate the acoustic radiation of the ship's seismic source during a large survey of the shallow shelf off the coast of New Jersey. We use the results to estimate the distances from the source to acoustic levels of regulatory significance, and use bathymetric data from the ship's multibeam system to explore the relationships between seafloor depth and slope and the measured acoustic radiation patterns. We demonstrate that existing models significantly overestimate mitigation radii, but the variability of received levels in shallow water suggests that in situ real-time measurements would help improve these estimates, and that post-cruise revisions of received levels are valuable in accurately determining the potential acoustic impact of a seismic survey.

  15. Seismic Acoustic Ratio Estimates Using a Moving Vehicle Source

    DTIC Science & Technology

    1999-08-01

    The high seismic-acoustic ratio (SAR) values are likely due to acoustic-to-seismic coupling in a shallow, air-filled poroelastic layer (e.g., Sabatier et al., 1986b). More complex earth models, such as those incorporating layering and poroelastic material, are also considered (e.g., Albert, 1993; Attenborough, 1985).

  16. How sensitive is earthquake ground motion to source parameters? Insights from a numerical study in the Mygdonian basin

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; deMartin, Florent; Hollender, Fabrice; Guyonnet-Benaize, Cédric; Manakou, Maria; Savvaidis, Alexandros; Kiratzi, Anastasia; Roumelioti, Zaferia; Theodoulidis, Nikos

    2014-05-01

    Understanding the origin of the variability of earthquake ground motion is critical for seismic hazard assessment. Here we present the results of a numerical analysis of the sensitivity of earthquake ground motion to seismic source parameters, focusing on the Mygdonian basin near Thessaloniki (Greece). We use an extended model of the basin (65 km [EW] x 50 km [NS]) which was developed during the Euroseistest Verification and Validation Project. The numerical simulations are performed with two independent codes, both implementing the Spectral Element Method. They rely on a robust, semi-automated mesh design strategy together with a simple homogenization procedure to define a smooth velocity model of the basin. Our simulations are accurate up to 4 Hz and include the effects of surface topography and of intrinsic attenuation. Two kinds of simulations are performed: (1) direct simulations of the surface ground motion for real regional events with various back-azimuths with respect to the center of the basin; (2) reciprocity-based calculations in which the ground motion due to 980 different seismic sources is computed at a few stations in the basin. In the reciprocity-based calculations, we consider epicentral distances varying from 2.5 km to 40 km and source depths from 1 km to 15 km, and we span the range of possible back-azimuths with a 10-degree bin. We will present results showing (1) the sensitivity of ground motion parameters to the location and focal mechanism of the seismic sources; and (2) the variability of the amplification caused by site effects, as measured by standard spectral ratios, with respect to the source characteristics.

  17. Extension of Characterized Source Model for Broadband Strong Ground Motion Simulations (0.1-50s) of M9 Earthquake

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2014-12-01

    After the 2011 Tohoku earthquake in Japan (Mw 9.0), many papers on the source model of this mega subduction earthquake were published. From our study on the modeling of strong motion waveforms in the period range 0.1-10 s, four isolated strong motion generation areas (SMGAs) were identified in the area deeper than 25 km (Asano and Iwata, 2012). The locations of these SMGAs were found to correspond to the asperities of M7-class events in the 1930s. However, many studies on kinematic rupture modeling using seismic, geodetic and tsunami data revealed the existence of a large slip area extending from the trench to the hypocenter (e.g., Fujii et al., 2011; Koketsu et al., 2011; Shao et al., 2011; Suzuki et al., 2011). That is, the excitation of seismic waves is spatially different in the long- and short-period ranges, as already discussed by Lay et al. (2012) and related studies. The Tohoku earthquake thus raised a new issue concerning the relationship between strong motion generation and the fault rupture process, and resolving it is important to advance source modeling for future strong motion prediction. Our previous source model consists of four SMGAs, and observed ground motions in the period range 0.1-10 s are explained well by this source model. We tried to extend this source model to explain the observed ground motions over a wider period range with a simple assumption, referring to our previous study and to the concept of the characterized source model (Irikura and Miyake, 2001, 2011). We obtained a characterized source model which has four SMGAs in the deep part, one large slip area in the shallow part, and a background area with low slip. The seismic moment of this source model is equivalent to Mw 9.0. The strong ground motions are simulated by the empirical Green's function method (Irikura, 1986). Although the longest period limit is restricted by the signal-to-noise ratio of the EGF event (Mw ~6.0) records, this new source model succeeded in reproducing the observed waveforms and Fourier amplitude spectra in the period range 0.1-50 s. The location of this large slip area seems to overlap the source regions of the historical events of 1793 and 1897 off the Sanriku area. We think the source model for strong motion prediction of an Mw 9 event could be constructed by combining hierarchical multiple asperities or source patches related to historical events in this region.

  18. Coupling Hydrodynamic and Wave Propagation Codes for Modeling of Seismic Waves recorded at the SPE Test.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.

    2016-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. For this, the SPE program includes a strong modeling effort based on first-principles calculations, with the challenge of capturing both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near-source region, we employ a fully coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for the simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with high numerical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We will present validation tests with Sharpe's model and comparisons of modeled waveforms with Rg waves (2-8 Hz) recorded up to 2 km away for SPE. We especially show effects of the local topography, velocity structure and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to purely elastic models such as Denny & Johnson (1991).

  19. Source Parameters Inversion of the 2012 LA VEGA Colombia mw 7.2 Earthquake Using Near-Regional Waveform Data

    NASA Astrophysics Data System (ADS)

    Pedraza, P.; Poveda, E.; Blanco Chia, J. F.; Zahradnik, J.

    2013-05-01

    On September 30, 2012, an earthquake of magnitude Mw 7.2 occurred at a depth of ~170 km in the southeast of Colombia. This seismic event is associated with the Nazca plate drifting eastward relative to the South America plate. The distribution of seismicity obtained by the National Seismological Network of Colombia (RSNC) since 1993 shows a segmented subduction zone with varying dip angles. The earthquake occurred in a seismic gap zone at intermediate depth. The recent deployment of broadband seismic stations in Colombia, as part of the Colombian Seismological Network operated by the Colombian Survey, has provided high-quality data to study the rupture process. We estimated the moment tensor, the centroid position, and the source time function. The parameters were obtained by inverting waveforms recorded by the RSNC at distances of 100 km to 800 km, modeled at 0.01-0.09 Hz using different 1D crustal models and taking advantage of the ISOLA code. The DC percentage of the earthquake is very high (~90%). The focal mechanism is mostly normal, hence the determination of the fault plane is challenging. An attempt to determine the fault plane was made based on the mutual relative position of the centroid and hypocenter (H-C method). Studies in progress are devoted to searching for possible complexity of the fault rupture process (total duration of about 15 seconds), quantified by multiple-point source models.

  20. Probabilistic Seismic Risk Model for Western Balkans

    NASA Astrophysics Data System (ADS)

    Stejskal, Vladimir; Lorenzo, Francisco; Pousse, Guillaume; Radovanovic, Slavica; Pekevski, Lazo; Dojcinovski, Dragi; Lokin, Petar; Petronijevic, Mira; Sipka, Vesna

    2010-05-01

    A probabilistic seismic risk model for insurance and reinsurance purposes is presented for the Western Balkans, covering the former Yugoslavia and Albania. This territory experienced many severe earthquakes during past centuries, producing significant damage to many population centres in the region. The highest hazard is related to the External Dinarides, namely the collision zone of the Adriatic plate. The model is based on a unified earthquake catalogue for the region and a seismic source model consisting of more than 30 zones covering the three main structural units - the Southern Alps, the Dinarides and the south-western margin of the Pannonian Basin. A probabilistic methodology using Monte Carlo simulation was applied to generate the hazard component of the model. A unique set of damage functions, based on both loss experience and engineering assessments, is used to convert the modelled ground motion severity into monetary loss.
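
    A minimal sketch, with assumed parameter values, of the Monte Carlo hazard step: simulate many years of seismicity from a truncated Gutenberg-Richter source zone, attenuate the ground motion to a site with a toy relation, and count exceedances of a PGA threshold.

    ```python
    # Monte Carlo hazard: synthetic catalogue + toy ground-motion model.
    import numpy as np

    rng = np.random.default_rng(42)
    years, rate_m4 = 100_000, 0.5        # catalogue length (yr) and annual rate of M>=4
    b, mmin, mmax = 1.0, 4.0, 7.5        # assumed Gutenberg-Richter parameters

    n = rng.poisson(rate_m4 * years)
    # truncated Gutenberg-Richter magnitudes via inverse transform
    u = rng.random(n)
    mags = mmin - np.log10(1 - u * (1 - 10 ** (-b * (mmax - mmin)))) / b
    dists = rng.uniform(5.0, 150.0, n)   # epicentral distances (km) within the zone

    # toy ground-motion model: ln PGA(g) = -3.5 + 1.0*M - 1.2*ln(R) + noise
    ln_pga = -3.5 + 1.0 * mags - 1.2 * np.log(dists) + rng.normal(0, 0.6, n)
    annual_exceed = np.sum(np.exp(ln_pga) > 0.2) / years
    print(f"annual rate of PGA > 0.2 g: {annual_exceed:.5f}")
    ```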

  1. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  2. Toward Building a New Seismic Hazard Model for Mainland China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.

    2015-12-01

    At present, the only publicly available seismic hazard model for mainland China is the one generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to the present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to the locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data; the rest are distributed to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPEs) by comparison with observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonic domains. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplification, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and for business and land-use planning.
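
    An illustrative sketch of drawing synthetic events from a tapered Gutenberg-Richter (tapered Pareto) moment distribution. It exploits the fact that the TGR survival function factors into a Pareto term and an exponential taper, so a sample is the minimum of one draw from each; the threshold and corner moments are assumed values.

    ```python
    # Tapered Gutenberg-Richter sampling: S(m) = (m_t/m)^beta * exp((m_t - m)/m_c),
    # obtained as the survival function of min(Pareto, shifted Exponential).
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_tgr_moment(n, beta=0.67, m_threshold=10**16.05, m_corner=10**20.0):
        """Draw n seismic moments (N m) from a tapered Gutenberg-Richter law."""
        pareto = m_threshold * rng.random(n) ** (-1.0 / beta)
        taper = m_threshold + rng.exponential(m_corner, n)
        return np.minimum(pareto, taper)

    moments = sample_tgr_moment(100_000)
    mw = (2.0 / 3.0) * (np.log10(moments) - 9.1)
    print("largest simulated Mw:", mw.max().round(2))
    ```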

  3. Source characterization and dynamic fault modeling of induced seismicity

    NASA Astrophysics Data System (ADS)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years, there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal to assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve the spontaneous long-term slip history on a fault segment at all stages of the seismic cycle. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
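
    A minimal sketch of the rate-and-state framework mentioned above: the frictional response to an imposed velocity step, with the state variable evolved by the aging law. The parameter values are typical laboratory-scale assumptions, not those of the cited model.

    ```python
    # Rate-and-state friction with the aging law:
    #   mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),  d(theta)/dt = 1 - V*theta/Dc
    import numpy as np

    mu0, a, b = 0.6, 0.010, 0.015      # b > a  ->  velocity weakening
    v0, dc = 1e-6, 1e-5                # reference slip rate (m/s), critical distance (m)

    dt, nsteps = 1e-2, 40_000
    v = np.full(nsteps, 1e-6)
    v[nsteps // 2:] = 1e-5             # ten-fold velocity step halfway through

    theta = dc / v[0]                  # steady-state value for the initial velocity
    mu = np.empty(nsteps)
    for i in range(nsteps):
        theta += dt * (1.0 - v[i] * theta / dc)          # aging law (explicit Euler)
        mu[i] = mu0 + a * np.log(v[i] / v0) + b * np.log(v0 * theta / dc)

    # steady-state change after the step is (a - b)*ln(10) < 0, i.e. weakening
    print("steady-state friction change:", round(mu[-1] - mu[nsteps // 2 - 1], 4))
    ```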

  4. Slab1.0: A three-dimensional model of global subduction zone geometries

    NASA Astrophysics Data System (ADS)

    Hayes, Gavin P.; Wald, David J.; Johnson, Rebecca L.

    2012-01-01

    We describe and present a new model of global subduction zone geometries, called Slab1.0. An extension of previous efforts to constrain the two-dimensional non-planar geometry of subduction zones around the focus of large earthquakes, Slab1.0 describes the detailed, non-planar, three-dimensional geometry of approximately 85% of subduction zones worldwide. While the model focuses on the detailed form of each slab from their trenches through the seismogenic zone, where it combines data sets from active source and passive seismology, it also continues to the limits of their seismic extent in the upper-mid mantle, providing a uniform approach to the definition of the entire seismically active slab geometry. Examples are shown for two well-constrained global locations; models for many other regions are available and can be freely downloaded in several formats from our new Slab1.0 website, http://on.doi.gov/d9ARbS. We describe improvements in our two-dimensional geometry constraint inversion, including the use of `average' active source seismic data profiles in the shallow trench regions where data are otherwise lacking, derived from the interpolation between other active source seismic data along-strike in the same subduction zone. We include several analyses of the uncertainty and robustness of our three-dimensional interpolation methods. In addition, we use the filtered, subduction-related earthquake data sets compiled to build Slab1.0 in a reassessment of previous analyses of the deep limit of the thrust interface seismogenic zone for all subduction zones included in our global model thus far, concluding that the width of these seismogenic zones is on average 30% larger than previous studies have suggested.

  5. Characteristic Analysis of Air-gun Source Wavelet based on the Vertical Cable Data

    NASA Astrophysics Data System (ADS)

    Xing, L.

    2016-12-01

    Air guns are important sources for marine seismic exploration. Far-field wavelets of air gun arrays, as a necessary input for pre-stack processing and source models, play an important role in marine seismic data processing and interpretation. When an air gun fires, it generates a series of air bubbles. As in onshore seismic exploration, the water behaves as a plastic fluid near the bubble; the farther the receiver is from the air gun, the more stable and more accurately represented the measured wavelet will be. In practice, hydrophones should be placed more than 100 m from the air gun; however, traditional seismic cables cannot meet this requirement. Vertical cables, on the other hand, provide a viable solution to this problem. This study uses a vertical cable to receive wavelets from 38 air guns, with data collected offshore Southeast Qiong, where the water depth is over 1000 m. The wavelets measured with this technique coincide very well with the simulated wavelets and can therefore represent the real shape of the wavelets. This experiment fills a technology gap in China.

  6. Analysis of seismic sources for different mechanisms of fracture growth for microseismic monitoring applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchkov, A. A., E-mail: DuchkovAA@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk, 630090; Stefanov, Yu. P., E-mail: stefanov@ispms.tsc.ru

    2015-10-27

    We have developed and illustrated an approach for geomechanical modeling of elastic wave generation (microseismic event occurrence) during incremental fracture growth. We then derived the properties of effective point seismic sources (radiation patterns) approximating the obtained wavefields. These results establish a connection between geomechanical models of hydraulic fracturing and microseismic monitoring. Thus, the results of moment tensor inversion of microseismic data can be related to different geomechanical scenarios of hydraulic fracture growth. In the future, the results can be used for calibrating hydraulic fracture models. We carried out a series of numerical simulations and made some observations about wave generation during fracture growth. In particular, when the growing fracture hits a pre-existing crack, it generates a much stronger microseismic event than fracture growth in a homogeneous medium (the radiation pattern is very close to the theoretical dipole-type source mechanism).

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreger, Douglas S.; Ford, Sean R.; Walter, William R.

    Research was carried out investigating the feasibility of using a regional-distance seismic waveform moment tensor inversion procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results of the research indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions, and, when compared to natural seismicity in the context of the Hudson et al. (1989) source-type diagram, they are found to separate from populations of earthquakes and underground cavity collapse seismic sources.

  8. Attenuation of ground-motion spectral amplitudes in southeastern Australia

    USGS Publications Warehouse

    Allen, T.I.; Cummins, P.R.; Dhu, T.; Schneider, J.F.

    2007-01-01

    A dataset comprising some 1200 weak- and strong-motion records from 84 earthquakes is compiled to develop a regional ground-motion model for southeastern Australia (SEA). Events were recorded from 1993 to 2004 and range in size from moment magnitude 2.0 ≤ M ≤ 4.7. The decay of vertical-component Fourier spectral amplitudes is modeled by trilinear geometrical spreading. The decay of low-frequency spectral amplitudes can be approximated by the coefficient R^-1.3 (where R is hypocentral distance) within 90 km of the seismic source. From approximately 90 to 160 km, we observe a transition zone in which the seismic coda are affected by postcritical reflections from midcrustal and Moho discontinuities. In this hypocentral distance range, geometrical spreading is approximately R^+0.1. Beyond 160 km, low-frequency seismic energy attenuates rapidly with source-receiver distance, having a geometrical spreading coefficient of R^-1.6. The associated regional seismic-quality factor can be expressed by the polynomial log Q(f) = 3.66 - 1.44 log f + 0.768 (log f)^2 + 0.058 (log f)^3 for frequencies 0.78 ≤ f ≤ 19.9 Hz. Fourier spectral amplitudes, corrected for geometrical spreading and anelastic attenuation, are regressed with M to obtain quadratic source scaling coefficients. Modeled vertical-component displacement spectra fit the observed data well. Amplitude residuals are, on average, relatively small and do not vary with hypocentral distance. Predicted source spectra (i.e., at R = 1 km) are consistent with eastern North American (ENA) models at low frequencies (f less than approximately 2 Hz), indicating that moment magnitudes calculated for SEA earthquakes are consistent with moment magnitude scales used in ENA over the observed magnitude range. The models presented represent the first spectral ground-motion prediction equations developed for the southeastern Australian region. This work provides a useful framework for the development of regional ground-motion relations for earthquake hazard and risk assessment in SEA.
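
    A short sketch that evaluates the attenuation terms quoted above: the trilinear geometrical spreading combined with the anelastic term exp(-pi f R / (Q(f) beta)). The Q(f) polynomial is taken from the abstract; the shear-wave velocity beta is an assumed value.

    ```python
    # Relative Fourier spectral amplitude vs distance from the quoted model.
    import numpy as np

    beta = 3.6  # km/s, assumed crustal shear-wave velocity

    def q_of_f(f):
        lf = np.log10(f)
        return 10 ** (3.66 - 1.44 * lf + 0.768 * lf ** 2 + 0.058 * lf ** 3)

    def geometric_spreading(r):
        g = np.where(r <= 90.0, r ** -1.3, np.nan)
        g = np.where((r > 90.0) & (r <= 160.0), (90.0 ** -1.3) * (r / 90.0) ** 0.1, g)
        g = np.where(r > 160.0,
                     (90.0 ** -1.3) * (160.0 / 90.0) ** 0.1 * (r / 160.0) ** -1.6, g)
        return g

    def relative_amplitude(f, r):
        """Spectral amplitude at distance r (km) relative to the 1-km source level."""
        return geometric_spreading(r) * np.exp(-np.pi * f * r / (q_of_f(f) * beta))

    print(relative_amplitude(1.0, np.array([50.0, 120.0, 300.0])))
    ```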

  9. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    NASA Astrophysics Data System (ADS)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents forecasts of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated catalog of earthquakes of magnitude Mw ≥ 6.0 that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L indicates a better-fitting model. The time of the future earthquake is forecast by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), 13 April 2016 Myanmar earthquake (M 6.9) and 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area, showing that the proposed techniques and models yield good forecasting accuracy.
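
    An illustrative sketch (with synthetic data) of the model-selection step described above: fit several candidate distributions and compare their log-likelihoods using scipy.stats; the distribution with the highest ln L would then supply the conditional probabilities.

    ```python
    # Fit candidate distributions to synthetic observations and rank by ln L.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    sample = rng.weibull(1.4, 40) * 5.0          # synthetic "observations"

    candidates = {
        "gamma": stats.gamma,
        "lognormal": stats.lognorm,
        "weibull": stats.weibull_min,
        "log-logistic": stats.fisk,
    }
    for name, dist in candidates.items():
        params = dist.fit(sample, floc=0)        # keep location fixed at zero
        lnL = np.sum(dist.logpdf(sample, *params))
        print(f"{name:12s}  ln L = {lnL:7.2f}")
    ```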

  10. Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.

    2012-12-01

    Constructing source models of huge subduction earthquakes is a quite important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of several strong ground motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that the SMGA size follows an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be estimated from that empirical relation, given the seismic moment of an anticipated earthquake. Concerning the placement of the SMGAs, information on fault segmentation is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained, and the Nojima, Suma, and Suwayama segments each have one SMGA in the SMGA modeling (e.g., Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, demonstrating the applicability of the empirical scaling relationship for the SMGA. Two SMGAs are located in the Miyagi-Oki segment, and the other two are in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all SMGAs correspond to the source areas of historical events in the 1930s. Those SMGAs do not overlap the huge slip area in the shallower part of the source fault that was estimated from teleseismic data, long-period strong motion data, and/or geodetic data during the 2011 mainshock. This fact shows that the huge slip area does not contribute to strong ground motion generation (0.1-10 s). Information on fault segmentation in the subduction zone, or on historical earthquake source areas, is thus also applicable for constructing SMGA settings for strong motion prediction of future earthquakes.

  11. Near-Fault Broadband Ground Motion Simulations Using Empirical Green's Functions: Application to the Upper Rhine Graben (France-Germany) Case Study

    NASA Astrophysics Data System (ADS)

    Del Gaudio, Sergio; Hok, Sebastien; Festa, Gaetano; Causse, Mathieu; Lancieri, Maria

    2017-09-01

    Seismic hazard estimation classically relies on data-based ground motion prediction equations (GMPEs), which give the expected motion level as a function of several parameters characterizing the source and the sites of interest. However, records of moderate to large earthquakes at short distances from faults are still rare. For this reason, it is difficult to obtain a reliable ground motion prediction for the class of events and distances where the largest amount of damage is usually observed. A possible strategy to fill this lack of information is to generate synthetic accelerograms based on an accurate modeling of both the extended fault rupture and the wave propagation process. The development of such modeling strategies is essential for estimating seismic hazard close to faults in moderate-seismicity zones, where data are even scarcer. For that reason, we selected a target site in the Upper Rhine Graben (URG), at the French-German border. The URG is a region where faults producing micro-seismic activity are very close to the sites of interest (e.g., critical infrastructure such as supply lines and nuclear power plants), requiring a careful investigation of seismic hazard. In this work, we demonstrate the feasibility of performing near-fault broadband ground motion numerical simulations in a moderate-seismicity region such as the URG and discuss some of the challenges related to such an application. The modeling strategy is to couple the multi-empirical Green's function technique (multi-EGFt) with a k^-2 kinematic source model. One of the advantages of the multi-EGFt is that it does not require detailed knowledge of the propagation medium, since records of small events are used as the medium transfer function, provided that records of small earthquakes located on the target fault are available at the target site. The selection of suitable events to be used as multi-EGFs is detailed and discussed for our specific situation, in which few events are available. We then show the impact that each source parameter characterizing the k^-2 model has on ground motion amplitude. Finally, we perform ground motion simulations for different probable earthquake scenarios in the URG. The dependence of ground motion and of its variability on rupture velocity, on the roughness degree of the slip distribution (stress drop), and on hypocenter location is analyzed at different frequencies. In near-source conditions, ground motion variability is shown to be mostly governed by the uncertainty on source parameters. In our specific configuration (magnitude, distance), the directivity effect is only observed in a limited frequency range. Rather, broadband ground motions are shown to be sensitive to both the average rupture velocity and its possible variability, and to slip roughness. Comparing the simulation results with GMPEs, we conclude that source parameters and their variability should be set up carefully to obtain reliable broadband ground motion estimates. In particular, our study shows that slip roughness should be set consistently with the target stress drop. This entails the need for a better understanding of the physics of the earthquake source and its incorporation in ground motion modeling.
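
    A minimal sketch of one building block named above: a random k^-2 slip distribution generated by 2-D spectral synthesis, with an amplitude spectrum that is flat below the corner wavenumbers set by the fault dimensions and decays as k^-2 above them. Fault size, grid spacing and mean slip are assumed values.

    ```python
    # k^-2 slip distribution by 2-D spectral synthesis with random phases.
    import numpy as np

    rng = np.random.default_rng(3)
    nx, nz, dx = 128, 64, 0.25          # grid points and spacing (km)
    length, width = nx * dx, nz * dx
    mean_slip = 1.0                     # m, assumed

    kx = np.fft.fftfreq(nx, d=dx)
    kz = np.fft.fftfreq(nz, d=dx)
    KX, KZ = np.meshgrid(kx, kz)
    # non-dimensional wavenumber normalized by the corner wavenumbers 1/L and 1/W
    kn = np.sqrt((KX * length) ** 2 + (KZ * width) ** 2)

    amp = 1.0 / (1.0 + kn ** 2)                       # ~k^-2 fall-off beyond the corner
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    slip = np.real(np.fft.ifft2(amp * phase))
    slip -= slip.min()                                # keep slip non-negative
    slip *= mean_slip / slip.mean()                   # rescale to the target mean slip
    print("max / mean slip ratio:", round(slip.max() / slip.mean(), 2))
    ```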

  12. Moment Inversion of the DPRK Nuclear Tests Using Finite-Difference Three-dimensional Strain Green's Tensors

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.; Wang, N.

    2017-12-01

    Accurate estimation of the source moment is important for discriminating underground explosions from earthquakes and other seismic sources. In this study, we invert for the full moment tensors of the recent seismic events (since 2016) at the Democratic People's Republic of Korea (DPRK) Punggye-ri test site. We use waveform data from broadband seismic stations located in China, Korea, and Japan in the inversion. Using a non-staggered-grid finite-difference algorithm, we calculate the strain Green's tensors (SGT) based on one-dimensional (1D) and three-dimensional (3D) Earth models. Taking advantage of source-receiver reciprocity, an SGT database pre-calculated and stored for the Punggye-ri test site is used in the inversion for the source mechanism of each event. With the source locations estimated from cross-correlation using regional Pn and Pn-coda waveforms, we obtain the optimal source mechanism that best fits synthetics to the observed waveforms of both body and surface waves. The moment solutions of the first three events (2016-01-06, 2016-09-09, and 2017-09-03) show dominant isotropic components, as expected for explosions, though there are also notable non-isotropic components. The last event (~8 minutes after the mb 6.3 explosion in 2017) contained a mainly implosive component, suggesting a collapse following the explosion. The solutions from the 3D model fit the observed waveforms better than the corresponding solutions from the 1D model. The uncertainty in the resulting moment solutions is influenced by heterogeneities not resolved by the Earth model, as reflected in the waveform misfit. Using the moment solutions, we predict the peak ground acceleration at the Punggye-ri test site and compare the prediction with corresponding InSAR and other satellite images.
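
    A hypothetical sketch of the linear core of such a moment-tensor inversion: once Green's functions (or strain Green's tensors turned into synthetics) are available, the seismogram is a linear combination of the six moment-tensor components, so the tensor follows from least squares. Random numbers stand in for real waveforms and Green's functions here.

    ```python
    # Linear moment-tensor inversion d = G m solved by least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    nsamples = 2000                      # concatenated waveform samples (all stations)

    G = rng.normal(size=(nsamples, 6))   # stand-in Green's function matrix, one column per Mij
    m_true = np.array([1.0, 1.0, 1.0, 0.1, -0.05, 0.02])   # nearly isotropic source
    d = G @ m_true + rng.normal(scale=0.1, size=nsamples)  # "observed" data with noise

    m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
    iso = m_est[:3].mean()               # isotropic part (mean of the diagonal terms)
    print("recovered moment tensor:", np.round(m_est, 3))
    print("isotropic part:", round(iso, 3))
    ```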

  13. Modeling propagation of infrasound signals observed by a dense seismic network.

    PubMed

    Chunchuzov, I; Kulichkov, S; Popov, O; Hedlin, M

    2014-01-01

    The long-range propagation of infrasound from a surface explosion with an explosive yield of about 17.6 t TNT that occurred on June 16, 2008 at the Utah Test and Training Range (UTTR) in the western United States is simulated using an atmospheric model that includes fine-scale layered structure of the wind velocity and temperature fields. Synthetic signal parameters (waveforms, amplitudes, and travel times) are calculated using parabolic equation and ray-tracing methods for a number of ranges between 100 and 800 km from the source. The simulation shows the evolution of several branches of stratospheric and thermospheric signals with increasing range from the source. Infrasound signals calculated using a G2S (ground-to-space) atmospheric model perturbed by small-scale layered wind velocity and temperature fluctuations are shown to agree well with recordings made by the dense High Lava Plains seismic network located at an azimuth of 300° from UTTR. The waveforms of calculated infrasound arrivals are compared with those of seismic recordings. This study illustrates the utility of dense seismic networks for mapping an infrasound field with high spatial resolution. The parabolic equation calculations capture both the effect of scattering of infrasound into geometric acoustic shadow zones and significant temporal broadening of the arrivals.

  14. A Quantitative Evaluation of SCEC Community Velocity Model Version 3.0

    NASA Astrophysics Data System (ADS)

    Chen, P.; Zhao, L.; Jordan, T. H.

    2003-12-01

    We present a systematic methodology for evaluating and improving 3D seismic velocity models using broadband waveform data from regional earthquakes. The operator that maps a synthetic waveform into an observed waveform is expressed in the Rytov form D(ω) = exp[iω δτ_p(ω) - ω δτ_q(ω)]. We measure the phase delay time δτ_p(ω) and the amplitude reduction time δτ_q(ω) as a function of frequency ω using Gee & Jordan's [1992] isolation-filter technique, and we correct the data for frequency-dependent interference and frequency-independent source statics. We have applied this procedure to a set of small events in Southern California. Synthetic seismograms were computed using three types of velocity models: the 1D Standard Southern California Crustal Model (SoCaL) [Dreger & Helmberger, 1993], the 3D SCEC Community Velocity Model, Version 3.0 (CVM3.0) [Magistrale et al., 2000], and a set of path-averaged 1D models (A1D) extracted from CVM3.0 by horizontally averaging wave slownesses along source-receiver paths. The 3D synthetics were computed using K. Olsen's finite difference code. More than 1000 measurements were made on both P and S waveforms at frequencies ranging from 0.2 to 1 Hz. Overall, the 3D model provided a substantially better fit to the waveform data than either the laterally homogeneous or the path-dependent 1D models. Relative to SoCaL, CVM3.0 provided a variance reduction of about 64% in δτ_p and 41% in δτ_q. Relative to A1D, the variance reductions are about 46% and 20%, respectively. The same set of measurements can be employed to invert for both seismic source properties and seismic velocity structures. Fully numerical methods are being developed to compute the Fréchet kernels for these measurements [L. Zhao et al., this meeting]. This methodology thus provides a unified framework for regional studies of seismic sources and Earth structure in Southern California and elsewhere.

  15. Surface wave tomography of the European crust and upper mantle from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    LU, Y.; Stehly, L.; Paul, A.

    2017-12-01

    We present a high-resolution 3-D shear-wave velocity model of the European crust and upper mantle derived from ambient seismic noise tomography. In this study, we collect 4 years of continuous vertical-component seismic recordings from 1293 broadband stations across Europe (10°W-35°E, 30°N-75°N). We analyze group velocity dispersion from 5 s to 150 s for cross-correlations of more than 0.8 million virtual source-receiver pairs. 2-D group velocity maps are estimated using an adaptive parameterization to accommodate the strong heterogeneity of the path coverage. The 3-D velocity model is obtained by merging 1-D models inverted at each pixel through a two-step, data-driven inversion algorithm: a non-linear Bayesian Monte Carlo inversion, followed by a linearized inversion. The resulting S-wave velocity model and Moho depth are compared with previous geophysical studies: 1) the crustal model and Moho depth show striking agreement with active seismic imaging results, and even provide new valuable information, such as a strong difference in the European Moho along two seismic profiles in the Western Alps (Cifalps and ECORS-CROP); 2) the upper mantle model displays strong similarities with published models even at 150 km depth, which is usually imaged using earthquake records.
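
    A minimal sketch of the basic ambient-noise processing step: whitened cross-correlation of noise windows from two stations, stacked so that the correlation converges toward the inter-station response; synthetic random noise with an assumed 4-s delay stands in for real recordings.

    ```python
    # Stacked, spectrally whitened noise cross-correlation between two stations.
    import numpy as np

    rng = np.random.default_rng(5)
    fs, nwin, nstack = 20.0, 600, 10          # Hz, window length (s), windows to stack
    n = int(fs * nwin)
    nfft = 2 * n

    stack = np.zeros(nfft)
    for _ in range(nstack):
        tr1 = rng.normal(size=n)
        # tr2 shares part of the wavefield, delayed by 4 s (inter-station travel time)
        tr2 = np.roll(tr1, int(4 * fs)) + 0.5 * rng.normal(size=n)
        spec = np.conj(np.fft.rfft(tr1, nfft)) * np.fft.rfft(tr2, nfft)
        stack += np.fft.irfft(spec / (np.abs(spec) + 1e-12), nfft)   # whitening

    lag = np.argmax(stack)
    if lag > nfft // 2:
        lag -= nfft
    print("correlation peak at lag (s):", lag / fs)   # ~ +4 s, the assumed travel time
    ```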

  16. Geophysical remote sensing of water reservoirs suitable for desalinization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldridge, David Franklin; Bartel, Lewis Clark; Bonal, Nedra

    2009-12-01

    In many parts of the United States, as well as other regions of the world, competing demands for fresh water or water suitable for desalination are outstripping sustainable supplies. In these areas, new water supplies are necessary to sustain economic development and agricultural uses, as well as to support expanding populations, particularly in the Southwestern United States. Increasing the supply of water will more than likely come through desalinization of water reservoirs that are not suitable for present use. Surface-deployed seismic and electromagnetic (EM) methods have the potential to address these critical issues within large volumes of an aquifer at a lower cost than drilling and sampling. However, for detailed analysis of the water quality, some sampling utilizing boreholes would be required, with geophysical methods being employed to extrapolate the sampled results to non-sampled regions of the aquifer. The research in this report addresses using seismic and EM methods in two complementary ways to aid in the identification of water reservoirs that are suitable for desalinization. The first method uses the seismic data to constrain the earth structure so that detailed EM modeling can estimate the pore water conductivity, and hence the salinity. The second method utilizes the coupling of seismic and EM waves through the seismo-electric effect (conversion of seismic energy to electrical energy) and the electro-seismic effect (conversion of electrical energy to seismic energy) to estimate the salinity of the target aquifer. Analytic 1D solutions to coupled pressure and electric wave propagation demonstrate the types of waves one expects when using a seismic or electric source. A 2D seismo-electric/electro-seismic model is developed to demonstrate the coupled seismic and EM system. For finite-difference modeling, the seismic and EM wave propagation algorithms are on different spatial and temporal scales. We present a method to solve multiple, finite-difference physics problems that has application beyond the present use. A limited field experiment was conducted to assess the seismo-electric effect. Due to a variety of problems, the observation of the electric field due to a seismic source was not definitive.

  17. The MOON micro-seismic noise : first estimates from meteorites flux simulations

    NASA Astrophysics Data System (ADS)

    Lognonne, P.; Lefeuvre, M.; Johnson, C.; Weber, R.

    2008-12-01

    The Moon is considered to be a seismically quiet planet, and most of the time the Apollo seismograms were flat when no quakes were occurring. We show in this paper that this might not be the case if more sensitive data are recorded by future instruments, and that a permanent micro-seismic noise exists due to the continuous impacts of meteorites. We model this noise by using, as calibrated seismic data, the signals generated by the impacts of the Apollo S4B or LEM, taking care with the scaling law necessary to express the seismic force with respect to the mass and velocity of the impactors. We also parametrize the dependence of the amplitude of the seismic coda, associated with the maximum amplitude of the seismograms, on the epicentral distance and the source geometry. This enables us to use the seismic data of the S4B impacts as empirical waveforms for modeling the natural impacts. The frequency/size law of meteoroids impacting the Moon and the associated probability of NEO impacts are, however, not known precisely. Uncertainties as large as a factor of 3-5 remain, especially for the moderate-sized impacts, which are not observed on the Earth due to shielding by the atmosphere. We therefore use several meteoroid mass/frequency laws from the literature to generate, with a random simulator, a history of impacts on the Moon during a given period. We then compute the seismic signals generated by this succession of seismic sources and estimate the frequency/amplitude relationship of such signals. Our results finally provide an estimate of the meteoritic seismic background on the Moon. This background noise was not recorded by the Apollo seismic experiment due to insufficient resolution. Such an estimate can be used in designing a new generation of lunar seismometers, in estimating the probability of detecting proposed impacts due to nuggets of strange quark matter, and in informing future lunar-based experiments that require very stable ground, such as Moon-based optical interferometry telescopes or gravitational wave detectors.

  18. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    NASA Astrophysics Data System (ADS)

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura

    2017-12-01

    In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated - reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.

  19. Comparison of Amplitudes and Frequencies of Explosive vs. Hammer Seismic Sources for a 1-km Seismic Line in West Texas

    NASA Astrophysics Data System (ADS)

    Kaip, G.; Harder, S. H.; Karplus, M. S.; Vennemann, A.

    2016-12-01

    In May 2016, the National Seismic Source Facility (NSSF), located at the University of Texas at El Paso (UTEP) Department of Geological Sciences, collected seismic data at the Indio Ranch, located 30 km southwest of Van Horn, Texas. Both hammer-on-aluminum-plate and explosive sources were used. The project objective was to image subsurface structures at the ranch, which is owned by UTEP. Selecting the appropriate seismic source is important for reaching project objectives. We compare the explosive and hammer-on-plate seismic sources, focusing on amplitude and frequency. The seismic line was 1 km long, trending WSW to ENE, with 200 4.5 Hz geophones at 5 m spacing and shot locations at 10 m spacing. Clay slurry was used in the shot holes to increase coupling around the booster. Trojan Spartan cast boosters (150 g) were used as the explosive source, with one shot hole per station. The end-of-line shots had five shot holes instead of one (750 g total). The hammer source utilized a 5.5 kg hammer and an aluminum plate; five hammer blows were stacked at each location to improve the signal-to-noise ratio. Explosive sources yield higher amplitude but lower frequency content. The explosions exhibit a higher signal-to-noise ratio, allowing us to recognize seismic energy deeper and farther from the source. Hammer sources yield higher frequencies, allowing better resolution at shallower depths, but have a lower signal-to-noise ratio and lower amplitudes, even with source stacking. We analyze the details of the shot spectra from the different types of sources. A combination of source types can improve data resolution and amplitude, thereby improving imaging potential. However, cost, logistics, and complexity also have a large influence on source selection.
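
    A minimal sketch of the kind of spectral comparison described above: compute the trace-averaged amplitude spectrum of each shot gather and compare band-limited levels. The gather shapes, sample interval, and the synthetic stand-in data are assumptions; real field gathers would be substituted.

```python
import numpy as np

# Sketch: compare average amplitude spectra of two shot gathers (e.g. an
# explosive shot and a stacked hammer shot). Gathers are assumed to be 2-D
# arrays of shape (n_receivers, n_samples); dt is the sample interval [s].

def average_spectrum(gather, dt):
    """Return frequencies and the trace-averaged amplitude spectrum."""
    n = gather.shape[1]
    freqs = np.fft.rfftfreq(n, dt)
    amps = np.abs(np.fft.rfft(gather, axis=1))
    return freqs, amps.mean(axis=0)

# Synthetic stand-ins for field data (replace with real gathers)
dt, n_rec, n_samp = 0.001, 200, 2000
rng = np.random.default_rng(1)
explosive = rng.standard_normal((n_rec, n_samp)) * 10.0   # higher amplitude
hammer = rng.standard_normal((n_rec, n_samp))

f, spec_exp = average_spectrum(explosive, dt)
_, spec_ham = average_spectrum(hammer, dt)

# Simple summary: band-limited spectral ratio between the two sources
band = (f > 5) & (f < 100)
print("explosive/hammer spectral ratio (5-100 Hz):",
      spec_exp[band].mean() / spec_ham[band].mean())
```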

  20. Reducing the uncertainty in the fidelity of seismic imaging results

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Zou, Z.

    2017-12-01

    A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect of uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which describes the truthfulness of the imaged targets in terms of their resolution, position error and artifacts. Key challenges to achieving fidelity in seismic imaging include: (1) the difficulty of telling signal from artifact and noise; (2) limitations in signal-to-noise ratio and seismic illumination; and (3) the multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they map in opposite directions between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, practice in exploration seismology has long established a two-fold approach to seismic imaging: using velocity model building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called the imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. A good example is reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on the estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.
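
    For concreteness, the sketch below shows the zero-lag cross-correlation imaging condition that reverse time migration commonly applies, I(x) = sum over t of S(x,t)*R(x,t), assuming the forward source wavefield and the back-propagated receiver wavefield have already been computed (they are random placeholders here). The wave-equation extrapolation itself is omitted, and this is only a generic illustration, not the authors' workflow.

```python
import numpy as np

# Schematic zero-lag cross-correlation imaging condition used in reverse
# time migration: I(x) = sum_t S(x, t) * R(x, t), where S is the forward
# source wavefield and R the back-propagated (time-reversed) receiver
# wavefield, both assumed precomputed with shape (n_t, nz, nx).

def image_condition(src_wavefield, rcv_wavefield):
    return np.sum(src_wavefield * rcv_wavefield, axis=0)

# Tiny synthetic example with placeholder wavefields
nt, nz, nx = 100, 50, 60
rng = np.random.default_rng(2)
S = rng.standard_normal((nt, nz, nx))
R = rng.standard_normal((nt, nz, nx))
image = image_condition(S, R)
print("image shape:", image.shape)
```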

  1. Simultaneous teleseismic and geodetic observations of the stick-slip motion of an Antarctic ice stream.

    PubMed

    Wiens, Douglas A; Anandakrishnan, Sridhar; Winberry, J Paul; King, Matt A

    2008-06-05

    Long-period seismic sources associated with glacier motion have been recently discovered, and an increase in ice flow over the past decade has been suggested on the basis of secular changes in such measurements. Their significance, however, remains uncertain, as a relationship to ice flow has not been confirmed by direct observation. Here we combine long-period surface-wave observations with simultaneous Global Positioning System measurements of ice displacement to study the tidally modulated stick-slip motion of the Whillans Ice Stream in West Antarctica. The seismic origin time corresponds to slip nucleation at a region of the bed of the Whillans Ice Stream that is likely stronger than in surrounding regions and, thus, acts like an 'asperity' in traditional fault models. In addition to the initial pulse, two seismic arrivals occurring 10-23 minutes later represent stopping phases as the slip terminates at the ice stream edge and the grounding line. Seismic amplitude and average rupture velocity are correlated with tidal amplitude for the different slip events during the spring-to-neap tidal cycle. Although the total seismic moment calculated from ice rigidity, slip displacement, and rupture area is equivalent to an earthquake of moment magnitude seven (M(w) 7), seismic amplitudes are modest (M(s) 3.6-4.2), owing to the source duration of 20-30 minutes. Seismic radiation from ice movement is proportional to the derivative of the moment rate function at periods of 25-100 seconds and very long-period radiation is not detected, owing to the source geometry. Long-period seismic waves are thus useful for detecting and studying sudden ice movements but are insensitive to the total amount of slip.
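
    A worked check of the moment arithmetic mentioned above: with the seismic moment M0 = mu * A * d and Mw = (2/3)(log10 M0 - 9.1), a slow slip of order a metre over a very large ice-stream bed area reaches Mw ~ 7 even though radiated amplitudes stay modest. The rigidity, area, and slip values below are illustrative, not the exact values of the study.

```python
import math

# Illustrative moment-magnitude check (values are assumptions, not the
# study's): M0 = mu * A * d, Mw = 2/3 * (log10(M0) - 9.1), M0 in N*m.

mu = 3.5e9             # assumed shear modulus (rigidity) of ice [Pa]
area = 100e3 * 100e3   # assumed rupture area of the ice-stream bed [m^2]
slip = 1.0             # assumed average slip [m]

M0 = mu * area * slip
Mw = 2.0 / 3.0 * (math.log10(M0) - 9.1)
print(f"M0 = {M0:.2e} N*m -> Mw = {Mw:.2f}")
```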

  2. Towards an Empirically Based Parametric Explosion Spectral Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Walter, W R; Ruppert, S

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
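
    As a generic example of what a few-parameter source spectrum looks like, the sketch below evaluates an omega-square (Brune-type) far-field spectrum, S(f) = M0 / (1 + (f/fc)^2). This only illustrates the idea of a parametric spectral description that can be paired with MDAC-style path corrections; it is not the explosion-specific model being developed in the study, and the moment and corner frequency are illustrative.

```python
import numpy as np

# Generic omega-square parametric source spectrum, S(f) = M0 / (1 + (f/fc)^2).
# Illustrative only; NOT the explosion spectral model of the study.

def omega_square_spectrum(freq, moment, corner_freq):
    """Far-field displacement source spectrum (moment in N*m, freq in Hz)."""
    return moment / (1.0 + (freq / corner_freq) ** 2)

freqs = np.logspace(-1, 2, 200)            # 0.1 - 100 Hz
spectrum = omega_square_spectrum(freqs, moment=1.0e13, corner_freq=4.0)
print("low-frequency plateau:", spectrum[0], "N*m")
```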

  3. High-fidelity simulation capability for virtual testing of seismic and acoustic sensors

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.

    2005-05-01

    This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
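
    To make the FDTD idea concrete, the sketch below advances a second-order 2-D acoustic scheme, p_tt = c^2 (p_xx + p_zz) + s(t), with a Ricker wavelet point source. It is only a toy: homogeneous velocity, periodic boundaries via np.roll, no absorbing boundaries, and none of the 3-D elastic/atmospheric physics or parallelization of the production codes described above; all parameters are illustrative.

```python
import numpy as np

# Minimal 2-D acoustic FDTD sketch (toy illustration of the approach only).

nx, nz = 200, 200
dx = 5.0                        # grid spacing [m]
c = np.full((nz, nx), 1500.0)   # homogeneous velocity model [m/s]
dt = 0.4 * dx / c.max()         # satisfies the 2-D CFL limit dx / (c*sqrt(2))
nt = 500
src_z, src_x = nz // 2, nx // 2

def ricker(t, f0=25.0, t0=0.04):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

p_old = np.zeros((nz, nx))
p = np.zeros((nz, nx))
for it in range(nt):
    # Five-point Laplacian (periodic boundaries via np.roll)
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p_new = 2.0 * p - p_old + (c * dt) ** 2 * lap
    p_new[src_z, src_x] += dt**2 * ricker(it * dt)   # inject point source
    p_old, p = p, p_new

print("peak pressure on final snapshot:", np.abs(p).max())
```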

  4. Seismic Hazard Assessment at Esfaraen‒Bojnurd Railway, North‒East of Iran

    NASA Astrophysics Data System (ADS)

    Haerifard, S.; Jarahi, H.; Pourkermani, M.; Almasian, M.

    2018-01-01

    The objective of this study is to evaluate the seismic hazard along the Esfarayen-Bojnurd railway using the probabilistic seismic hazard assessment (PSHA) method. The assessment was carried out on the basis of a recent data set that takes into account the historical seismicity and updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a proposed seismic source model was presented. Attenuation equations recently recommended by experts, developed from earthquake data obtained in tectonic environments similar to those in and around the study area, were weighted and used for the assessment of seismic hazard within a logic-tree approach. Considering a grid of 1.2 × 1.2 km covering the study area, ground acceleration was calculated for every node. Hazard maps of peak ground acceleration at bedrock conditions were produced for return periods of 74, 475 and 2475 years.
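
    For readers less familiar with return periods, the short block below converts them to probabilities of exceedance in a 50-year exposure time under the Poisson assumption standard in PSHA, P = 1 - exp(-t/T); 475 and 2475 years correspond to roughly 10% and 2% in 50 years, and 74 years to roughly 50%. This is a textbook relation, not a result of the study.

```python
import math

# Poisson relation between return period T and probability of exceedance
# in an exposure time t: P = 1 - exp(-t / T).

def exceedance_probability(return_period_yr, exposure_yr=50.0):
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

for T in (74, 475, 2475):
    print(f"T = {T:4d} yr -> P(exceedance in 50 yr) = "
          f"{exceedance_probability(T):.1%}")
```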

  5. Controlled Source 4D Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Morency, C.; Tromp, J.

    2009-12-01

    Earth's material properties may change after significant tectonic events, e.g., volcanic eruptions, earthquake ruptures, landslides, and hydrocarbon migration. While many studies focus on how to interpret observations in terms of changes in wavespeeds and attenuation, the oil industry is more interested in how we can identify and locate such temporal changes using seismic waves generated by controlled sources. 4D seismic analysis is indeed an important tool to monitor fluid movement in hydrocarbon reservoirs during production, improving field management. Classic 4D seismic imaging involves comparing images obtained from two subsequent seismic surveys. Differences between the two images tell us where temporal changes occurred. However, when the temporal changes are small, it may be quite hard to reliably identify and characterize the differences between the two images. We propose to back-project residual seismograms between two subsequent surveys using adjoint methods, which results in images highlighting temporal changes. We use the SEG/EAGE salt dome model to illustrate our approach. In two subsequent surveys, the wavespeeds and density within a target region are changed, mimicking possible fluid migration. Due to changes in material properties induced by fluid migration, seismograms recorded in the two surveys differ. By back-propagating these residuals, the adjoint images identify the location of the affected region. An important issue involves the nature of the model. For instance, are we characterizing only changes in wavespeed, or do we also consider density and attenuation? How many model parameters characterize the model, e.g., is our model isotropic or anisotropic? Is acoustic wave propagation accurate enough or do we need to consider elastic or poroelastic effects? We will investigate how imaging strategies based upon acoustic, elastic and poroelastic simulations affect our imaging capabilities.

  6. Near-Source Shaking and Dynamic Rupture in Plastic Media

    NASA Astrophysics Data System (ADS)

    Gabriel, A.; Mai, P. M.; Dalguer, L. A.; Ampuero, J. P.

    2012-12-01

    Recent well-recorded earthquakes show a high degree of complexity at the source level that severely affects the resulting ground motion in near- and far-field seismic data. In our study, we focus on investigating source-dominated near-field ground motion features from numerical dynamic rupture simulations in an elasto-visco-plastic bulk. Our aim is to contribute to a more direct connection from theoretical and computational results to field and seismological observations. Previous work showed that a diversity of rupture styles emerges from simulations on faults governed by velocity-and-state-dependent friction with rapid velocity-weakening at high slip rate. For instance, growing pulses lead to re-activation of slip due to gradual stress build-up near the hypocenter, as inferred in some source studies of the 2011 Tohoku-Oki earthquake. Moreover, off-fault energy dissipation implied physical limits on extreme ground motion by limiting peak slip rate and rupture velocity. We investigate characteristic features in near-field strong ground motion generated by dynamic in-plane rupture simulations. We present effects of plasticity on source process signatures, off-fault damage patterns and ground shaking. Independent of rupture style, asymmetric damage patterns across the fault are produced that contribute to the total seismic moment, even dominantly so at high angles between the fault and the maximum principal background stress. The off-fault plastic strain fields induced by transitions between rupture styles reveal characteristic signatures of the mechanical source processes during the transition. Comparing different rupture styles in elastic and elasto-visco-plastic media to identify signatures of off-fault plasticity, we find varying degrees of alteration of near-field radiation due to plastic energy dissipation. Subshear pulses suffer more peak particle velocity reduction due to plasticity than cracks. Supershear ruptures are affected even more. The occurrence of multiple rupture fronts affects the seismic potency release rate, amplitude spectra, peak particle velocity distributions and near-field seismograms. Our simulations enable us to trace features of source processes in synthetic seismograms, for example exhibiting a re-activation of slip. Such physical models may provide starting points for future investigations of field properties of earthquake source mechanisms and natural fault conditions. In the long term, our findings may be helpful for seismic hazard analysis and the improvement of seismic source models.

  7. The Effects of Realistic Geological Heterogeneity on Seismic Modeling: Applications in Shear Wave Generation and Near-Surface Tunnel Detection

    NASA Astrophysics Data System (ADS)

    Sherman, Christopher Scott

    Naturally occurring geologic heterogeneity is an important, but often overlooked, aspect of seismic wave propagation. This dissertation presents a strategy for modeling the effects of heterogeneity using a combination of geostatistics and finite-difference simulation. In the first chapter, I discuss my motivations for studying geologic heterogeneity and seismic wave propagation. Models based upon fractal statistics are powerful tools in geophysics for modeling heterogeneity. The important features of these fractal models are illustrated using borehole log data from an oil well and geomorphological observations from a site in Death Valley, California. A large part of the computational work presented in this dissertation was completed using the finite-difference code E3D. I discuss the Python-based user interface for E3D and the computational strategies for working with heterogeneous models developed over the course of this research. The second chapter explores a phenomenon observed for wave propagation in heterogeneous media - the generation of unexpected shear wave phases in the near-source region. In spite of their popularity amongst seismic researchers, approximate methods for modeling wave propagation in these media, such as the Born and Rytov methods or Radiative Transfer Theory, are incapable of explaining these shear waves. This is primarily due to these methods' assumptions regarding the coupling of near-source terms with the heterogeneities and mode conversion. To determine the source of these shear waves, I generate a suite of 3D synthetic heterogeneous fractal geologic models and use E3D to simulate the wave propagation for a vertical point force on the surface of the models. I also present a methodology for calculating the effective source radiation patterns from the models. The numerical results show that, due to a combination of mode conversion and coupling with near-source heterogeneity, shear wave energy on the order of 10% of the compressional wave energy may be generated within the shear radiation node of the source. Interestingly, in some cases this shear wave may arise as a coherent pulse, which may be used to improve seismic imaging efforts. In the third and fourth chapters, I discuss the results of a numerical analysis and field study of seismic near-surface tunnel detection methods. Detecting unknown tunnels and voids, such as old mine workings or solution cavities in karst terrain, is a challenging problem in geophysics and has implications for geotechnical design, public safety, and domestic security. Over the years, a number of different geophysical methods have been developed to locate these objects (microgravity, resistivity, seismic diffraction, etc.), each with varying results. One of the major challenges facing these methods is understanding the influence of geologic heterogeneity on their results, which makes this problem a natural extension of the modeling work discussed in previous chapters. In the third chapter, I present the results of a numerical study of surface-wave based tunnel detection methods. The results of this analysis show that these methods are capable of detecting a void buried within one wavelength of the surface, with size potentially much less than one wavelength. In addition, seismic surface-wave based detection methods are effective in media with moderate heterogeneity (epsilon < 5%), and in fact, this heterogeneity may serve to increase the resolution of these methods. In the fourth chapter, I discuss the results of a field study of tunnel detection methods at a site within the Black Diamond Mines Regional Preserve, near Antioch, California. I use a combination of surface wave backscattering, 1D surface wave attenuation, and 2D attenuation tomography to locate and determine the condition of two tunnels at this site. These results complement the numerical study in chapter 3 and highlight their usefulness for detecting tunnels at other sites.
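
    As a generic illustration of the fractal-statistics heterogeneity models discussed above, the sketch below builds a 2-D random velocity-perturbation field by filtering white noise with a power-law wavenumber spectrum. The spectral exponent, perturbation level, and grid parameters are illustrative assumptions, not the exact statistics or construction used in the dissertation.

```python
import numpy as np

# Sketch: 2-D self-affine (fractal-like) random medium generated by shaping
# white noise with a power-law spectrum ~ |k|^-beta in the Fourier domain.
# Parameters are illustrative placeholders.

def fractal_field(nz, nx, dx, beta=1.8, epsilon=0.05, seed=0):
    """Zero-mean perturbation field scaled to an RMS level of epsilon."""
    rng = np.random.default_rng(seed)
    kz = np.fft.fftfreq(nz, dx)
    kx = np.fft.fftfreq(nx, dx)
    kk = np.sqrt(kz[:, None] ** 2 + kx[None, :] ** 2)
    kk[0, 0] = kk[kk > 0].min()          # avoid divide-by-zero at the DC term
    amplitude = kk ** (-beta / 2.0)
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    field = np.real(np.fft.ifft2(amplitude * phase))
    field -= field.mean()
    return epsilon * field / field.std()

dv = fractal_field(256, 256, dx=10.0)          # fractional perturbation
velocity = 3000.0 * (1.0 + dv)                 # perturbed background [m/s]
print("velocity min/max:", velocity.min(), velocity.max())
```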

  8. The 2014, MW6.9 North Aegean earthquake: seismic and geodetic evidence for coseismic slip on persistent asperities

    NASA Astrophysics Data System (ADS)

    Konca, Ali Ozgun; Cetin, Seda; Karabulut, Hayrullah; Reilinger, Robert; Dogan, Ugur; Ergintav, Semih; Cakir, Ziyadin; Tari, Ergin

    2018-05-01

    We report that asperities with the highest coseismic slip in the 2014 MW6.9 North Aegean earthquake persisted through the interseismic, coseismic and immediate post-seismic periods. We use GPS and seismic data to obtain the source model of the 2014 earthquake, which is located on the western extension of the North Anatolian Fault (NAF). The earthquake ruptured a bilateral, 90 km strike-slip fault with three slip patches: one asperity located west of the hypocentre and two to the east with a rupture duration of 40 s. Relocated pre-earthquake seismicity and aftershocks show that zones with significant coseismic slip were relatively quiet during both the 7 yr of interseismic and the 3-month aftershock periods, while the surrounding regions generated significant seismicity during both the interseismic and post-seismic periods. We interpret the unusually long fault length and source duration, and distribution of pre- and post-main-shock seismicity as evidence for a rupture of asperities that persisted through strain accumulation and coseismic strain release in a partially coupled fault zone. We further suggest that the association of seismicity with fault creep may characterize the adjacent Izmit, Marmara Sea and Saros segments of the NAF. Similar behaviour has been reported for sections of the San Andreas Fault, and some large subduction zones, suggesting that the association of seismicity with creeping fault segments and rapid relocking of asperities may characterize many large earthquake faults.

  9. Dense surface seismic data confirm non-double-couple source mechanisms induced by hydraulic fracturing

    USGS Publications Warehouse

    Pesicek, Jeremy; Cieślik, Konrad; Lambert, Marc-André; Carrillo, Pedro; Birkelo, Brad

    2016-01-01

    We have determined source mechanisms for nine high-quality microseismic events induced during hydraulic fracturing of the Montney Shale in Canada. Seismic data were recorded using a dense regularly spaced grid of sensors at the surface. The design and geometry of the survey are such that the recorded P-wave amplitudes essentially map the upper focal hemisphere, allowing the source mechanism to be interpreted directly from the data. Given the inherent difficulties of computing reliable moment tensors (MTs) from high-frequency microseismic data, the surface amplitude and polarity maps provide important additional confirmation of the source mechanisms. This is especially critical when interpreting non-shear source processes, which are notoriously susceptible to artifacts due to incomplete or inaccurate source modeling. We have found that most of the nine events contain significant non-double-couple (DC) components, as evident in the surface amplitude data and the resulting MT models. Furthermore, we found that source models that are constrained to be purely shear do not explain the data for most events. Thus, even though non-DC components of MTs can often be attributed to modeling artifacts, we argue that they are required by the data in some cases, and can be reliably computed and confidently interpreted under favorable conditions.
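
    To illustrate the non-double-couple bookkeeping mentioned above, the sketch below decomposes a symmetric moment tensor into ISO, DC, and CLVD percentages using one common convention (isotropic part from the trace, epsilon from the deviatoric eigenvalue ratio). The input tensor and the convention choice are illustrative; they are not the moment tensors or decomposition reported in the paper.

```python
import numpy as np

# Sketch: ISO/DC/CLVD percentages of a moment tensor under one common
# convention (epsilon = -|smallest| / |largest| deviatoric eigenvalue).

def decompose_moment_tensor(M):
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    eig = np.linalg.eigvalsh(dev)
    eig = eig[np.argsort(np.abs(eig))]          # sort by absolute value
    eps = -eig[0] / abs(eig[-1]) if eig[-1] != 0 else 0.0
    m_iso, m_dev = abs(iso), abs(eig[-1])
    pct_iso = 100.0 * m_iso / (m_iso + m_dev)
    pct_clvd = (100.0 - pct_iso) * 2.0 * abs(eps)
    pct_dc = 100.0 - pct_iso - pct_clvd
    return pct_iso, pct_dc, pct_clvd

# Illustrative example: a double couple plus a small volumetric component
M = np.array([[1.2, 0.9, 0.0],
              [0.9, -1.0, 0.0],
              [0.0, 0.0, 0.3]])
print("ISO/DC/CLVD (%%): %.1f / %.1f / %.1f" % decompose_moment_tensor(M))
```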

  10. Propagation of Exploration Seismic Sources in Shallow Water

    NASA Astrophysics Data System (ADS)

    Diebold, J. B.; Tolstoy, M.; Barton, P. J.; Gulick, S. P.

    2006-05-01

    The choice of safety radii to mitigate the impact of exploration seismic sources upon marine mammals is typically based on measurement or modeling in deep water. In shallow water environments, rule-of-thumb spreading laws are often used to predict the falloff of amplitude with offset from the source, but actual measurements (or, ideally, near-perfect modeling) are still needed to account for the effects of bathymetric changes and subseafloor characteristics. In addition, the question "how shallow is 'shallow'?" needs an answer. In a cooperative effort by NSF, MMS, NRL, IAGC and L-DEO, a series of seismic source calibration studies was carried out in the northern Gulf of Mexico during 2003. The sources used were the two-, six-, ten-, twelve-, and twenty-airgun arrays of R/V Ewing, and a 31-element, 3-string "G" gun array deployed by M/V Kondor, an exploration industry source ship. The results of the Ewing calibrations have been published, documenting results in deep (3200 m) and shallow (60 m) water. Lengthy analysis of the Kondor results, presented here, suggests an approach to answering the "how shallow is shallow" question. After initially falling off steadily with source-receiver offset, the Kondor levels suddenly increased at 4 km offset. Ray-based modeling with a complex, realistic source, but with a simple homogeneous water column over an elastic half-space, shows that the observed pattern is chiefly due to geophysical effects, not focusing within the water column. The same kind of modeling can be used to predict how the amplitudes will change with decreasing water depth, and when deep-water safety radii may need to be increased. Another set of data (see Barton, et al., this session), recorded in 20 meters of water during early 2005, however, shows that simple modeling may be insufficient when the geophysics becomes more complex. In this particular case, the fact that the seafloor was within the near field of the R/V Ewing source array seems to have given rise to seismic phases not normally seen in marine survey data acquired in deeper water. The associated partitioning of energy is likely to have caused the observed uncharacteristically rapid loss of energy with distance. It appears that in this case, the shallow-water marine mammal safety mitigation measures prescribed and followed were far more stringent than they needed to be. A new approach, wherein received levels detected by the towed 6-km multichannel hydrophone array may be used to modify safety radii, has recently been proposed based on these observations.
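
    For context on the rule-of-thumb spreading laws the abstract contrasts with measurements and modeling, the sketch below evaluates received level under spherical (20 log10 r) and cylindrical (10 log10 r) transmission loss. The source level and ranges are illustrative placeholders; the paper's point is precisely that such first-cut estimates can misjudge real shallow-water behaviour.

```python
import math

# Rule-of-thumb transmission-loss estimates: spherical spreading
# (20*log10 r) versus cylindrical spreading (10*log10 r). Illustrative only.

def received_level(source_level_db, range_m, mode="spherical"):
    if mode == "spherical":
        tl = 20.0 * math.log10(range_m)
    elif mode == "cylindrical":
        tl = 10.0 * math.log10(range_m)
    else:
        raise ValueError(mode)
    return source_level_db - tl

SL = 230.0   # assumed source level [dB re 1 uPa at 1 m]
for r in (100, 1000, 4000):
    print(f"r = {r:5d} m  spherical: {received_level(SL, r):6.1f} dB  "
          f"cylindrical: {received_level(SL, r, 'cylindrical'):6.1f} dB")
```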

  11. Advancing Explosion Source Theory through Experimentation: Results from Seismic Experiments Since the Moratorium on Nuclear Testing

    NASA Astrophysics Data System (ADS)

    Bonner, J. L.; Stump, B. W.

    2011-12-01

    On 23 September 1992, the United States conducted the nuclear explosion DIVIDER at the Nevada Test Site (NTS). It would become the last US nuclear test when a moratorium ended testing the following month. Many of the theoretical explosion seismic models used today were developed from observations of hundreds of nuclear tests at NTS and around the world. Since the moratorium, researchers have turned to chemical explosions as a possible surrogate for continued nuclear explosion research. This talk reviews experiments since the moratorium that have used chemical explosions to advance explosion source models. The 1993 Non-Proliferation Experiment examined single-point, fully contained chemical-nuclear equivalence by detonating over a kiloton of chemical explosive at NTS in close proximity to previous nuclear explosion tests. When compared with data from these nearby nuclear explosions, the regional and near-source seismic data were found to be essentially identical after accounting for different yield scaling factors for chemical and nuclear explosions. The relationship between contained chemical explosions and large production mining shots was studied at the Black Thunder coal mine in Wyoming in 1995. The research led to an improved source model for delay-fired mining explosions and a better understanding of mining explosion detection by the International Monitoring System (IMS). The effect of depth was examined in a 1997 Kazakhstan Depth of Burial experiment. Researchers used local and regional seismic observations to conclude that the dominant mechanism for enhanced regional shear waves was local Rg scattering. Travel-time calibration for the IMS was the focus of the 1999 Dead Sea Experiment where a 10-ton shot was recorded as far away as 5000 km. The Arizona Source Phenomenology Experiments provided a comparison of fully- and partially-contained chemical shots with mining explosions, thus quantifying the reduction in seismic amplitudes associated with partial confinement. The Frozen Rock Experiment in 2006 found only minor differences in seismic coupling for explosions in frozen and unfrozen rock. The seismo-acoustic source function was the focus of the above- and below-ground Humble Redwood explosions (2007, 2009 ) in New Mexico and detonations of rocket motor explosions in Utah. Acoustic travel time calibration for the IMS was accomplished with the 2009 and 2011 100-ton surface explosions in southern Israel. The New England Damage Experiment in 2009 correlated increased shear wave generation with increased rock damage from explosions. Damage from explosions continues to be an important research topic at Nevada's National Center for Nuclear Security with the ongoing Source Physics Experiment. A number of exciting experiments are already planned for the future and thus continue the effort to improve global detection, location, and identification of nuclear explosions.

  12. A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta

    NASA Astrophysics Data System (ADS)

    Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.

    2015-12-01

    Estimation of the ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of the seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M > 4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated to the observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
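
    The sketch below shows the generic form of a trilinear geometrical-spreading function in log amplitude, with three log-linear segments joined at hinge distances that bracket a Moho-bounce range. The hinge distances and slopes are illustrative placeholders, not the coefficients calibrated for western Alberta.

```python
import numpy as np

# Generic trilinear geometrical-spreading function (illustrative slopes and
# hinge distances, not the calibrated Alberta values).

def trilinear_spreading(r, r1=70.0, r2=140.0, b1=-1.3, b2=0.2, b3=-0.5):
    """Return log10 of the geometric-spreading factor at distance r [km]."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    return np.where(
        r <= r1,
        b1 * np.log10(r),
        np.where(
            r <= r2,
            b1 * np.log10(r1) + b2 * np.log10(r / r1),
            b1 * np.log10(r1) + b2 * np.log10(r2 / r1) + b3 * np.log10(r / r2),
        ),
    )

for dist in (10, 80, 200):
    print(f"R = {dist:4d} km  log10 G = {trilinear_spreading(dist)[0]:+.2f}")
```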

  13. Long period seismic source characterization at Popocatépetl volcano, Mexico

    USGS Publications Warehouse

    Arciniega-Ceballos, Alejandra; Dawson, Phillip; Chouet, Bernard A.

    2012-01-01

    The seismicity of Popocatépetl is dominated by long-period and very-long period signals associated with hydrothermal processes and magmatic degassing. We model the source mechanism of repetitive long-period signals in the 0.4–2 s band from a 15-station broadband network by stacking long-period events with similar waveforms to improve the signal-to-noise ratio. The data are well fitted by a point source located within the summit crater ~250 m below the crater floor and ~200 m from the inferred magma conduit. The inferred source includes a volumetric component that can be modeled as resonance of a horizontal steam-filled crack and a vertical single force component. The long-period events are thought to be related to the interaction between the magmatic system and a perched hydrothermal system. Repetitive injection of fluid into the horizontal fracture and subsequent sudden discharge when a critical pressure threshold is met provides a non-destructive source process.

  14. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  15. Estimating rupture distances without a rupture

    USGS Publications Warehouse

    Thompson, Eric M.; Worden, Charles

    2017-01-01

    Most ground motion prediction equations (GMPEs) require distances that are defined relative to a rupture model, such as the distance to the surface projection of the rupture (RJB) or the closest distance to the rupture plane (RRUP). There are a number of situations in which GMPEs are used where it is either necessary or advantageous to derive rupture distances from point-source distance metrics, such as hypocentral (RHYP) or epicentral (REPI) distance. For ShakeMap, it is necessary to provide an estimate of the shaking levels for events without rupture models, and before rupture models are available for events that eventually do have rupture models. In probabilistic seismic hazard analysis, it is often convenient to use point-source distances for gridded seismicity sources, particularly if a preferred orientation is unknown. This avoids the computationally cumbersome task of computing rupture-based distances for virtual rupture planes across all strikes and dips for each source. We derive average rupture distances conditioned on REPI, magnitude, and (optionally) back azimuth, for a variety of assumed seismological constraints. Additionally, we derive adjustment factors for GMPE standard deviations that reflect the added uncertainty in the ground motion estimation when point-source distances are used to estimate rupture distances.
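
    The sketch below gives a crude Monte Carlo illustration of the kind of conversion discussed above: for a vertical strike-slip rupture whose length follows a magnitude-length scaling (a Wells-and-Coppersmith-style relation, log10 L = -2.44 + 0.59 M), with unknown strike and unknown hypocentre position along strike, it averages the Joyner-Boore distance conditioned on an epicentral distance. The scaling relation, uniform assumptions, and geometry are illustrative; the paper derives such conditional distances and the associated standard-deviation adjustments more carefully.

```python
import numpy as np

# Crude Monte Carlo estimate of mean R_JB given R_EPI and magnitude, for a
# vertical strike-slip rupture with random strike and hypocentre position.
# Length scaling and uniform assumptions are illustrative placeholders.

def mean_rjb(r_epi_km, magnitude, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    length = 10.0 ** (-2.44 + 0.59 * magnitude)       # rupture length [km]
    strike = rng.uniform(0.0, np.pi, n)               # random strike
    frac = rng.uniform(0.0, 1.0, n)                   # hypocentre along strike
    # Rupture endpoints relative to the epicentre; site fixed at (r_epi, 0)
    ex, ey = np.cos(strike), np.sin(strike)
    x1, y1 = -frac * length * ex, -frac * length * ey
    x2, y2 = (1 - frac) * length * ex, (1 - frac) * length * ey
    # Closest horizontal distance from the site to each rupture segment
    px, py = r_epi_km - x1, -y1
    dx, dy = x2 - x1, y2 - y1
    t = np.clip((px * dx + py * dy) / (dx * dx + dy * dy), 0.0, 1.0)
    rjb = np.hypot(px - t * dx, py - t * dy)
    return rjb.mean()

for M in (5.0, 6.5, 7.5):
    print(f"M{M}: R_EPI = 20 km -> mean R_JB ~ {mean_rjb(20.0, M):.1f} km")
```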

  16. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and use a priori knowledge of the propagation medium. When only a small number of observations is available, for instance for moderate events, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte Carlo continuous sampling, using Markov chains, generates models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure, among others) are computed and explicitly documented. This method reduces the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
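
    To illustrate the flavour of the fast forward problem and the non-linear exploration mentioned above, the sketch below computes straight-ray P travel times in a homogeneous half-space, t = t0 + sqrt(D^2 + z^2)/vp, and maps the misfit over a coarse grid of trial hypocentres. The real scheme samples a layered velocity model and the source jointly with Markov chains; the velocity, station geometry, and grid here are illustrative assumptions.

```python
import numpy as np

# Toy location forward problem: homogeneous half-space P travel times and a
# coarse grid search over trial hypocentres (illustrative, not the paper's
# Bayesian sampler or layered model).

vp = 6.0                                             # assumed P velocity [km/s]
stations = np.array([[0., 0.], [30., 5.], [10., 40.], [-25., 15.]])  # x, y [km]
true_src = np.array([5.0, 12.0, 8.0])                # x, y, depth [km]

def travel_times(src_xyz, t0):
    d_epi = np.hypot(stations[:, 0] - src_xyz[0], stations[:, 1] - src_xyz[1])
    return t0 + np.sqrt(d_epi**2 + src_xyz[2]**2) / vp

obs = travel_times(true_src, 0.0) + 0.05 * np.random.default_rng(3).standard_normal(4)

best = None
for x in np.arange(-10, 21, 1.0):
    for y in np.arange(0, 31, 1.0):
        for z in np.arange(2, 20, 1.0):
            pred = travel_times(np.array([x, y, z]), 0.0)
            t0 = np.mean(obs - pred)                 # least-squares origin time
            rms = np.sqrt(np.mean((obs - pred - t0) ** 2))
            if best is None or rms < best[0]:
                best = (rms, x, y, z, t0)
print("best (rms, x, y, z, t0):", best)
```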

  17. NetMOD version 1.0 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion John

    2014-01-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic networks. Specifically, NetMOD simulates the detection capabilities of seismic monitoring networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This manual describes how to configure and operate NetMOD to perform seismic detection simulations. In addition, NetMOD is distributed with a simulation dataset for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) International Monitoring System (IMS) seismic network for the purpose of demonstrating NetMOD's capabilities and providing user training. The tutorial sections of this manual use this dataset when describing how to perform the steps involved when running a simulation.
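
    The sketch below shows one simple way to turn a predicted SNR into a detection probability given a threshold, assuming log-normal uncertainty in the predicted amplitude. It mirrors the general idea described above but is not NetMOD's actual formulation or parameterisation; the threshold and scatter are illustrative.

```python
import math

# Minimal detection-probability sketch: P(detect) = P(log10 SNR >= log10
# threshold) assuming a normal error of sigma_log10 on the prediction.
# Illustrative only; not NetMOD's formulation.

def detection_probability(snr_predicted, snr_threshold, sigma_log10=0.3):
    z = (math.log10(snr_predicted) - math.log10(snr_threshold)) / sigma_log10
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for snr in (1.0, 2.0, 5.0, 10.0):
    print(f"predicted SNR {snr:5.1f} -> P(detect) = "
          f"{detection_probability(snr, snr_threshold=3.0):.2f}")
```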

  18. Development of Towed Marine Seismic Vibrator as an Alternative Seismic Source

    NASA Astrophysics Data System (ADS)

    Ozasa, H.; Mikada, H.; Murakami, F.; Jamali Hondori, E.; Takekawa, J.; Asakawa, E.; Sato, F.

    2015-12-01

    The principal issue with marine impulsive sources used to acquire seismic data is whether the emission of acoustic energy inflicts harm on marine mammals, since the level of the source signal released into the marine environment can be very large compared to the hearing range of the mammals. We propose a marine seismic vibrator as an alternative to impulsive sources, to mitigate the risk of impact on the marine environment while satisfying the necessary conditions of seismic surveys. These conditions include the repeatability and the controllability of the source signal in both amplitude and phase for high-quality measurements. We therefore designed a towed marine seismic vibrator (MSV) as a new type of marine vibratory seismic source that employs a hydraulic servo system to meet the controllability condition in phase and amplitude, which also assures repeatability. After fabricating a downsized MSV that requires 30 kVA of power at a depth of about 250 m in water, several sea trials were conducted to test the source characteristics of the downsized MSV in terms of amplitude, frequency, and the horizontal and vertical directivity of the generated field. The maximum sound level satisfied the designed specification in the frequency range from 3 to 300 Hz, almost omnidirectionally. After checking the source characteristics, we then conducted a trial seismic survey, using both the downsized MSV and a 480-cubic-inch airgun for comparison, with a 2,000-m-long streamer cable right above a cabled earthquake observatory in the Japan Sea. The result showed that the penetration of the seismic signals generated by the downsized MSV was comparable to that of the airgun, although there was a slight difference in the signal-to-noise ratio. The MSV could become a versatile source that does not harm marine mammals, as an alternative to existing impulsive seismic sources such as airguns.

  19. Seismic waves and earthquakes in a global monolithic model

    NASA Astrophysics Data System (ADS)

    Roubíček, Tomáš

    2018-03-01

    The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple elegant way several specialized models relevant on various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics is coupled to capture, e.g., (here by a simplified model) ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way together with corresponding interfacial conditions implicitly involved, only when scaling its parameters appropriately in different Earth's layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for an illustration, only on a relatively simple Jeffreys' viscoelastic damageable material at small strains whose various scaling (limits) can lead to Boger's viscoelastic fluid or even to purely elastic (inviscid) fluid. Self-induced gravity field, Coriolis, centrifugal, and tidal forces are counted in our global model, as well. The rigorous mathematical analysis as far as the existence of solutions, convergence of the mentioned scalings, and energy conservation is briefly presented.

  20. Seismic Source Scaling and Discrimination in Diverse Tectonic Environments

    DTIC Science & Technology

    2009-09-30

    3349-3352. Imanishi, K., W. L. Ellsworth, and S. G. Prejean (2004). Earthquake source parameters determined by the SAFOD Pilot Hole seismic array ... seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic ... these corrections have a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity

  1. Utilizing the R/V Marcus G. Langseth’s streamer to measure the acoustic radiation of its seismic source in the shallow waters of New Jersey’s continental shelf

    PubMed Central

    Tolstoy, Maya; Gibson, James C.; Mountain, Gregory

    2017-01-01

    Shallow water marine seismic surveys are necessary to understand a range of Earth processes in coastal environments, including those that represent major hazards to society such as earthquakes, tsunamis, and sea-level rise. Predicting the acoustic radiation of seismic sources in shallow water, which is required for compliance with regulations designed to limit impacts on protected marine species, is a significant challenge in this environment because of variable reflectivity due to local geology, and the susceptibility of relatively small bathymetric features to focus or shadow acoustic energy. We use data from the R/V Marcus G. Langseth’s towed hydrophone streamer to estimate the acoustic radiation of the ship’s seismic source during a large survey of the shallow shelf off the coast of New Jersey. We use the results to estimate the distances from the source to acoustic levels of regulatory significance, and use bathymetric data from the ship’s multibeam system to explore the relationships between seafloor depth and slope and the measured acoustic radiation patterns. We demonstrate that existing models significantly overestimate mitigation radii, but that the variability of received levels in shallow water suggest that in situ real-time measurements would help improve these estimates, and that post-cruise revisions of received levels are valuable in accurately determining the potential acoustic impact of a seismic survey. PMID:28800634

  2. Comparisons of Source Characteristics between Recent Inland Crustal Earthquake Sequences inside and outside of Niigata-Kobe Tectonic Zone, Japan

    NASA Astrophysics Data System (ADS)

    Somei, K.; Asano, K.; Iwata, T.; Miyakoshi, K.

    2012-12-01

    After the 1995 Kobe earthquake, many M7-class inland earthquakes occurred in Japan. Some of those events (e.g., the 2004 Chuetsu earthquake) occurred in a tectonic zone characterized as a high strain rate zone by GPS observation (Sagiya et al., 2000) or by a dense distribution of active faults. That belt-like zone along the Japan Sea coast of the Tohoku and Chubu districts, and the north of the Kinki district, is called the Niigata-Kobe tectonic zone (NKTZ; Sagiya et al., 2000). We investigate the seismic scaling relationship for recent inland crustal earthquake sequences in Japan and compare source characteristics between events occurring inside and outside the NKTZ. We used the S-wave coda for estimating source spectra. Source spectral ratios are obtained from S-wave coda spectral ratios between the records of large and small events occurring close to each other, using the nation-wide strong-motion networks (K-NET and KiK-net) and the broad-band seismic network (F-net), to remove propagation-path and site effects. We carefully examined the commonality of the decay of coda envelopes between event-pair records and modeled the observed spectral ratios with a source spectral ratio function assuming an omega-square source model for both the large and small events. We estimated the corner frequencies and seismic moment (ratio) from the modeled spectral ratio functions. We determined Brune stress drops of 356 events (Mw 3.1-6.9) in ten earthquake sequences occurring inside the NKTZ and six sequences occurring outside it. Most of the source spectra obey the omega-square model. There is no obvious systematic difference between the stress drops of events inside the NKTZ and those outside. We may conclude that there is no systematic difference in the seismic source scaling of events occurring inside and outside the NKTZ, and that the average source scaling relationship can be applied to inland crustal earthquakes. Acknowledgements: Waveform data were provided by K-NET, KiK-net and F-net, operated by the National Research Institute for Earth Science and Disaster Prevention, Japan. This study is supported by the Multidisciplinary research project for the Niigata-Kobe tectonic zone promoted by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
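
    For reference, the short block below evaluates the standard Brune-model relations used in such stress-drop studies: source radius r = 0.37*beta/fc and stress drop = 7*M0/(16*r^3). The moment, corner frequency, and S-wave velocity are illustrative values, not results from the paper.

```python
import math

# Brune-model stress drop from corner frequency and seismic moment
# (illustrative values, not the study's results).

beta = 3500.0          # assumed S-wave velocity [m/s]
M0 = 1.0e16            # seismic moment [N*m] (about Mw 4.6)
fc = 1.5               # corner frequency [Hz]

r = 0.37 * beta / fc                    # source radius [m]
stress_drop = 7.0 * M0 / (16.0 * r**3)  # [Pa]
print(f"r = {r:.0f} m, stress drop = {stress_drop / 1e6:.1f} MPa")
```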

  3. Analytical magmatic source modelling from a joint inversion of ground deformation and focal mechanisms data

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla

    2014-05-01

    Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stresses and deforms the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the walls of the magmatic bodies. Although advances in space-based geodetic and seismic networks have significantly improved volcano monitoring at an increasing number of volcanoes worldwide over the last decades, quantitative models relating deformation and seismicity are not common. The observation of several episodes of volcanic unrest throughout the world, in which the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimation of the parameters of a deformation source. In particular, during these episodes of volcanic unrest a radial pattern of P-axes of the focal mechanism solutions, similar to that of the ground deformation, has been observed. Therefore, taking into account additional information from focal mechanism data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the May 13th 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. The statistical tests indicate that the source depth is the parameter with the largest gain in accuracy. In addition, a sensitivity analysis confirms that displacement data are more useful for constraining the pressure and the horizontal location of the source than its depth, while the P-axes better constrain the depth estimate.
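
    As a minimal example of the class of analytical magmatic sources used in such inversions, the sketch below evaluates the classic Mogi point-pressure source: surface displacements u_r = (1 - nu)*dV/pi * r/(r^2 + d^2)^1.5 and u_z = (1 - nu)*dV/pi * d/(r^2 + d^2)^1.5 for a volume change dV at depth d in an elastic half-space. The depth and volume change are illustrative; the study's source model and its joint use of focal-mechanism P-axes are more elaborate.

```python
import numpy as np

# Mogi point-pressure source: surface displacements for a volume change dV
# at depth d in an elastic half-space (illustrative parameters).

def mogi_surface_displacement(r, depth, dvol, nu=0.25):
    R3 = (r**2 + depth**2) ** 1.5
    ur = (1.0 - nu) * dvol / np.pi * r / R3      # radial displacement [m]
    uz = (1.0 - nu) * dvol / np.pi * depth / R3  # vertical displacement [m]
    return ur, uz

r = np.array([0.0, 1000.0, 3000.0, 6000.0])      # radial distances [m]
ur, uz = mogi_surface_displacement(r, depth=4000.0, dvol=2.0e6)  # dV in m^3
for ri, uri, uzi in zip(r, ur, uz):
    print(f"r = {ri:6.0f} m  u_r = {uri*100:6.2f} cm  u_z = {uzi*100:6.2f} cm")
```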

  4. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We performed the modelling of wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.

  5. Tsunami simulation using submarine displacement calculated from simulation of ground motion due to seismic source model

    NASA Astrophysics Data System (ADS)

    Akiyama, S.; Kawaji, K.; Fujihara, S.

    2013-12-01

    Since fault rupture during an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate the ground motion and the tsunami with a single fault model. However, separate source models are often used independently in ground motion simulation and tsunami simulation, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located at the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated by a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we carry out a tsunami simulation using the displacement field of oceanic crustal movements calculated from a ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body waves and on the strong ground motion records, respectively. Although those fault models share a common feature, the amount of slip near the Japan Trench is larger in the fault model from the strong ground motion records than in that from the teleseismic body waves. First, large-scale ground motion simulations applying those fault models are performed for the whole of eastern Japan using a voxel-type finite element method. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)), deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). Next, the tsunami simulations are performed by finite difference calculation based on the shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of the ocean bottom due to the crustal movements, which is obtained from the ground motion simulation mentioned above. The results of the tsunami simulations are compared with observations from the GPS wave gauges to evaluate the validity of tsunami prediction using a fault model based on seismic observation records.
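
    The sketch below mirrors the second half of that workflow in one dimension: the initial sea-surface displacement is set equal to a prescribed seafloor uplift and then propagated with a staggered-grid linear shallow-water finite-difference scheme (d eta/dt = -d(h*u)/dx, du/dt = -g*d eta/dx). The bathymetry, uplift shape, and all constants are illustrative placeholders, not the 2011 Tohoku configuration.

```python
import numpy as np

# Minimal 1-D linear shallow-water FD sketch: initial free surface from a
# prescribed uplift, then staggered-grid propagation. Illustrative only.

g = 9.81
nx, dx = 2000, 1000.0                     # 2000-km domain, 1-km grid
h = np.full(nx + 1, 4000.0)               # water depth at velocity points [m]
dt = 0.5 * dx / np.sqrt(g * h.max())      # CFL-limited time step
nt = 2000

x = np.arange(nx) * dx
eta = 2.0 * np.exp(-((x - 500e3) / 50e3) ** 2)   # 2-m, 50-km-wide uplift
u = np.zeros(nx + 1)                             # velocities on cell faces

for _ in range(nt):
    # Velocity update from the free-surface gradient (interior faces only)
    u[1:-1] -= g * dt * (eta[1:] - eta[:-1]) / dx
    # Free-surface update from the divergence of the volume flux h*u
    flux = h * u
    eta -= dt * (flux[1:] - flux[:-1]) / dx

print(f"max wave height after {nt * dt / 60:.0f} min: {eta.max():.2f} m")
```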

  6. Source characterization of a small earthquake cluster at Edmond, Oklahoma using a very dense array

    NASA Astrophysics Data System (ADS)

    Ng, R.; Nakata, N.

    2017-12-01

    Recent seismicity in Oklahoma has caught the attention of the public in the last few years, since seismicity is commonly associated with losses in urban areas. To respond to the increase in public interest, improve the understanding of damaging ground motions produced by earthquakes, and develop better seismic hazard assessment, we must characterize the seismicity in Oklahoma and its associated structure and source parameters. Regional changes in subsurface stresses have increased seismic activity due to reactivation of faults in places such as central Oklahoma. It is imperative for seismic investigation and modeling to characterize subsurface structural features that may influence the damaging effects of ground motion. We analyze the full-waveform data collected from a temporary dense array of 72 portable seismometers with 110 m spacing that was active for a one-month period from May to June 2017, deployed at Edmond, Oklahoma. The data from this one-month deployment captured over 10,000 events and enabled us to measure small-scale lateral variations of earthquake wavefields. We examine the waveforms of these events using advanced methods of detection and location, and determine their source mechanisms. We compare our results with selected events listed in the Oklahoma Geological Survey (OGS) and United States Geological Survey (USGS) catalogues. Based on the detected and located small events, we discuss the causative fault structure in the area and present the results of the investigation.

  7. Wave propagation modelling of induced earthquakes at the Groningen gas production site

    NASA Astrophysics Data System (ADS)

    Paap, Bob; Kraaijpoel, Dirk; Bakker, Marcel; Gharti, Hom Nath

    2018-06-01

    Gas extraction from the Groningen natural gas field, situated in the Netherlands, frequently induces earthquakes in the reservoir that cause damage to buildings and pose a safety hazard and a nuisance to the local population. Because the national heating infrastructure depends on Groningen gas, short-term mitigation measures are mostly limited to a combination of spatiotemporal redistribution of gas production and strengthening measures for buildings. All options become more effective with a better understanding of both source processes and seismic wave propagation. Detailed wave propagation simulations improve both the inference of source processes from observed ground motions and the forecasting of ground motions as input for hazard studies and seismic network design. The velocity structure at the Groningen site is relatively complex, including both deep high-velocity and shallow low-velocity deposits that show significant thickness variations over relatively small spatial extents. We performed a detailed three-dimensional wave propagation modelling study for an induced earthquake in the Groningen natural gas field using the spectral-element method. We considered an earthquake that nucleated along a normal fault with a local magnitude of ML = 3. We created a dense mesh with element sizes varying from 12 to 96 m and used a source frequency of 7 Hz, such that frequencies generated during the simulation were accurately sampled up to 10 Hz. The velocity/density model is constructed from a three-dimensional geological model of the area, including both the deep high-velocity salt deposits overlying the source region and the shallow low-velocity sediments present in a deep but narrow tunnel valley. The results show that the three-dimensional density/velocity structure in the Groningen area clearly plays a large role in the wave propagation and the resulting surface ground motions. The 3D structure produces significant lateral variations in site response. The high-velocity salt deposits have a dispersive effect on the radiated wavefield, reducing the seismic energy reaching the surface near the epicentre. In turn, the presence of low-velocity tunnel valley deposits can locally cause a significant increase in peak ground acceleration. Here we study induced seismicity at a local scale, use SPECFEM3D to conduct full waveform simulations, and show how local velocity variations can affect seismic records.
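
    As a back-of-the-envelope check on mesh design of this kind (a sketch, not taken from the paper), one can relate element size, minimum wavespeed, and the highest frequency a spectral-element mesh resolves using the common rule of thumb of roughly one element (with 5 Gauss-Lobatto-Legendre points) per minimum wavelength; the wavespeeds below are hypothetical.

        def max_resolved_frequency(v_min, element_size, elements_per_wavelength=1.0):
            """Rule-of-thumb maximum frequency resolved by a spectral-element mesh.

            Assumes roughly one 5-GLL-point element per minimum wavelength; the exact
            criterion depends on the solver and polynomial degree.
            """
            lambda_min = elements_per_wavelength * element_size
            return v_min / lambda_min

        # Hypothetical wavespeeds paired with the element sizes quoted in the abstract:
        print(max_resolved_frequency(v_min=200.0, element_size=12.0))    # shallow soft sediments
        print(max_resolved_frequency(v_min=1000.0, element_size=96.0))   # deeper, faster units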

  8. Investigation of Seismic Waves from Non-Natural Sources: A Case Study for Building Collapse and Surface Explosion

    NASA Astrophysics Data System (ADS)

    Houng, S.; Hong, T.

    2013-12-01

    The nature and excitation mechanisms of incidents or non-natural events have been widely investigated using seismological techniques. With the introduction of dense seismic networks, small non-natural events such as building collapses and chemical explosions are now well recorded. Two representative non-natural seismic sources are investigated. The Sampoong department store, a 5-story building in South Korea, collapsed on June 25, 1995, causing 1445 casualties. This accident is known as the second deadliest non-terror-related building collapse in the world. The event was well recorded by a local station ~9 km away. P and S waves were recorded only weakly, while monotonic Rayleigh waves were observed clearly. The origin time is determined using the surface-wave arrival time. The magnitude of the event is determined to be 1.2, which coincides with a theoretical estimate based on the mass and volume of the building. Synthetic waveforms are modeled for various combinations of velocity structures and source time functions, which allows us to constrain the collapse process. It appears that the building collapsed in a single stage within a couple of seconds. We also investigate an M2.1 chemical explosion at a fertilizer plant in Texas on April 18, 2013. It was reported that more than one hundred people were killed or injured by the explosion. Seismic waveforms from nearby stations are collected from the Incorporated Research Institutions for Seismology (IRIS). The event was well recorded at stations up to ~500 km from the source. Strong acoustic signals were observed at stations along a particular great-circle direction. This observation suggests preferential propagation of acoustic waves depending on the atmospheric environment. Waveform cross-correlation, spectral analysis, and waveform modeling are applied to understand the source physics. We discuss the nature of the sources and their excitation mechanisms.

  9. Analysis and Simulation of Near-Field Wave Motion Data from the Source Physics Experiment Explosions

    DTIC Science & Technology

    2011-09-01

    understanding and ability to model explosively generated seismic waves, particularly S-waves. The first SPE explosion (SPE1) consisted of a 100 kg shot at a...depth of 60 meters in granite (Climax Stock). The shot was well-recorded by an array of over 150 instruments, including both near-field wave motion...measurements as well as far-field seismic measurements. This paper focuses on measurements and modeling of the near-field data. A complementary

  10. Seismic hazards in Thailand: a compilation and updated probabilistic analysis

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi; Charusiri, Punya

    2016-06-01

    A probabilistic seismic hazard analysis (PSHA) for Thailand was performed and compared to those of previous works. This PSHA was based upon (1) the most up-to-date paleoseismological data (slip rates), (2) the seismic source zones, (3) the seismicity parameters (a and b values), and (4) the strong ground-motion attenuation models suggested as being suitable for Thailand. For the PSHA mapping, both the ground shaking and the probability of exceedance (POE) were analyzed and mapped using various methods of presentation. In addition, site-specific PSHAs were demonstrated for ten major provinces within Thailand. For instance, ground shaking of 0.1-0.4 g and 0.1-0.2 g at 2% and 10% POE in the next 50 years, respectively, was found for western Thailand, defining this area as the most earthquake-prone region evaluated in Thailand. In a comparison among the ten selected provinces, the Kanchanaburi and Tak provinces had comparatively high seismic hazards, and therefore effective mitigation plans for these areas should be made. Although Bangkok was assessed as a low seismic hazard area in this PSHA, a further study of seismic wave amplification due to the soft soil beneath Bangkok is required.
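
    The 2% and 10% in-50-year exceedance probabilities used in such maps follow from annual exceedance rates under the usual Poisson assumption; the short sketch below (not the authors' code) shows the conversion and the corresponding return periods.

        import numpy as np

        def poe(annual_rate, t_years=50.0):
            """Probability of at least one exceedance in t_years (Poisson assumption)."""
            return 1.0 - np.exp(-annual_rate * t_years)

        def annual_rate_for_poe(p, t_years=50.0):
            """Annual exceedance rate whose t-year probability of exceedance equals p."""
            return -np.log(1.0 - p) / t_years

        for p in (0.10, 0.02):
            print(f"POE {p:.0%} in 50 yr -> return period {1.0 / annual_rate_for_poe(p):.0f} yr")
            # prints roughly 475 yr and 2475 yr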

  11. Seismic moment tensor inversion using 3D velocity model and its application to the 2013 Lushan earthquake sequence

    NASA Astrophysics Data System (ADS)

    Zhu, Lupei; Zhou, Xiaofeng

    2016-10-01

    Source inversion of small-magnitude events such as aftershocks or mine collapses requires the use of relatively high frequency seismic waveforms, which are strongly affected by small-scale heterogeneities in the crust. In this study, we developed a new inversion method, called gCAP3D, for determining the general moment tensor of a seismic source using Green's functions of 3D models. It inherits the advantageous features of the "Cut-and-Paste" (CAP) method, breaking a full seismogram into Pnl and surface-wave segments and allowing time shifts between observed and predicted waveforms. It uses a grid search over five source parameters (the relative strengths of the isotropic and compensated-linear-vector-dipole components and the strike, dip, and rake of the double-couple component) to minimize the waveform misfit. The scalar moment is estimated using the ratio of the L2 norms of the data and synthetics. Focal depth can also be determined by repeating the inversion at different depths. We applied gCAP3D to the 2013 Ms 7.0 Lushan earthquake and its aftershocks using a 3D crust-upper mantle velocity model derived from ambient noise tomography in the region. We first relocated the events using the double-difference method. We then used the finite-difference method and the reciprocity principle to calculate Green's functions of the 3D model for 20 permanent broadband seismic stations within 200 km of the source region. We obtained moment tensors of the mainshock and 74 aftershocks ranging from Mw 5.2 to 3.4. The results show that the Lushan earthquake was reverse faulting at a depth of 13-15 km on a plane dipping 40-47° to N46°W. Most of the aftershocks occurred off the main rupture plane and have focal mechanisms similar to the mainshock's, except in the proximity of the mainshock, where the aftershock focal mechanisms display some variation.
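
    A minimal sketch of the grid-search idea described above (not the gCAP3D code): search the double-couple angles on a coarse grid, and estimate the scalar-moment scale factor from the norm ratio of data and unit-moment synthetics. The forward function, station geometry, and 10-degree step are hypothetical.

        import numpy as np
        from itertools import product

        def misfit(data, synthetic):
            """L2 waveform misfit after scaling the synthetics by the norm ratio."""
            scale = np.linalg.norm(data) / np.linalg.norm(synthetic)   # scalar-moment factor
            return np.sum((data - scale * synthetic) ** 2), scale

        def grid_search(data, forward):
            """Brute-force search over strike/dip/rake (hypothetical 10-degree steps).

            `forward(strike, dip, rake)` is assumed to return unit-moment synthetics
            concatenated over the same stations and components as `data`.
            """
            best = (np.inf, None, None)
            for strike, dip, rake in product(range(0, 360, 10),
                                             range(0, 91, 10),
                                             range(-90, 91, 10)):
                m, scale = misfit(data, forward(strike, dip, rake))
                if m < best[0]:
                    best = (m, (strike, dip, rake), scale)
            return best     # (misfit, best double-couple angles, scalar-moment scale)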

  12. Localized time-lapse elastic waveform inversion using wavefield injection and extrapolation: 2-D parametric studies

    NASA Astrophysics Data System (ADS)

    Yuan, Shihao; Fuji, Nobuaki; Singh, Satish; Borisov, Dmitry

    2017-06-01

    We present a methodology to invert seismic data for a localized area by combining a source-side wavefield injection method with a receiver-side extrapolation method. Despite the high resolving power of seismic full waveform inversion, the computational cost of practical-scale elastic or viscoelastic waveform inversion remains a heavy burden. This burden can be much more severe for time-lapse surveys, which require near-real-time seismic imaging on a daily or weekly basis. Moreover, structural changes during time-lapse surveys are likely to occur in a small area rather than over the whole region of the seismic experiment, for example around an oil and gas reservoir or a CO2 injection well. We thus propose an approach that allows us to image effectively and quantitatively localized structural changes far from both the source and receiver arrays. In our method, we perform both forward and back propagation only inside the target region. First, we obtain an equivalent source expression enclosing the region of interest by using the wavefield injection method. Second, we extrapolate the wavefield from physical receivers located near the Earth's surface or on the ocean bottom to an array of virtual receivers in the subsurface by using the correlation-type representation theorem. In this study, we present various 2-D elastic numerical examples of the proposed method and quantitatively evaluate errors in the obtained models in comparison with those of conventional full-model inversions. The results show that the proposed localized waveform inversion is not only efficient and robust but also accurate, even in the presence of errors in both the initial models and the observed data.

  13. Finite-difference numerical simulations of underground explosion cavity decoupling

    NASA Astrophysics Data System (ADS)

    Aldridge, D. F.; Preston, L. A.; Jensen, R. P.

    2012-12-01

    Earth models containing a significant portion of ideal fluid (e.g., air and/or water) are of increasing interest in seismic wave propagation simulations. Examples include a marine model with a thick water layer, and a land model with air overlying a rugged topographic surface. The atmospheric infrasound community is currently interested in coupled seismic-acoustic propagation of low-frequency signals over long ranges (~tens to ~hundreds of kilometers). Also, accurate and efficient numerical treatment of models containing underground air-filled voids (caves, caverns, tunnels, subterranean man-made facilities) is essential. In support of the Source Physics Experiment (SPE) conducted at the Nevada National Security Site (NNSS), we are developing a numerical algorithm for simulating coupled seismic and acoustic wave propagation in mixed solid/fluid media. Solution methodology involves explicit, time-domain, finite-differencing of the elastodynamic velocity-stress partial differential system on a three-dimensional staggered spatial grid. Conditional logic is used to avoid shear stress updating within the fluid zones; this approach leads to computational efficiency gains for models containing a significant proportion of ideal fluid. Numerical stability and accuracy are maintained at air/rock interfaces (where the contrast in mass density is on the order of 1 to 2000) via a finite-difference operator "order switching" formalism. The fourth-order spatial FD operator used throughout the bulk of the earth model is reduced to second-order in the immediate vicinity of a high-contrast interface. Current modeling efforts are oriented toward quantifying the amount of atmospheric infrasound energy generated by various underground seismic sources (explosions and earthquakes). Source depth and orientation, and surface topography play obvious roles. The cavity decoupling problem, where an explosion is detonated within an air-filled void, is of special interest. A point explosion source located at the center of a spherical cavity generates only diverging compressional waves. However, we find that shear waves are generated by an off-center source, or by a non-spherical cavity (e.g. a tunnel). Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
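
    A minimal 2-D sketch of the conditional-update idea described above (not the Sandia code, which is 3-D, staggered, and fourth-order): shear stress is simply never updated where the shear modulus is zero, so fluid cells behave acoustically. The model, source, and collocated second-order differencing are placeholder simplifications.

        import numpy as np

        # Hypothetical 2-D model: upper half water/air-like fluid (mu = 0), lower half rock.
        nx, nz, dx, dt = 300, 300, 10.0, 1.0e-3
        fluid_col = np.arange(nz)[None, :] < nz // 2
        rho = np.where(fluid_col, 1000.0, 2500.0) * np.ones((nx, 1))    # kg/m^3
        vp  = np.where(fluid_col, 1500.0, 4000.0) * np.ones((nx, 1))    # m/s
        vs  = np.where(fluid_col,    0.0, 2300.0) * np.ones((nx, 1))    # m/s
        lam, mu = rho * (vp**2 - 2.0 * vs**2), rho * vs**2
        solid = mu > 0.0                  # conditional mask: shear exists only in solids

        vx = np.zeros((nx, nz)); vz = np.zeros((nx, nz))
        sxx = np.zeros((nx, nz)); szz = np.zeros((nx, nz)); sxz = np.zeros((nx, nz))

        def dX(f):  # centred x-derivative (collocated grid, for brevity)
            g = np.zeros_like(f); g[1:-1, :] = (f[2:, :] - f[:-2, :]) / (2 * dx); return g

        def dZ(f):  # centred z-derivative (dz = dx)
            g = np.zeros_like(f); g[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / (2 * dx); return g

        src_x, src_z = nx // 2, int(0.75 * nz)   # toy explosion source in the rock
        for it in range(1000):
            t = it * dt
            vx += dt / rho * (dX(sxx) + dZ(sxz))
            vz += dt / rho * (dX(sxz) + dZ(szz))
            exx, ezz = dX(vx), dZ(vz)
            sxx += dt * ((lam + 2 * mu) * exx + lam * ezz)
            szz += dt * (lam * exx + (lam + 2 * mu) * ezz)
            sxz[solid] += (dt * mu * (dZ(vx) + dX(vz)))[solid]      # skipped where mu == 0
            sxx[src_x, src_z] += np.exp(-((t - 0.05) / 0.01) ** 2)  # isotropic source term
            szz[src_x, src_z] += np.exp(-((t - 0.05) / 0.01) ** 2)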

  14. Medium effect on the characteristics of the coupled seismic and electromagnetic signals.

    PubMed

    Huang, Qinghua; Ren, Hengxin; Zhang, Dan; Chen, Y John

    2015-01-01

    Recently developed numerical simulation techniques can simulate the coupled seismic and electromagnetic signals for a double-couple point source or a finite planar fault source. Besides the source effect, the simulation results show that both the medium structure and the medium properties can affect the coupled seismic and electromagnetic signals. The waveforms of the coupled signals for a layered structure are more complicated than those for a simple uniform structure. Unlike the seismic signals, the electromagnetic signals are sensitive to medium properties such as fluid salinity and fluid viscosity. Therefore, the co-seismic electromagnetic signals may be more informative than the seismic signals.

  16. Time-Reversal Location of the 2004 M6.0 Parkfield Earthquake Using the Vertical Component of Seismic Data.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Johnson, P.; Huang, L.; Randall, G.; Patton, H.; Montagner, J.

    2007-12-01

    In this work we describe time reversal experiments using seismic waves recorded from the 2004 M6.0 Parkfield earthquake. The reverse seismic wavefield is created by time-reversing recorded seismograms and then injecting them from the seismograph locations into a whole-Earth velocity model. The concept is identical to acoustic Time-Reversal Mirror laboratory experiments, except that the seismic data are numerically backpropagated through a velocity model (Fink, 1996; Ulrich et al., 2007). Data are backpropagated using the finite element code SPECFEM3D (Komatitsch et al., 2002), employing the velocity model s20rts (Ritsema et al., 2000). In this paper, we backpropagate only the vertical component of seismic data from about 100 broadband surface stations located worldwide (FDSN), using the period band of 23-120 s. We use only those waveforms that are highly correlated with forward-propagated synthetics. The focusing quality depends upon the type of waves backpropagated; for the vertical displacement component the possible types include body waves, Rayleigh waves, or their combination. We show that Rayleigh waves, both real and artifact, dominate the reverse movie in all cases. The artifacts are created during rebroadcast of the time-reversed signals, including the body wave phases, because we use point-like force sources for injection. These artifact waves, termed "ghosts", manifest as surface waves and do not correspond to real wave phases of the forward propagation. The surface ghost waves can significantly blur the focusing at the source. We find that the ghosts cannot be easily eliminated in the manner described by Tsogka & Papanicolaou (2002). It is necessary to understand how they are created in order to remove them in TRM studies, particularly when using only the body waves. For this moderate-magnitude earthquake we demonstrate the robustness of TRM as an alternative location method despite the restriction to vertical-component phases. One advantage of TRM location is that it does not rely on prior picking of specific phases (Larmat et al., 2006). In future work we will conduct TRM backpropagation using the horizontal displacement components of seismic data as well and study the source complexity (double couples). Our ultimate goal is to determine whether Time Reversal offers information about the source that cannot be obtained from other methods, or that complements other methods.
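
    A minimal numpy sketch of the time-reversal step itself (the back-propagation solver, SPECFEM3D in this study, is left abstract): traces are reversed in time and used as injection source time functions, and only stations whose records correlate well with forward synthetics are kept. The 0.6 threshold is a hypothetical choice.

        import numpy as np

        def time_reverse(traces):
            """Time-reverse records for reinjection as point-force source time functions.

            traces : array of shape (n_stations, n_samples), vertical-component records.
            """
            return traces[:, ::-1].copy()

        def well_correlated(obs, syn, threshold=0.6):
            """Select stations whose records correlate well with forward synthetics."""
            cc = np.sum(obs * syn, axis=1) / (
                np.linalg.norm(obs, axis=1) * np.linalg.norm(syn, axis=1) + 1e-20)
            return cc > threshold

        # usage sketch: sources = time_reverse(obs[well_correlated(obs, syn)])
        # each row is then injected as a vertical point force at its station location.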

  17. GDP: A new source for shallow high-resolution seismic exploration

    NASA Astrophysics Data System (ADS)

    Rashed, Mohamed A.

    2009-06-01

    Gas-Driven Piston (GDP) is a new source for shallow seismic exploration. The source works by igniting a small amount of gas inside a closed chamber connected to a vertical steel cylinder. The gas explosion drives a steel piston, mounted inside the cylinder, downward so that the piston's thick head strikes a steel base at the end of the cylinder, generating a strong shock wave in the ground. Experimental field tests conducted near Ismailia, Egypt, show that the portable, inexpensive, and environmentally benign GDP generates stronger seismic waves than the sledgehammer commonly used in shallow seismic exploration. Tests also show that the GDP is highly repeatable and controllable and that its seismic waves contain a substantial amount of high-frequency energy, which makes the GDP an excellent source for shallow seismic exploration.

  18. Hydraulic transients: a seismic source in volcanoes and glaciers.

    PubMed

    Lawrence, W S; Qamar, A

    1979-02-16

    A source for certain low-frequency seismic waves is postulated in terms of the water hammer effect. The time-dependent displacement of a water-filled sub-glacial conduit is analyzed to demonstrate the nature of the source. Preliminary energy calculations and the observation of hydraulically generated seismic radiation from a dam indicate the plausibility of the proposed source.

  19. Numerical simulation of seismic wave propagation from land-excited large volume air-gun source

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, W.

    2017-12-01

    The land-excited large volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low-frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs, or man-made pools. Numerical simulation of the seismic wave propagation from the air-gun source helps us understand the energy partitioning and the characteristics of the waveforms recorded at stations. However, the effective energy recorded at a distant station comes from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate the seismic wave propagation from a land-excited large volume air-gun source with the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling, and propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated with a bubble model. We use a wave injection method that combines the bubble wavelet with the elastic wave equation to achieve the source coupling. Then, the solid-fluid boundary condition is implemented along the water bottom. The last part is the seismic wave propagation in the solid medium, which can be readily implemented by the finite difference method. Our method produces accurate waveforms for the land-excited large volume air-gun source. Based on this forward modeling technology, we analyze the excitation of P waves and the energy of converted S waves for different water body shapes. We study two land-excited large volume air-gun sites, one at Binchuan in Yunnan and the other at Hutubi in Xinjiang. The Binchuan station in Yunnan is located in a large irregular reservoir, and its waveform records show a clear S wave. In contrast, the Hutubi station in Xinjiang is located in a small man-made pool, and its waveform records show a very weak S wave. A better understanding of the characteristics of the land-excited large volume air-gun can help make better use of this source.

  20. Fault- and Area-Based PSHA in Nepal using OpenQuake: New Insights from the 2015 M7.8 Gorkha-Nepal Earthquake

    NASA Astrophysics Data System (ADS)

    Stevens, Victoria

    2017-04-01

    The 2015 Gorkha-Nepal M7.8 earthquake (hereafter the Gorkha earthquake) highlights the seismic risk in Nepal, allows better characterization of the geometry of the Main Himalayan Thrust (MHT), and enables comparison of recorded ground motions with predicted ground motions. These new data, together with recent paleoseismic studies and geodetic-based coupling models, allow good parameterization of the fault characteristics. Other faults in Nepal remain less well studied. Unlike previous PSHA studies in Nepal, which are exclusively area-based, we use a mix of faults and areas to describe six seismic sources in Nepal. For each source, the Gutenberg-Richter a and b values are found and the maximum magnitude earthquake is estimated, using a combination of earthquake catalogs, moment conservation principles, and similarities to other tectonic regions. The MHT and the Karakoram fault are described as fault sources, whereas four other sources - normal faulting in the N-S trending grabens of northern Nepal, strike-slip faulting in both eastern and western Nepal, and background seismicity - are described as area sources. We use OpenQuake (http://openquake.org/) to carry out the analysis, and peak ground acceleration (PGA) at 2% and 10% probability of exceedance in 50 years is computed for Nepal, along with hazard curves at various locations. We compare this PSHA model with previous area-based models of Nepal. The Main Himalayan Thrust is the principal seismic hazard in Nepal, so we study the effects of changing several parameters associated with this fault. We compare the ground shaking predicted from the various fault geometries suggested by the Gorkha earthquake with each other and with a simple model of a flat fault. We also show the results of incorporating a coupling model based on geodetic data and microseismicity, which limits the down-dip extent of rupture. No ground-motion prediction equations (GMPEs) have been developed specifically for Nepal, so we compare the results of standard GMPEs, used together with an earthquake scenario representing the Gorkha earthquake, with data from the Gorkha earthquake itself. The Gorkha earthquake also highlighted the importance of basin, topographic, and directivity effects, and of the location of high-frequency sources, in influencing ground motion. Future study aims at incorporating the above, together with consideration of the fault-rupture history and its influence on the location and timing of future earthquakes.

  1. Interactive Visualizations of Complex Seismic Data and Models

    NASA Astrophysics Data System (ADS)

    Chai, C.; Ammon, C. J.; Maceira, M.; Herrmann, R. B.

    2016-12-01

    The volume and complexity of seismic data and models have increased dramatically thanks to dense seismic station deployments and advances in data modeling and processing. Seismic observations such as receiver functions and surface-wave dispersion are multidimensional: latitude, longitude, time, and amplitude for receiver functions, and latitude, longitude, period, and velocity for dispersion curves. Three-dimensional seismic velocity models are characterized by three spatial dimensions and one additional dimension for the wavespeed. In these circumstances, exploring the data and models and assessing the data fits is a challenge. A few professional packages are available to visualize these complex data and models. However, most of these packages rely on expensive commercial software or require a substantial time investment to master, and even when that effort is complete, communicating the results to others remains a problem. A traditional approach during the model interpretation stage is to examine data fits and model features using a large number of static displays. Publications include a few key slices or cross-sections of these high-dimensional data, but this prevents others from directly exploring the model and the corresponding data fits. In this presentation, we share interactive visualization examples of complex seismic data and models that are based on open-source tools and are easy to implement. Model and data are linked in an intuitive and informative web-browser-based display that can be used to explore the model and the features in the data that influence various aspects of the model. We encode the model and data into HTML files and present the high-dimensional information using two approaches. The first uses a Python package to pack both data and interactive plots into a single file. The second uses JavaScript, CSS, and HTML to build a dynamic webpage for seismic data visualization. The tools have proven useful and have led to deeper insight into 3D seismic models and the data used to construct them. Such easy-to-use interactive displays are also valuable in teaching environments: user-friendly interactivity allows students to explore large, complex data sets and models at their own pace, enabling a more accessible learning experience.
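
    The abstract does not name the packages used, so the snippet below uses plotly purely as an illustration of the single-self-contained-HTML-file idea (data and interactive plot embedded together); the cross-section data and file name are placeholders.

        import numpy as np
        import plotly.graph_objects as go

        # Toy velocity-model slice: latitude x depth grid of shear wavespeeds (placeholder).
        lat = np.linspace(30.0, 40.0, 101)
        depth = np.linspace(0.0, 100.0, 51)
        vs = 3.5 + 0.01 * depth[None, :] + 0.1 * np.sin(np.radians(lat))[:, None]

        fig = go.Figure(go.Heatmap(x=depth, y=lat, z=vs, colorbar=dict(title="Vs (km/s)")))
        fig.update_layout(title="Hypothetical cross-section",
                          xaxis_title="Depth (km)", yaxis_title="Latitude (deg)")
        fig.write_html("vs_section.html", include_plotlyjs="cdn")   # one shareable HTML file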

  2. Source characterization of underground explosions from hydrodynamic-to-elastic coupling simulations

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Pitarka, A.; Ford, S. R.; Ezzedine, S. M.; Vorobiev, O.

    2017-12-01

    A major improvement in ground motion simulation capabilities for underground explosion monitoring during the first phase of the Source Physics Experiment (SPE) is the development of a wave propagation solver that can propagate explosion-generated non-linear near-field ground motions to the far field. The calculation uses a hybrid modeling approach with one-way hydrodynamic-to-elastic coupling in three dimensions, in which near-field motions are computed with GEODYN-L, a Lagrangian hydrodynamics code, and then passed to WPP, an elastic finite-difference code for seismic waveform modeling. This advancement in ground motion simulation capabilities gives us the opportunity to assess moment tensor inversion of a realistic volumetric source with near-field effects in a controlled setting, where we can evaluate the recovered source properties as a function of modeling parameters (i.e., the velocity model) and provide insights into previous source studies of the SPE Phase I chemical shots and other historical nuclear explosions. For example, moment tensor inversion of far-field SPE seismic data demonstrated that, while vertical motions are well modeled using existing velocity models, large misfits still persist in predicting tangential shear wave motions from explosions. One possible explanation we can explore is errors and uncertainties in the underlying Earth model. Here we investigate the recovered moment tensor solution, particularly the non-volumetric component, by inverting far-field ground motions simulated from physics-based explosion source models in fractured material, where the physics-based source models are based on modeling of the SPE-4P, SPE-5, and SPE-6 near-field data. The hybrid modeling approach provides new prospects for modeling the explosion source and understanding the uncertainties associated with it.
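
    The inversion machinery is not spelled out in the abstract, so the following is a generic linear least-squares moment-tensor fit (waveforms expressed as a Green's-function matrix times the six independent tensor components), included only to make the discussion of volumetric versus non-volumetric components concrete; all names are hypothetical.

        import numpy as np

        def invert_moment_tensor(G, d):
            """Least-squares moment tensor from far-field waveforms.

            G : (n_samples, 6) matrix of Green's-function combinations, one column per
                independent component (Mxx, Myy, Mzz, Mxy, Mxz, Myz).
            d : (n_samples,) concatenated observed waveforms.
            """
            m, *_ = np.linalg.lstsq(G, d, rcond=None)
            return m

        def isotropic_fraction(m):
            """Rough volumetric measure (one of several conventions in use)."""
            M = np.array([[m[0], m[3], m[4]],
                          [m[3], m[1], m[5]],
                          [m[4], m[5], m[2]]])
            iso = np.trace(M) / 3.0
            dev = M - iso * np.eye(3)
            dev_max = np.max(np.abs(np.linalg.eigvalsh(dev)))
            return abs(iso) / (abs(iso) + dev_max + 1e-20)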

  3. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    It is clear that the uncertainties in the maximum tsunami wave heights in offshore areas derive in part from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method relating the maximum tsunami wave heights to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated with COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude Mw 8.0 occurred on the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific offshore sites to verify the validity of the proposed method. To rank the importance of the uncertainties in the potential seismic source parameters (the earthquake magnitude, focal depth, strike angle, dip angle, slip angle, etc.) in generating uncertainties in the maximum tsunami wave heights, we use the Morris method to analyze the sensitivity of the maximum tsunami wave heights to these parameters and give several qualitative descriptions of their linear or nonlinear effects. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among them, by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle, and the dip angle; that the interaction effects among the sensitive parameters are evident at specific offshore sites; and that the importance ranking of the same group of parameters differs among offshore sites. These results are helpful for understanding the relationship between tsunami wave heights and seismic tsunami source parameters. Keywords: global sensitivity analysis; tsunami wave height; potential seismic tsunami source parameter; Morris method; extended FAST method
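
    A minimal implementation of elementary-effects (Morris-type) screening on a placeholder surrogate, included to make the method concrete; the real study evaluated COMCOT simulations rather than the toy function used here, and the parameter bounds are hypothetical. This is the radial one-at-a-time variant, which compares each perturbed point with the trajectory's base point.

        import numpy as np
        rng = np.random.default_rng(0)

        names = ["magnitude", "depth", "strike", "dip", "rake"]
        bounds = np.array([[7.5, 8.5], [10.0, 50.0], [0.0, 360.0], [10.0, 60.0], [0.0, 180.0]])

        def surrogate_height(x):
            """Placeholder for a COMCOT run: returns a fake maximum wave height (m)."""
            mag, dep, strike, dip, rake = x
            return (0.8 * (mag - 7.0) ** 2 - 0.01 * dep
                    + 0.2 * np.sin(np.radians(dip)) + 0.05 * np.cos(np.radians(strike)))

        def elementary_effects(model, bounds, n_traj=50, delta=0.1):
            k = len(bounds)
            ee = np.zeros((n_traj, k))
            scale = bounds[:, 1] - bounds[:, 0]
            for t in range(n_traj):
                x = rng.uniform(0.0, 1.0 - delta, size=k)       # base point in the unit cube
                y0 = model(bounds[:, 0] + x * scale)
                for i in range(k):                              # perturb one factor at a time
                    xp = x.copy(); xp[i] += delta
                    ee[t, i] = (model(bounds[:, 0] + xp * scale) - y0) / delta
            # mu* ranks importance; sigma flags nonlinearity or interactions
            return np.abs(ee).mean(axis=0), ee.std(axis=0)

        mu_star, sigma = elementary_effects(surrogate_height, bounds)
        for n, m, s in sorted(zip(names, mu_star, sigma), key=lambda z: -z[1]):
            print(f"{n:9s}  mu*={m:7.3f}  sigma={s:7.3f}")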

  4. Two-dimensional seismic velocity models of southern Taiwan from TAIGER transects

    NASA Astrophysics Data System (ADS)

    McIntosh, K. D.; Kuochen, H.; Van Avendonk, H. J.; Lavier, L. L.; Wu, F. T.; Okaya, D. A.

    2013-12-01

    We use a broad combination of wide-angle seismic data sets to develop high-resolution, crustal-scale, two-dimensional velocity models across southern Taiwan and the adjacent Huatung Basin. The data were recorded primarily during the TAIGER project and include records of thousands of marine airgun shots, several land explosive sources, and ~90 earthquakes. Both the airgun sources and the earthquake data were recorded by dense land arrays, and ocean bottom seismographs (OBS) recorded the airgun sources east of Taiwan. This combination of data sets enables us to develop a high-resolution upper- to mid-crustal model defined by the marine and explosive sources, while also constraining the full crustal structure, with depths approaching 50 km, by using the earthquake and explosive sources. These data and the resulting models are particularly important for understanding the development of arc-continent collision in Taiwan. McIntosh et al. (2013) have shown that highly extended continental crust of the northeastern South China Sea rifted margin is underthrust at the Manila Trench southwest of Taiwan but is then structurally underplated to the accretionary prism. This process of basement accretion is confirmed in the southern Central Range of Taiwan, where basement outcrops can be directly linked to high seismic velocities measured in the accretionary prism well south of the continental shelf, even south of Taiwan. These observations indicate that the southern Central Range begins to grow well before there is any direct interaction between the North Luzon arc and the Eurasian continent. Our transects provide information on how the accreted mass behaves as it approaches the continental shelf and on the deformation of the arc and forearc as this occurs. We suggest that arc-continent collision in Taiwan actually develops as arc-prism-continent collision.

  5. Numerical modeling and characterization of rock avalanches and associated seismic signal

    NASA Astrophysics Data System (ADS)

    Moretti, L.; Mangeney, A.; Capdeville, Y.; Stutzmann, E.; Lucas, A.; Huggel, C.; Schneider, D.; Crosta, G. B.; Bouchut, F.

    2012-04-01

    Gravitational instabilities such as landslides, avalanches, and debris flows play a key role in erosion processes and represent one of the major natural hazards in mountainous, coastal, and volcanic regions. Despite the great amount of field, experimental, and numerical work devoted to this problem, the understanding of the physical processes at work in gravitational flows is still an open issue, in particular because of the lack of observations relevant to their dynamics. In this context, the seismic signal generated by gravitational flows is a unique opportunity to obtain information on their dynamics. Indeed, as shown recently by Favreau et al. (2010), simulation of the seismic signal generated by landslides makes it possible to discriminate between different flow scenarios and to estimate the rheological parameters during the flow. Because global and regional seismic networks continuously record gravitational instabilities, this new method will help gather new data on landslide behavior. The purpose of our research is to establish new relations that make it possible to extract landslide characteristics such as volume, mass, geometry, and location from seismic observations (amplitude, duration, energy, etc.). The 2005 Mount Steller (Alaska) rock-ice avalanche and the 2004 Thurwieser (Italy) landslide have been simulated [Huggel et al., 2008; Favreau et al., 2010]. The Mount Steller landslide was recorded by ten seismic stations located between 37 and 630 km from the source (i.e., the landquake source) at different azimuths. The Thurwieser landslide was recorded by two seismic stations a few tens of kilometers from the landslide. For the two rock avalanches we simulated the associated seismic signal. The comparison between simulated and recorded seismic signals makes it possible to discriminate between different landslide scenarios. Some simulations show a remarkably good fit to the seismic recordings, suggesting that these scenarios are closer to reality. Sensitivity analyses show how the recorded seismic signal depends on the characteristics of the landslide (volume, mass, friction coefficient, etc.) and on the earth model (seismic wave velocities, number of layers, etc.) used to calculate wave propagation. Favreau, P., Mangeney, A., Lucas, A., Crosta, G. B., and Bouchut, F. (2010), Numerical modeling of landquakes, Geophysical Research Letters, 37, L15305, doi:10.1029/2010GL043512. Huggel, C., Caplan-Auerbach, J., Molnia, B., and Wessels, R. (2008), The 2005 Mt. Steller, Alaska, rock-ice avalanche: A large slope failure in cold permafrost, Proceedings of the Ninth International Conference on Permafrost, vol. 1, p. 747-752, Univ. of Alaska Fairbanks.

  6. Crustal wavespeed structure of North Texas and Oklahoma based on ambient noise cross-correlation functions and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Zhu, H.

    2017-12-01

    Recently, seismologists have observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, some studies have suggested possible links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their mechanisms, we need an accurate 3D crustal wavespeed model for North Texas and Oklahoma. Considering the uneven distribution of earthquakes in this region, seismic tomography with local earthquake records has difficulty achieving good illumination. To overcome this limitation, in this study ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeed. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions, and the adjoint method is used to calculate misfit gradients with respect to wavespeed. Twenty-five preconditioned conjugate-gradient iterations are used to update the model parameters and minimize the data misfit. Features in the new crustal model M25 correlate with geological units in the study region, including the Llano uplift, the Anadarko basin, and the Ouachita orogenic front. In addition, these seismic anomalies correlate with gravity and magnetic observations. The new model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter locations and moment tensor solutions, which are important for investigating potential relations between seismicity and unconventional oil and gas exploration.
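
    The data side of this workflow can be sketched compactly (the adjoint tomography itself requires a full spectral-element solver): daily noise records from two stations are cross-correlated in the frequency domain, and stacking such correlations over months yields the band-limited Green's functions whose phases are fit. The whitening and lag-window choices below are hypothetical.

        import numpy as np

        def noise_cross_correlation(tr1, tr2, dt, whiten=True, max_lag_s=200.0):
            """Frequency-domain cross-correlation of two pre-processed noise records.

            tr1, tr2 : equal-length 1-D arrays (e.g., one day of vertical-component noise).
            Returns lag times (s) and the correlation; stacked over many days this
            approximates the band-limited inter-station Green's function.
            """
            n = len(tr1)
            F1 = np.fft.rfft(tr1, 2 * n)              # zero-pad to avoid wrap-around
            F2 = np.fft.rfft(tr2, 2 * n)
            if whiten:                                # simple spectral whitening
                F1 = F1 / (np.abs(F1) + 1e-10)
                F2 = F2 / (np.abs(F2) + 1e-10)
            cc = np.fft.fftshift(np.fft.irfft(F1 * np.conj(F2)))
            lags = (np.arange(2 * n) - n) * dt
            keep = np.abs(lags) <= max_lag_s
            return lags[keep], cc[keep]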

  7. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) over the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong Plateau earthquake (Mw 8.7), the 1934 Bihar-Nepal earthquake (Mw 8.3), and the 1950 Upper Assam earthquake (Mw 8.7), signifies the possibility of future great earthquakes in this region. A regional seismicity map for the study region is prepared by plotting earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the India Meteorological Department (IMD). Based on geology, tectonics, and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone developed at the junction of the Indian and Eurasian plates; it shows dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone encompasses the subduction zone and the Assam syntaxis and has suffered great earthquakes such as the 1950 Assam, 1934 Bihar, and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults such as the Dauki fault and exhibits its own style of prominent tectonic features; the seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, a forecast of major earthquakes for each source zone is estimated. In this model, the supply of energy for potential earthquakes in an area is assumed to be remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked and can be utilized for forecasting major earthquakes. The proposed approach provides a consistent model of gradual strain accumulation and non-uniform release through large earthquakes and can be applied in the evaluation of seismic risk. The cumulative seismic energy released by major earthquakes in each zone throughout the 110-year period from 1897 to 2007 is calculated and plotted, giving a characteristic curve for each zone. Each curve is irregular, reflecting occasional high activity. The maximum earthquake energy available at a particular time in a given area is given by S. The difference between this theoretical upper limit S and the cumulative energy released up to that time is calculated to find the maximum magnitude of an earthquake that can occur in the future. The blocked energies of the three source zones are 1.35 × 10^17 J, 4.25 × 10^17 J, and 0.12 × 10^17 J, respectively, for zones 1, 2, and 3, available as a supply for potential earthquakes in due course of time. The predicted maximum magnitudes (m_max) obtained for the source zones AYZ, HZ, and SPZ are 8.2, 8.6, and 8.4, respectively. This study is also consistent with previous predictions by other workers.
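
    The arithmetic behind such blocked-energy estimates can be reproduced with the standard Gutenberg-Richter magnitude-energy relation, log10 E[J] = 1.5 M + 4.8. The sketch below (not the authors' code; the catalog and supply rate are placeholders) shows the conversion, and inverting a blocked energy of about 4.25 × 10^17 J indeed gives an m_max near 8.6, the value quoted for the Himalayan zone.

        import numpy as np

        def energy_joules(m):
            """Gutenberg-Richter magnitude-energy relation: log10 E[J] = 1.5 M + 4.8."""
            return 10.0 ** (1.5 * np.asarray(m) + 4.8)

        def magnitude_from_energy(e_joules):
            """Invert the relation to get the magnitude equivalent of a blocked energy."""
            return (np.log10(e_joules) - 4.8) / 1.5

        def blocked_energy(catalog_mags, years, supply_rate_j_per_yr):
            """Assumed uniform supply minus the cumulative energy released by the catalog."""
            return supply_rate_j_per_yr * years - energy_joules(catalog_mags).sum()

        print(round(magnitude_from_energy(4.25e17), 1))   # -> 8.6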

  8. The effect of directivity in a PSHA framework

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Herrero, A.; Cultrera, G.

    2012-09-01

    We propose a method to introduce a refined representation of ground motion in the framework of probabilistic seismic hazard analysis (PSHA). This study is especially oriented toward the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. The first treats the seismic source as an extended source and is applicable when the PSHA seismogenic sources are represented as fault segments. We show that incorporating variables related to the directivity effect can lead to variations of up to 20 per cent in the hazard level for dip-slip faults with a uniform distribution of hypocentre location, in terms of the 5 s spectral acceleration response at a 10 per cent probability of exceedance in 50 yr. The second strategy addresses the more general problem of seismogenic areas, in which each point is a seismogenic source with the same chance of nucleating a seismic event. In our formulation the point source is associated with rupture-related parameters defined through a statistical description. As an example, we consider a source point in an area characterized by a strike-slip faulting style. With the introduction of the directivity correction, the modulation of the hazard map reaches values of up to 100 per cent (for strike-slip, unilateral ruptures). The introduction of directivity does not increase the hazard level uniformly; rather, it redistributes the estimates in a manner consistent with the fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge now exists on the style of faulting, dip, and orientation of the faults associated with the majority of the seismogenic zones in present seismic hazard maps. The percentage variation obtained depends strongly on the model chosen to represent the directivity effect analytically. Our aim is therefore to emphasize the methodology, through which the information collected can be readily converted into a more comprehensive and meaningful probabilistic seismic hazard formulation.

  9. Reflection processing of the large-N seismic data from the Source Physics Experiment (SPE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paschall, Olivia C.

    2016-07-18

    The purpose of the SPE is to develop a more physics-based model for nuclear explosion identification and to understand the development of S-waves from explosion sources, in order to enhance nuclear test ban treaty monitoring.

  10. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

  11. Source mechanism inversion and ground motion modeling of induced earthquakes in Kuwait - A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.

    2016-12-01

    The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has had low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes, Mw 4.5 on 03/21/2015 and Mw 4.1 on 08/18/2015, were recorded by both the Incorporated Research Institutions for Seismology (IRIS) and the KNSN and were widely felt by people in Kuwait. These earthquakes occur repeatedly at the same locations close to the oil/gas fields in Kuwait. The earthquakes are generally small (Mw < 5) and shallow, with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the locations and source mechanisms of these local earthquakes, with their uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that these local earthquakes most likely occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, because they occur in the reservoirs, they are very shallow, with focal depths of less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to damage local structures that were not built to seismic design criteria.
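
    The Bayesian machinery is not detailed in the abstract, so the following is a generic random-walk Metropolis sketch over a few source parameters with a Gaussian waveform likelihood, meant only to illustrate how posterior samples yield the quoted uncertainties; the forward model, starting point, step sizes, and noise level are all placeholders.

        import numpy as np
        rng = np.random.default_rng(1)

        def log_likelihood(params, data, forward, sigma=1.0):
            """Gaussian waveform-misfit log-likelihood; `forward(params)` returns synthetics."""
            resid = data - forward(params)
            return -0.5 * np.sum(resid ** 2) / sigma ** 2

        def metropolis(data, forward, start, step, n_iter=20000):
            """Random-walk Metropolis over, e.g., (strike, dip, rake, depth_km)."""
            x = np.asarray(start, dtype=float)
            logp = log_likelihood(x, data, forward)
            samples = []
            for _ in range(n_iter):
                prop = x + step * rng.standard_normal(x.size)
                logp_prop = log_likelihood(prop, data, forward)
                if np.log(rng.uniform()) < logp_prop - logp:     # accept/reject step
                    x, logp = prop, logp_prop
                samples.append(x.copy())
            return np.array(samples)   # posterior samples; spread gives mechanism/depth uncertainty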

  12. Analysis, comparison, and modeling of radar interferometry data of surface deformation signals associated with underground explosions, mine collapses and earthquakes. Phase I: underground explosions, Nevada Test Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foxall, W; Vincent, P; Walter, W

    1999-07-23

    We have previously presented simple elastic deformation modeling results for three classes of seismic events of concern in monitoring the CTBT: underground explosions, mine collapses, and earthquakes. Those results explored the theoretical detectability of each event type using synthetic aperture radar interferometry (InSAR) based on commercially available satellite data. In those studies we identified and compared the characteristics of synthetic interferograms that distinguish each event type, as well as the ability of the interferograms to constrain source parameters. These idealized modeling results, together with preliminary analysis of InSAR data for the 1995 mb 5.2 Solvay mine collapse in southwestern Wyoming, suggested that InSAR data used in conjunction with regional seismic monitoring hold great potential for CTBT discrimination and seismic source analysis, as well as for providing accurate ground truth parameters for regional calibration events. In this paper we further examine the detectability and "discriminating" power of InSAR by presenting results from InSAR data processing, analysis, and modeling of the surface deformation signals associated with underground explosions. Specifically, we present results of a detailed study of coseismic and postseismic surface deformation signals associated with underground nuclear and chemical explosion tests at the Nevada Test Site (NTS). Several interferograms were formed from raw ERS-1/2 radar data covering different time spans and epochs, beginning just prior to the last U.S. nuclear tests in 1992 and ending in 1996. These interferograms have yielded information about the nature and duration of the source processes that produced the surface deformation associated with these events. A critical result of this study is that significant post-event surface deformation associated with underground nuclear explosions detonated at depths in excess of 600 meters can be detected using differential radar interferometry. An immediate implication of this finding is that underground nuclear explosions may not need to be captured coseismically by radar images acquired before and after an event in order to be detectable. This has obvious advantages in CTBT monitoring, since suspect seismic events, which usually can be located within a 100 km by 100 km area of an ERS-1/2 satellite frame by established seismic methods, can be imaged after the event has been identified and located by existing regional seismic networks. Key words: InSAR, SLC images, interferogram, synthetic interferogram, ERS-1/2 frame, phase unwrapping, DEM, coseismic, postseismic, source parameters.

  13. Seismic interferometry of the Bighorn Mountains: Using virtual source gathers to increase fold in sparse-source, dense-receiver data

    NASA Astrophysics Data System (ADS)

    Plescia, S. M.; Sheehan, A. F.; Haines, S. S.; Cook, S. W.; Worthington, L. L.

    2016-12-01

    The Bighorn Arch Seismic Experiment (BASE) was a combined active- and passive-source seismic experiment designed to image deep structures including the Moho beneath a basement-involved foreland arch. In summer 2010, over 1800 Texan receivers, with 4.5 Hz vertical component geophones, were deployed at 100-m to 1-km spacing in a region spanning the Bighorn Arch and the adjacent Bighorn and Powder River Basins. Twenty explosive sources were used to create seismic energy during a two-week acquisition period. Teleseismic earthquakes and mine blasts were also recorded during this time period. We utilize both virtual source interferometry and traditional reflection processing to better understand the deep crustal features of the region and the Moho. The large number of receivers, compared to the limited, widely spaced (10 - 30 km) active-source shots, makes the data an ideal candidate for virtual source seismic interferometry to increase fold. Virtual source interferometry results in data representing a geometry where receiver locations act as if they were seismic source positions. A virtual source gather, the product of virtual source interferometry, is produced by the cross correlation of one receiver's recording, the reference trace, with the recordings of all other receivers in a given shot gather. The cross correlation is repeated for all shot gathers and the resulting traces are stacked. This process is repeated until a virtual source gather has been determined for every real receiver location. Virtual source gathers can be processed with a standard reflection seismic processing flow to yield a reflection section. Improper static corrections can be detrimental to effective stacking, and determination of proper statics is often difficult in areas of significant contrast such as between basin and mountain areas. As such, a natural synergy exists between virtual source interferometry and modern industry reflection seismic processing, with its emphasis on detailed static correction and dense acquisition geometries.
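
    The construction described above reduces to a few lines of numpy (a sketch under the stated acquisition geometry, not the processing code actually used): within each shot gather the reference trace is cross-correlated with every other trace, and the correlations are stacked over shots.

        import numpy as np

        def virtual_source_gather(shot_gathers, ref_idx):
            """Build a virtual-source gather for the receiver at index `ref_idx`.

            shot_gathers : array of shape (n_shots, n_receivers, n_samples).
            Returns an (n_receivers, 2*n_samples - 1) array of stacked cross-correlations,
            i.e. the data as if a source had been fired at receiver `ref_idx`.
            """
            n_shots, n_rec, n_samp = shot_gathers.shape
            gather = np.zeros((n_rec, 2 * n_samp - 1))
            for s in range(n_shots):
                ref = shot_gathers[s, ref_idx]                 # reference trace for this shot
                for r in range(n_rec):
                    # cross-correlate the reference trace with every other recording
                    gather[r] += np.correlate(shot_gathers[s, r], ref, mode="full")
            return gather                                      # stacking over shots enhances stationary phases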

  14. Seismic Imaging of UXO-Contaminated Underwater Sites (Interim Report)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gritto, Roland; Korneev, Valeri; Nihei, Kurt

    2004-11-30

    Finite difference modeling with two-dimensional models was conducted to evaluate the performance of source-receiver arrays for locating UXO in littoral environments. The model parameters were taken from measurements in coastal areas with typical bay mud and from examples in the literature. Seismic arrays are well suited to focus energy by steering the elements of the array toward any point in the medium that acts as an energy source. This principle also applies to seismic waves that are backscattered by buried UXO. The power of the array is particularly evident in strong noise conditions, when the signal-to-noise ratio is too low to observe the scattered signal on the seismograms. Using a seismic array, it was possible to detect and locate the UXO with a reliability similar to that in noise-free situations. When the UXO was positioned within 3-6 wavelengths of the incident signal from the source array, the resolution was good enough to determine the dimensions of the UXO from the scattered waves. Beyond this distance the resolution decreased gradually, while the location of the center of the UXO was still determined reliably. The locations and dimensions of two adjacent UXO were resolved down to a separation of one third of the dominant wavelength of the incident wave, at which point interference effects began to appear. In the investigated cases, the ability to locate a UXO was independent of whether a model with a rippled or a flat seafloor was used, as long as the array was located above the UXO; in these cases the correct parameters of the seafloor interface were nevertheless obtained. An investigation to find the correct migration velocity in the sediments for locating the UXO revealed that a range of velocity gradients centered on the correct velocity model produced comparable results, which needs to be further investigated with physical modeling.

  15. Increasing seismicity in the U. S. midcontinent: Implications for earthquake hazard

    USGS Publications Warehouse

    Ellsworth, William L.; Llenos, Andrea L.; McGarr, Arthur F.; Michael, Andrew J.; Rubinstein, Justin L.; Mueller, Charles S.; Petersen, Mark D.; Calais, Eric

    2015-01-01

    Earthquake activity in parts of the central United States has increased dramatically in recent years. The space-time distribution of the increased seismicity, as well as numerous published case studies, indicates that the increase is of anthropogenic origin, principally driven by injection of wastewater coproduced with oil and gas from tight formations. Enhanced oil recovery and long-term production also contribute to seismicity at a few locations. Preliminary hazard models indicate that areas experiencing the highest rate of earthquakes in 2014 have a short-term (one-year) hazard comparable to or higher than the hazard in the source region of tectonic earthquakes in the New Madrid and Charleston seismic zones.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parra, J.; Collier, H.; Angstman, B.

    In low-porosity, low-permeability zones, natural fractures are the primary source of permeability and affect both production and injection of fluids. The open fractures do not contribute much to porosity, but they provide an increased drainage network to whatever porosity exists. An important approach to characterizing the fracture orientation and fracture permeability of reservoir formations is based upon the effects of such conditions on the propagation of acoustic and seismic waves in the rock. We assess the feasibility of using seismic measurement techniques to map the fracture zones between wells spaced 2400 ft apart at depths of about 1000 ft. For this purpose we constructed computer models (which include azimuthal anisotropy) using Lodgepole reservoir parameters to predict seismic signatures recorded at the borehole, crosswell, and 3D seismic scales. We integrated well logs with existing 2D surface seismic data to produce petrophysical and geological cross sections and to determine the reservoir parameters and geometry for the computer models. In particular, the model responses are used to evaluate whether surface seismic and crosswell seismic measurements can capture the anisotropy due to vertical fractures. Preliminary results suggest that seismic waves transmitted between two wells will propagate in fractured carbonate reservoirs and that the signal can be received above the noise level at a distance of 2400 ft. In addition, the large velocity contrast between the main fracture zone and the underlying unfractured Boundary Ridge Member suggests that borehole reflection imaging may be appropriate for mapping fracture-zone thickness variation and fracture distributions in the reservoir.

  17. Geodetic Measurements and Numerical Modeling of the Deformation Cycle for Okmok Volcano, Alaska: 1993-2008

    NASA Astrophysics Data System (ADS)

    Ohlendorf, S. J.; Feigl, K.; Thurber, C. H.; Lu, Z.; Masterlark, T.

    2011-12-01

    Okmok Volcano is an active caldera located on Umnak Island in the Aleutian Island arc. Okmok, having recently erupted in 1997 and 2008, is well suited for multidisciplinary studies of magma migration and storage because it hosts a good seismic network and has been the subject of synthetic aperture radar (SAR) images that span the recent eruption cycle. Interferometric SAR can characterize surface deformation in space and time, while data from the seismic network provides important information about the interior processes and structure of the volcano. We conduct a complete time series analysis of deformation of Okmok with images collected by the ERS and Envisat satellites on more than 100 distinct epochs between 1993 and 2008. We look for changes in inter-eruption inflation rates, which may indicate inelastic rheologic effects. For the time series analysis, we analyze the gradient of phase directly, without unwrapping, using the General Inversion of Phase Technique (GIPhT) [Feigl and Thurber, 2009]. This approach accounts for orbital and atmospheric effects and provides realistic estimates of the uncertainties of the model parameters. We consider several models for the source, including the prolate spheroid model and the Mogi model, to explain the observed deformation. Using a medium that is a homogeneous half space, we estimate the source depth to be centered at about 4 km below sea level, consistent with the findings of Masterlark et al. [2010]. As in several other geodetic studies, we find the source to be approximately centered beneath the caldera. To account for rheologic complexity, we next apply the Finite Element Method to simulate a pressurized cavity embedded in a medium with material properties derived from body wave seismic tomography. This approach allows us to address the problem of unreasonably large pressure values implied by a Mogi source with a radius of about 1 km by experimenting with larger sources. We also compare the time dependence of the source to published results that used GPS data.
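    For reference, a minimal sketch of the Mogi point-pressure source mentioned above, assuming a homogeneous elastic half-space; the parameter values are illustrative, not the Okmok estimates:

      import numpy as np

      def mogi_surface_displacement(r, depth, dP_a3_over_mu, nu=0.25):
          """Radial and vertical surface displacement at radial distance r (m) from a point
          pressure source at the given depth (m) in a homogeneous elastic half-space.
          dP_a3_over_mu = (pressure change * source radius**3) / shear modulus, in m^3;
          with nu = 0.25 the prefactor reduces to the classic 3/4 of the Mogi formula."""
          R3 = (r**2 + depth**2) ** 1.5
          ur = (1.0 - nu) * dP_a3_over_mu * r / R3
          uz = (1.0 - nu) * dP_a3_over_mu * depth / R3
          return ur, uz

      # Example: source at 4 km depth (roughly the depth estimated above), illustrative strength
      r = np.linspace(0.0, 15e3, 4)
      print(mogi_surface_displacement(r, depth=4000.0, dP_a3_over_mu=1e7))

    The unreasonably large pressures mentioned in the abstract arise because, for a fixed observed uz, a small source radius a forces dP upward as 1/a^3 in this scaling.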

  18. Parametric Studies for Scenario Earthquakes: Site Effects and Differential Motion

    NASA Astrophysics Data System (ADS)

    Panza, G. F.; Panza, G. F.; Romanelli, F.

    2001-12-01

    In the presence of strong lateral heterogeneities, the generation of local surface waves and local resonance can give rise to a complicated pattern in the spatial ground-shaking scenario. For any object of the built environment with dimensions greater than the characteristic length of the ground motion, different parts of its foundations can experience severe non-synchronous seismic input. In order to perform an accurate estimate of the site effects, and of differential motion, in realistic geometries, it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models, allows us to construct damage scenarios that are out of reach of stochastic models. Synthetic signals, to be used as seismic input in a subsequent engineering analysis, e.g. for the design of earthquake-resistant structures or for the estimation of differential motion, can be produced at a very low cost/benefit ratio. We illustrate the work done in the framework of a large international cooperation following the guidelines of the UNESCO IUGS IGCP Project 414 "Realistic Modeling of Seismic Input for Megacities and Large Urban Areas" and show the very recent numerical experiments carried out within the EC project "Advanced methods for assessing the seismic vulnerability of existing motorway bridges" (VAB) to assess the importance of non-synchronous seismic excitation of long structures. http://www.ictp.trieste.it/www_users/sand/projects.html

  19. Laboratory investigations of seismicity caused by iceberg calving and capsize

    NASA Astrophysics Data System (ADS)

    Cathles, L. M. M., IV; Kaluzienski, L. M.; Burton, J. C.

    2015-12-01

    The calving and capsize of cubic-kilometer-sized icebergs in both Greenland and Antarctica are known to be the source of long-period seismic events classified as glacial earthquakes. The ability to monitor both calving events and the mass of ice calved using the Global Seismographic Network is quite attractive; however, the basic physics of these large calving events must be understood to develop a robust relationship between seismic magnitude and mass of ice calved. The amplitude and duration of the seismic signal are expected to be related to the mass of the calved iceberg and the magnitude of the acceleration of the iceberg's center of mass, yet a simple relationship between these quantities has proved difficult to develop from in situ observations or numerical models. To address this, we developed and carried out a set of experiments on a laboratory-scale model of iceberg calving. These experiments were designed to measure several aspects of the post-fracture calving process. Our results show that a combination of mechanical contact forces and hydrodynamic pressure forces is generated by the capsize of an iceberg adjacent to a glacier's terminus. These forces combine to produce the net horizontal centroid single force (CSF) which is often used to model glacial earthquake sources. We find that although the amplitude and duration of the force applied to the terminus generally increase with the iceberg mass, the details depend on the geometry of the iceberg and the depth of the water. The resulting seismic signal is thus crucially dependent on the hydrodynamics of the capsize process.

  20. Method for enhancing low frequency output of impulsive type seismic energy sources and its application to a seismic energy source for use while drilling

    DOEpatents

    Radtke, Robert P; Stokes, Robert H; Glowka, David A

    2014-12-02

    A method for operating an impulsive type seismic energy source in a firing sequence having at least two actuations for each seismic impulse to be generated by the source. The actuations have a time delay between them related to a selected energy frequency peak of the source output. One example of the method is used for generating seismic signals in a wellbore and includes discharging electric current through a spark gap disposed in the wellbore in at least one firing sequence. The sequence includes at least two actuations of the spark gap separated by an amount of time selected to cause acoustic energy resulting from the actuations to have peak amplitude at a selected frequency.
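    One simple reading of the delay criterion described above can be illustrated as follows: the amplitude spectrum of two identical impulses separated by a delay tau is 2|cos(pi f tau)|, so choosing tau equal to the reciprocal of the selected frequency reinforces energy at that frequency (and its harmonics). This hedged Python sketch uses illustrative numbers only and is not the patented design:

      import numpy as np

      f0 = 20.0                    # desired low-frequency peak, Hz (illustrative)
      tau = 1.0 / f0               # inter-actuation delay under this simple reading
      f = np.linspace(1.0, 30.0, 2901)
      # spectrum of two unit impulses separated by tau: |1 + exp(-i*2*pi*f*tau)|
      pair_spectrum = 2.0 * np.abs(np.cos(np.pi * f * tau))
      print(round(f[np.argmax(pair_spectrum)], 2))   # ~20 Hz: first constructive peak away from DC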

  1. High-Resolution Analysis of Seismicity Induced at Berlín Geothermal Field, El Salvador

    NASA Astrophysics Data System (ADS)

    Kwiatek, G.; Bulut, F.; Dresen, G. H.; Bohnhoff, M.

    2012-12-01

    We investigate induced microseismic activity monitored at Berlín Geothermal Field, El Salvador, during a hydraulic stimulation. The site was monitored for a period of 17 months using thirteen 3-component seismic stations located in shallow boreholes. Three stimulations were performed in the well TR8A with a maximum injection rate and wellhead pressure of 160 l/s and 130 bar, respectively. For the entire time period of our analysis, the acquisition system recorded 581 events with moment magnitudes ranging between -0.5 and 3.7. The initial seismic catalog provided by the operator was substantially improved: 1) We re-picked P- and S-wave onsets and relocated the seismic events using the double-difference relocation algorithm based on cross-correlation-derived differential arrival-time data. Forward modeling was performed using a local 1D velocity model instead of a homogeneous full-space. 2) We recalculated source parameters using the spectral fitting method and refined the results by applying the spectral ratio method. We investigated the source parameters and the spatial and temporal changes of the seismic activity based on the refined dataset and studied the correlation between seismic activity and production. The achieved hypocentral precision allowed resolving the spatiotemporal changes in seismic activity down to a scale of a few meters. The application of the spectral ratio method significantly improved the quality of source parameters in a highly attenuating and geologically complex environment. Of special interest is the largest event (Mw 3.7) and its nucleation process. We investigate whether the refined seismic data display any signature that the largest event was triggered by the shut-in of the well. We found seismic activity displaying clear spatial and temporal patterns that could be readily related to the amount of water injected into the well TR8A and other reinjection wells in the investigated area. Seismicity migrates away from the injection point while the injection rate is increasing. The locations of the migrating events are related to the existing fault system, which is independently supported by calculated focal mechanisms. We found that the event migration continues until the shut-in of the well. The largest-magnitude events occur right after the shut-in, located in undamaged parts of the fault system. Results show that subsequent stimulation episodes require an increased injection rate (or increased wellhead pressure) to reactivate the seismicity (Kaiser effect, "crustal memory" effect). The static stress drop values increase with distance from the injection point, which is interpreted to be related to pore-pressure perturbations introduced by stimulation of the injection well.
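    A hedged sketch of the spectral-ratio idea invoked above, written for a generic Brune omega-squared source model (the study's exact parameterization may differ): for two co-located events recorded at the same station, the path and site terms cancel in the ratio, leaving only the relative source spectra:

      % Recorded spectrum = source x path x site; the ratio of two co-located events
      % removes P(f) and G(f), constraining relative moments and both corner frequencies:
      \begin{align*}
        U_i(f) &= S_i(f)\,P(f)\,G(f), \qquad i = 1,2,\\
        \frac{U_1(f)}{U_2(f)} &= \frac{S_1(f)}{S_2(f)}
          = \frac{M_{0,1}}{M_{0,2}}\;
            \frac{1 + (f/f_{c,2})^{2}}{1 + (f/f_{c,1})^{2}}
            \quad\text{(Brune $\omega^{2}$ model)}.
      \end{align*}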

  2. Improved phase arrival estimate and location for local earthquakes in South Korea

    NASA Astrophysics Data System (ADS)

    Morton, E. A.; Rowe, C. A.; Begnaud, M. L.

    2012-12-01

    The Korean Institute of Geoscience and Mineral Resources (KIGAM) and the Korean Meteorological Agency (KMA) regularly report local (distance < ~1200 km) seismicity recorded with their networks; we obtain preliminary event location estimates as well as waveform data, but no phase arrivals are reported, so the data are not immediately useful for earthquake location. Our goal is to identify seismic events that are sufficiently well located to provide accurate seismic travel-time information for events within the KIGAM and KMA networks, and also recorded by some regional stations. Toward that end, we are using a combination of manual phase identification and arrival-time picking, with waveform cross-correlation, to cluster events that have occurred in close proximity to one another, which allows for improved phase identification by comparing the highly correlated waveforms. We cross-correlate the known events with one another on 5 seismic stations and cluster events that correlate above a correlation coefficient threshold of 0.7, which reveals only a few clusters, each containing a small number of events. The small number of repeating events suggests that the online catalogs have had mining and quarry blasts removed before publication, as these can contribute significantly to repeating seismic sources in relatively aseismic regions such as South Korea. The dispersed source locations in our catalog, however, are ideal for seismic velocity modeling because the dense seismic station arrangement provides superior sampling and favorable event-to-station ray-path coverage. Following careful manual phase picking on 104 events chosen to provide adequate ray coverage, we re-locate the events to obtain improved source coordinates. The re-located events are used with Thurber's Simul2000 pseudo-bending local tomography code to estimate the crustal structure of the Korean Peninsula, which is an important contribution to ongoing calibration for events of interest in the region.
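    The correlate-and-cluster step described above can be sketched as follows; this hedged Python illustration (single station, greedy single-linkage grouping, synthetic inputs) is a simplified stand-in for the authors' workflow, not their code:

      import numpy as np

      def max_norm_xcorr(a, b):
          """Maximum normalized cross-correlation between two equal-length waveforms."""
          a = (a - a.mean()) / (a.std() * len(a))
          b = (b - b.mean()) / b.std()
          return np.max(np.correlate(a, b, mode="full"))

      def cluster_events(waveforms, threshold=0.7):
          """Greedy single-linkage clustering of waveforms (n_events, n_samples):
          any pair correlating above the threshold is placed in the same cluster."""
          n = len(waveforms)
          labels = -np.ones(n, dtype=int)
          current = 0
          for i in range(n):
              if labels[i] < 0:
                  labels[i] = current
                  for j in range(i + 1, n):
                      if labels[j] < 0 and max_norm_xcorr(waveforms[i], waveforms[j]) >= threshold:
                          labels[j] = labels[i]
                  current += 1
          return labels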

  3. The Effects of Heterogeneities on Seismic Wave Propagation in the Climax Stock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan Webb, C.; Snelson, C. M.; White, R.; Emmitt, R.; Barker, D.; Abbott, R.; Bonal, N.

    2011-12-01

    The Comprehensive Nuclear-Test-Ban Treaty requires the ability to detect low-yield (less than 150 kt) nuclear events. This kind of monitoring can only be done seismically on a regional scale (within 2000 km). At this level, it is difficult to distinguish between low-yield nuclear events and non-nuclear events of similar magnitude. In order to confidently identify a nuclear event, a more detailed understanding of nuclear seismic sources is needed. In particular, it is important to know the effects of local geology on the seismic signal. This study focuses on P-wave velocity in heterogeneous granitoid. The Source Physics Experiment (SPE) is currently performing low-yield tests with chemical explosives at the Nevada National Security Site (NNSS). The exact test site was chosen to be in the Climax Stock, a Cretaceous granodiorite and quartz-monzonite pluton located in Area 15 of the NNSS. It has been used in the past for the Hard Hat and Pile Driver nuclear tests, which provided legacy data that can be used to simulate wave propagation. The Climax Stock was originally chosen as the site of the SPE partly because of its assumed homogeneity. It has since been discovered that the area of the stock where the SPE tests are being performed contains a perched water table. In addition, the stock is known to contain an extensive network of faults, joints, and fractures, but the exact effect of these structural features on seismic wave velocity is not fully understood. The SPE tests are designed to seismically capture the explosion phenomena from the near-field to the far-field transition of the seismic waveform. In the first SPE experiment, 100 kg of chemical explosives was detonated at a depth of 55 m. The blast was recorded with an array of sensors and diagnostics, including accelerometers, geophones, rotational sensors, short-period and broadband seismic sensors, Continuous Reflectometry for Radius versus Time Experiment, Time of Arrival, Velocity of Detonation, and infrasound sensors. The focus of this study is two-fold: (1) the geophone array that was deployed over the SPE shot and (2) a high-resolution seismic profile that was recently acquired at the field site. The geophone array was placed radially around the SPE shot in five directions with 100-m spacing and out to a distance of 2 km. The high-resolution profile was about 475 m in length with station and shot spacing of 5 m, using a 7000-lb mini-vibe as a source. In both data sets, the first arrivals will be used to develop velocity models. For the geophone array, 1-D P-wave velocity models will be developed to determine an average apparent velocity of the Climax Stock. The high-resolution data will be used to develop a 2-D P-wave velocity model along the seismic profile. This is in an effort to elucidate the water table in more detail and provide additional information on the near-surface structure. These results will be used in the overall modeling effort to fully characterize the test bed and develop a physics-based model to simulate seismic energy from the SPE events.
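    As a hedged illustration of how the geophone first arrivals translate into an average apparent velocity, the following Python fits a straight line to synthetic travel-time picks; the numbers are invented, not SPE data:

      import numpy as np

      offsets = np.array([100., 300., 500., 800., 1200., 2000.])   # source-receiver offsets, m (illustrative)
      picks = offsets / 5200.0 + 0.012                              # synthetic first-arrival times, s, with a small delay
      slope, intercept = np.polyfit(offsets, picks, 1)              # travel time ~ slope*offset + intercept
      print("apparent velocity ~", round(1.0 / slope), "m/s")       # ~5200 m/s
      print("intercept (near-surface delay) ~", round(intercept, 3), "s")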

  4. Uncertainties in Earthquake Loss Analysis: A Case Study From Southern California

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Guin, J.

    2005-12-01

    Probabilistic earthquake hazard and loss analyses play important roles in many areas of risk management, including earthquake-related public policy and insurance ratemaking. Rigorous loss estimation for portfolios of properties is difficult since there are various types of uncertainties in all aspects of modeling and analysis. It is the objective of this study to investigate the sensitivity of earthquake loss estimation to uncertainties in regional seismicity, earthquake source parameters, ground motions, and the spatial correlation among sites, for typical property portfolios in Southern California. Southern California is an attractive region for such a study because it has a large population concentration exposed to significant levels of seismic hazard. During the last decade, there have been several comprehensive studies of most regional faults and seismogenic sources. There have also been detailed studies on regional ground motion attenuations and regional and local site responses to ground motions. This information has been used by engineering seismologists to conduct regional seismic hazard and risk analysis on a routine basis. However, one of the more difficult tasks in such studies is the proper incorporation of uncertainties in the analysis. From the hazard side, there are uncertainties in the magnitudes, rates and mechanisms of the seismic sources, and in local site conditions and ground motion site amplifications. From the vulnerability side, there are considerable uncertainties in estimating the state of damage of buildings under different earthquake ground motions. From the analytical side, there are challenges in capturing the spatial correlation of ground motions and building damage, and in integrating thousands of loss distribution curves with different degrees of correlation. In this paper we propose to address some of these issues by conducting loss analyses of a typical small portfolio in southern California, taking into consideration various source and ground motion uncertainties. The approach is designed to integrate loss distribution functions with different degrees of correlation for portfolio analysis. The analysis is based on the USGS 2002 regional seismicity model.

  5. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    DOE PAGES

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; ...

    2017-10-17

    In this paper we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of the DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo sampler is scalable in terms of the number of chains and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the joint inversion of seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data provides better estimates of reservoir saturations than the seismic Amplitude Versus Angle-only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in the observational data was evaluated: reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
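    A minimal multi-chain random-walk Metropolis sketch is given below for orientation; the paper's sampler is a DREAM/Adaptive-Metropolis hybrid, which is considerably more elaborate, and the two-parameter posterior used here is hypothetical:

      import numpy as np

      def log_post(theta):
          # Hypothetical 2-D posterior (stand-in for, e.g., rescaled porosity and saturation)
          return -0.5 * np.sum((theta - np.array([0.25, 0.6]))**2 / 0.01)

      def run_chains(n_chains=4, n_steps=5000, step=0.05, seed=1):
          rng = np.random.default_rng(seed)
          chains = np.empty((n_chains, n_steps, 2))
          theta = rng.uniform(0.0, 1.0, size=(n_chains, 2))            # independent starting points
          lp = np.array([log_post(t) for t in theta])
          for k in range(n_steps):
              prop = theta + step * rng.standard_normal(theta.shape)   # random-walk proposal per chain
              lp_prop = np.array([log_post(p) for p in prop])
              accept = np.log(rng.uniform(size=n_chains)) < (lp_prop - lp)
              theta[accept], lp[accept] = prop[accept], lp_prop[accept]
              chains[:, k, :] = theta
          return chains

      samples = run_chains()
      print(samples[:, 2500:, :].reshape(-1, 2).mean(axis=0))           # posterior mean after burn-in

    Because the chains are independent here, the work parallelizes trivially across chains, which is the scalability property the abstract highlights (the DREAM-style cross-chain proposals add coupling that this sketch omits).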

  7. Seismic Hazard Maps for Seattle, Washington, Incorporating 3D Sedimentary Basin Effects, Nonlinear Site Response, and Rupture Directivity

    USGS Publications Warehouse

    Frankel, Arthur D.; Stephenson, William J.; Carver, David L.; Williams, Robert A.; Odum, Jack K.; Rhea, Susan

    2007-01-01

    This report presents probabilistic seismic hazard maps for Seattle, Washington, based on over 500 3D simulations of ground motions from scenario earthquakes. These maps include 3D sedimentary basin effects and rupture directivity. Nonlinear site response for soft-soil sites of fill and alluvium was also applied in the maps. The report describes the methodology for incorporating source and site dependent amplification factors into a probabilistic seismic hazard calculation. 3D simulations were conducted for the various earthquake sources that can affect Seattle: Seattle fault zone, Cascadia subduction zone, South Whidbey Island fault, and background shallow and deep earthquakes. The maps presented in this document used essentially the same set of faults and distributed-earthquake sources as in the 2002 national seismic hazard maps. The 3D velocity model utilized in the simulations was validated by modeling the amplitudes and waveforms of observed seismograms from five earthquakes in the region, including the 2001 M6.8 Nisqually earthquake. The probabilistic seismic hazard maps presented here depict 1 Hz response spectral accelerations with 10%, 5%, and 2% probabilities of exceedance in 50 years. The maps are based on determinations of seismic hazard for 7236 sites with a spacing of 280 m. The maps show that the most hazardous locations for this frequency band (around 1 Hz) are soft-soil sites (fill and alluvium) within the Seattle basin and along the inferred trace of the frontal fault of the Seattle fault zone. The next highest hazard is typically found for soft-soil sites in the Duwamish Valley south of the Seattle basin. In general, stiff-soil sites in the Seattle basin exhibit higher hazard than stiff-soil sites outside the basin. Sites with shallow bedrock outside the Seattle basin have the lowest estimated hazard for this frequency band.
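    For readers converting between the hazard levels quoted above, the standard Poissonian relation between probability of exceedance in an exposure time and the equivalent return period is sketched below (a textbook relation, not specific to this report):

      import math

      def return_period(poe, exposure_years=50.0):
          """Return period equivalent to a probability of exceedance poe in exposure_years,
          assuming Poissonian occurrence."""
          return -exposure_years / math.log(1.0 - poe)

      for poe in (0.10, 0.05, 0.02):
          print(f"{poe:.0%} in 50 yr  ->  ~{return_period(poe):.0f}-yr return period")
      # 10% -> ~475 yr, 5% -> ~975 yr, 2% -> ~2475 yr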

  8. Gas and gas hydrate distribution around seafloor seeps in Mississippi Canyon, Northern Gulf of Mexico, using multi-resolution seismic imagery

    USGS Publications Warehouse

    Wood, W.T.; Hart, P.E.; Hutchinson, D.R.; Dutta, N.; Snyder, F.; Coffin, R.B.; Gettrust, J.F.

    2008-01-01

    To determine the impact of seeps and focused flow on the occurrence of shallow gas hydrates, several seafloor mounds in the Atwater Valley lease area of the Gulf of Mexico were surveyed with a wide range of seismic frequencies. Seismic data were acquired with a deep-towed, Helmholtz resonator source (220-820 Hz); a high-resolution, Generator-Injector air gun (30-300 Hz); and an industrial air-gun array (10-130 Hz). Each showed a significantly different response in this weakly reflective, highly faulted area. Seismic modeling and observations of reversed-polarity reflections and small-scale diffractions are consistent with a model of methane transport dominated regionally by diffusion but punctuated by intense upward advection responsible for the bathymetric mounds, as well as likely advection along pervasive filamentous fractures away from the mounds.

  9. Gas Hydrate Petroleum System Modeling in western Nankai Trough Area

    NASA Astrophysics Data System (ADS)

    Tanaka, M.; Aung, T. T.; Fujii, T.; Wada, N.; Komatsu, Y.

    2017-12-01

    Since 2003, we have been building gas hydrate (GH) petroleum system models covering the eastern Nankai Trough, Japan, and the resource potential estimated from the regional model shows a good match with values derived from seismic and log data. Here, we apply this method to explore the GH potential of the western Nankai Trough study area. In our study area, GH prospects have been identified with the aid of a bottom-simulating reflector (BSR) and high-velocity anomalies above the BSR, interpreted from 3D migrated seismic data and high-density velocity cubes. To understand the pathways of biogenic methane from source to GH prospects, 1D, 2D, and 3D GH petroleum system models are built and investigated. The study interval comprises lower Miocene to Pleistocene, deep- to shallow-marine sedimentary successions, with Pliocene and Pleistocene layers overlying the basement; the BSR was interpreted within the Pliocene and Pleistocene layers. Based on six sequence boundaries interpreted from the 3D migrated seismic and velocity data, a 3D depth framework model was constructed and populated with a conceptual submarine-fan depositional facies model derived from seismic facies analysis and existing geological reports. 1D models were created to analyze lithology sensitivity against temperature and vitrinite data from an exploratory well drilled in the vicinity of the study area, and the resulting petroleum-system-model parameters were applied in the 2D and 3D modeling and simulation. An existing report on the exploratory well indicates that gas of thermogenic origin is also present, so simulation scenarios including source formations for both biogenic and thermogenic reaction models were investigated. Simulation results show that the lower boundary of the GH saturation zone at pseudo-wells matches the interpreted BSR to within a few tens of meters. Sensitivity analysis shows that the simulated temperature is controlled by the choice of peak-generation-temperature model and geochemical parameters. Progressive folding and updipping layers, including paleostructure, can effectively assist upward migration of biogenic gas. The mixed biogenic-thermogenic model shows that only the kitchen center has the potential to generate thermogenic hydrocarbons. The prospects identified from seismic interpretation are consistent with the high-GH-saturation areas predicted by the 3D modeling results.

  10. The Temblor mobile seismic risk app, v2: Rapid and seamless earthquake information to inspire individuals to recognize and reduce their risk

    NASA Astrophysics Data System (ADS)

    Stein, R. S.; Sevilgen, V.; Sevilgen, S.; Kim, A.; Jacobson, D. S.; Lotto, G. C.; Ely, G.; Bhattacharjee, G.; O'Sullivan, J.

    2017-12-01

    Temblor quantifies and personalizes earthquake risk and offers solutions by connecting users with qualified retrofit and insurance providers. Temblor's daily blog on current earthquakes, seismic swarms, eruptions, floods, and landslides makes the science accessible to the public. Temblor is available on iPhone, Android, and mobile web app platforms (http://temblor.net). The app presents both scenario (worst case) and probabilistic (most likely) financial losses for homes and commercial buildings, and estimates the impact of seismic retrofit and insurance on the losses and safety. Temblor's map interface has clickable earthquakes (with source parameters and links) and active faults (name, type, and slip rate) around the world, and layers for liquefaction, landslides, tsunami inundation, and flood zones in the U.S. The app draws from the 2014 USGS National Seismic Hazard Model and the 2014 USGS Building Seismic Safety Council ShakeMap scenario database. The Global Earthquake Activity Rate (GEAR) model is used worldwide, with active faults displayed in 75 countries. The Temblor real-time global catalog is merged from global and national catalogs, with aftershocks discriminated from mainshocks. Earthquake notifications are issued to Temblor users within 30 seconds of their occurrence, with approximate locations and magnitudes that are rapidly refined in the ensuing minutes. Launched in 2015, Temblor has 650,000 unique users, including 250,000 in the U.S. and 110,000 in Chile, as well as 52,000 Facebook followers. All data shown in Temblor are gathered from authoritative or published sources and are synthesized to be intuitive and actionable to the public. Principal data sources include USGS, FEMA, EMSC, GEM Foundation, NOAA, GNS Science (New Zealand), INGV (Italy), PHIVOLCS (Philippines), GSJ (Japan), Taiwan Earthquake Model, EOS Singapore (Southeast Asia), MTA (Turkey), PB2003 (plate boundaries), CICESE (Baja California), California Geological Survey, and 20 other state geological surveys and county agencies.

  11. Toward Broadband Source Modeling for the Himalayan Collision Zone

    NASA Astrophysics Data System (ADS)

    Miyake, H.; Koketsu, K.; Kobayashi, H.; Sharma, B.; Mishra, O. P.; Yokoi, T.; Hayashida, T.; Bhattarai, M.; Sapkota, S. N.

    2017-12-01

    The Himalayan collision zone is characterized by a distinctive tectonic setting. There are earthquakes with low-angle thrust faulting as well as continental outer-rise earthquakes. Recently, several historical earthquakes have been identified by active fault surveys [e.g., Sapkota et al., 2013]. We here investigate source scaling for the Himalayan collision zone as a fundamental factor in constructing source models for seismic hazard assessment. As for source scaling in collision zones, Yen and Ma [2011] reported source scaling for the subduction zone in Taiwan and pointed out non-self-similar scaling due to the finite crustal thickness. On the other hand, current global analyses of stress drop do not show abnormal values for continental collision zones [e.g., Allmann and Shearer, 2009]. Based on compiled profiles of crustal thickness and dip-angle variations, we discuss whether such bending exists for Himalayan source scaling, and its implications for stress drop, which controls strong ground motions. Owing to their quite low-angle dip faulting, recent earthquakes in the Himalayan collision zone plot at the upper bound of the current source scaling of rupture area vs. seismic moment (< Mw 8.0) and do not show significant bending of the source scaling. Toward broadband source modeling for ground motion prediction, we perform empirical Green's function simulations for the 2009 Bhutan and 2015 Gorkha earthquake sequences to quantify both long- and short-period source spectral levels.
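    The "source scaling of rupture area vs. seismic moment" referred to above is, in its self-similar (constant stress drop) form, the standard relation sketched here for reference; the constant C depends on rupture geometry (e.g., 16/(7*pi^(3/2)) for a circular crack):

      % Self-similar scaling: constant stress drop implies a fixed slope of 2/3 in log A vs. log M0
      \[
        M_0 = C\,\Delta\sigma\,A^{3/2}
        \quad\Longrightarrow\quad
        \log_{10} A = \tfrac{2}{3}\log_{10} M_0 + \text{const},
      \]
      % so a break ("bending") of the A-M0 trend at large magnitude, as reported for
      % finite-width seismogenic zones, corresponds to a departure from constant stress drop.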

  12. The information content of high-frequency seismograms and the near-surface geologic structure of "hard rock" recording sites

    USGS Publications Warehouse

    Cranswick, E.

    1988-01-01

    Due to hardware developments in the last decade, the high-frequency end of the frequency band of seismic waves analyzed for source mechanisms has been extended into the audio-frequency range (>20 Hz). In principle, the short wavelengths corresponding to these frequencies can provide information about the details of seismic sources, but in fact, much of the "signal" is the site response of the near-surface. Several examples of waveform data recorded at "hard rock" sites, which are generally assumed to have a "flat" transfer function, are presented to demonstrate the severe signal distortions, including fmax, produced by near-surface structures. Analysis of the geology of a number of sites indicates that the overall attenuation of high-frequency (>1 Hz) seismic waves is controlled by the whole-path Q between source and receiver, but the presence of distinct fmax site resonance peaks is controlled by the nature of the surface layer and the underlying near-surface structure. Models of vertical decoupling of the surface and near-surface and horizontal decoupling of adjacent sites on hard rock outcrops are proposed and their behaviour is compared to the observations of hard rock site response. The upper bound to the frequency band of the seismic waves that contain significant source information which can be deconvolved from a site response or an array response is discussed in terms of fmax and the correlation of waveform distortion with the outcrop-scale geologic structure of hard rock sites. It is concluded that although the velocity structures of hard rock sites, unlike those of alluvium sites, allow some audio-frequency seismic energy to propagate to the surface, the resulting signals are a highly distorted, limited subset of the source spectra. © 1988 Birkhäuser Verlag.
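    A hedged illustration of why a thin, slow surface layer produces the discrete resonance peaks discussed above: the fundamental frequency of a layer of thickness H and shear velocity Vs over stiffer material follows the quarter-wavelength rule f0 = Vs/(4H). The values below are illustrative, not taken from the sites analyzed in the paper:

      # Quarter-wavelength resonance of a low-velocity surface layer over stiffer rock
      for vs, h in [(600.0, 5.0), (1000.0, 10.0), (2000.0, 20.0)]:   # Vs in m/s, H in m (illustrative)
          print(f"Vs={vs:.0f} m/s, H={h:.0f} m  ->  f0 ~ {vs / (4.0 * h):.0f} Hz")
      # weathered layers a few metres thick give peaks in the tens of Hz, i.e. in the audio band above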

  13. Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling

    NASA Astrophysics Data System (ADS)

    Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea

    2017-12-01

    The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend, in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, given the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties for the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as much as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.

  14. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decreases approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to the maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
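    One common way to write the three-parameter magnitude distribution the authors describe is the tapered Gutenberg-Richter form in seismic moment, sketched below; the exact parameterization used in the paper may differ:

      % Tapered Gutenberg-Richter (Kagan-Jackson style), cumulative rate of events with moment >= M0:
      \[
        N(\ge M_0) \;=\; N_t \left(\frac{M_t}{M_0}\right)^{\beta}
        \exp\!\left(\frac{M_t - M_0}{M_c}\right),
        \qquad \beta = \tfrac{2}{3}\,b,
      \]
      % where N_t (the multiplicative "a-value" factor) sets the overall rate above the threshold
      % moment M_t, b is the Gutenberg-Richter slope, and the corner moment M_c (corner magnitude)
      % controls the strong roll-off of rates at the largest magnitudes.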

  15. Benefits of rotational ground motions for planetary seismology

    NASA Astrophysics Data System (ADS)

    Donner, S.; Joshi, R.; Hadziioannou, C.; Nunn, C.; van Driel, M.; Schmelzbach, C.; Wassermann, J. M.; Igel, H.

    2017-12-01

    Exploring the internal structure of planetary objects is fundamental to understand the evolution of our solar system. In contrast to Earth, planetary seismology is hampered by the limited number of stations available, often just a single one. Classic seismology is based on the measurement of three components of translational ground motion. Its methods are mainly developed for a larger number of available stations. Therefore, the application of classical seismological methods to other planets is very limited. Here, we show that the additional measurement of three components of rotational ground motion could substantially improve the situation. From sparse or single station networks measuring translational and rotational ground motions it is possible to obtain additional information on structure and source. This includes direct information on local subsurface seismic velocities, separation of seismic phases, propagation direction of seismic energy, crustal scattering properties, as well as moment tensor source parameters for regional sources. The potential of this methodology will be highlighted through synthetic forward and inverse modeling experiments.
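    The "direct information on local subsurface seismic velocities" mentioned above follows from a standard plane-wave relation of rotational seismology, sketched here for reference (single transversely polarized plane wave assumed):

      % For a transversely polarized plane wave with horizontal phase velocity c, the rotation
      % about the vertical axis is half the horizontal gradient of the transverse displacement:
      \[
        \Omega_z(t) \;=\; \tfrac{1}{2}\,\frac{\partial u_T}{\partial x}
        \;=\; -\frac{1}{2c}\,\dot{u}_T(t)
        \quad\Longrightarrow\quad
        c \;=\; \frac{|a_T(t)|}{2\,|\dot{\Omega}_z(t)|},
      \]
      % so collocated measurements of transverse acceleration and vertical rotation rate yield
      % the local phase velocity from a single station.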

  16. Source characteristics of 2000 small earthquakes nucleating on the Alto Tiberina fault system (central Italy).

    NASA Astrophysics Data System (ADS)

    Munafo, I.; Malagnini, L.; Tinti, E.; Chiaraluce, L.; Di Stefano, R.; Valoroso, L.

    2014-12-01

    The Alto Tiberina Fault (ATF) is a 60 km long, east-dipping, low-angle normal fault located in a sector of the Northern Apennines (Italy) undergoing active extension since the Quaternary. The ATF has been imaged by analyzing active-source seismic reflection profiles and the instrumentally recorded persistent background seismicity. The present study is an attempt to separate the contributions of source, site, and crustal attenuation, in order to focus on the mechanics of the seismic sources on the ATF, as well as on the synthetic and antithetic structures within the ATF hanging wall (i.e. the Colfiorito, Gubbio and Umbria Valley faults). In order to compute source spectra, we perform a set of regressions over the seismograms of 2000 small earthquakes (-0.8 < ML < 4) recorded between 2010 and 2014 at 50 permanent seismic stations deployed in the framework of the Alto Tiberina Near Fault Observatory project (TABOO) and equipped with three-component seismometers, three of which are located in shallow boreholes. Because we deal with some very small earthquakes, we maximize the signal-to-noise ratio (SNR) with a technique based on the analysis of peak values of bandpass-filtered time histories, in addition to the same processing performed on Fourier amplitudes. We rely on a tool called Random Vibration Theory (RVT) to switch between peak values in the time domain and Fourier spectral amplitudes. The low-frequency spectral plateaus of the source terms are used to compute moment magnitudes (Mw) of all the events, whereas a source spectral ratio technique is used to estimate the corner frequencies (Brune spectral model) of a subset of events chosen based on an analysis of the noise affecting the spectral ratios. So far, the described approach provides high accuracy for the spectral parameters of the localized seismicity, and may be used to gain insights into the underlying mechanics of faulting and the earthquake processes.
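    As a hedged sketch of how a low-frequency spectral plateau and a Brune corner frequency translate into a moment magnitude and a source radius, the following Python uses typical crustal constants and invented input values, not the parameters or data of this study:

      import math

      def moment_from_plateau(omega0, distance_m, rho=2700.0, beta=3500.0, rad=0.63, fs=2.0):
          """Seismic moment (N m) from the displacement-spectrum plateau omega0 (m s) at
          hypocentral distance distance_m, using standard far-field S-wave point-source scaling
          (density rho, shear velocity beta, average radiation coefficient, free-surface factor)."""
          return 4.0 * math.pi * rho * beta**3 * distance_m * omega0 / (rad * fs)

      def mw_from_moment(m0):
          return (2.0 / 3.0) * (math.log10(m0) - 9.1)       # M0 in N m

      def brune_radius(fc, beta=3500.0, k=0.37):
          return k * beta / fc                               # Brune source radius, m

      m0 = moment_from_plateau(omega0=2.0e-9, distance_m=10e3)   # illustrative plateau and distance
      print("Mw ~", round(mw_from_moment(m0), 2), "; source radius ~", round(brune_radius(fc=8.0)), "m")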

  17. Source Model of the MJMA 6.5 Plate-Boundary Earthquake at the Nankai Trough, Southwest Japan, on April 1, 2016, Based on Strong Motion Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Asano, K.

    2017-12-01

    An MJMA 6.5 earthquake occurred offshore of the Kii Peninsula, southwest Japan, on April 1, 2016. This event was interpreted as a thrust event on the plate boundary along the Nankai Trough (Wallace et al., 2016). It is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) since that event. A significant point regarding seismic observation is that this event occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquakes of this focal type have been very rare in this region over the last half century, it is a good opportunity to investigate the source characteristics relating to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai Trough. Knowledge obtained from the study of this earthquake would contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai Trough. In this study, the source model of the 2016 offshore Kii Peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by a strong motion generation area (SMGA) (Miyake et al., 2003), which is defined as a rectangular area with high stress drop or high slip velocity. An SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time histories over a broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected for the empirical Green's functions. The source parameters of the SMGA were optimized by waveform modeling in the frequency range 0.4-10 Hz. The best estimate of the SMGA size is 19.4 km2, and the SMGA of this event does not follow the source scaling relationship for past plate-boundary earthquakes along the Japan Trench, northeast Japan. This finding implies that the source characteristics of plate-boundary events in the Nankai Trough differ from those in the Japan Trench, which could be important information for considering regional variation in ground motion prediction.
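    For orientation, the Irikura-style empirical Green's function synthesis referred to above sums N x N x N scaled copies of the small-event records; a commonly quoted sketch of how N and the stress-drop ratio C are chosen from spectral levels is given below (the exact parameterization used for this event may differ):

      % Moment ratio and spectral-level ratios between the target (large) and element (small) events:
      \[
        \frac{M_{0,\mathrm{large}}}{M_{0,\mathrm{small}}} = C\,N^{3},
        \qquad
        \frac{U_{0,\mathrm{large}}}{U_{0,\mathrm{small}}} = C\,N^{3},
        \qquad
        \frac{A_{0,\mathrm{large}}}{A_{0,\mathrm{small}}} = C\,N,
      \]
      % where U_0 and A_0 are the low-frequency displacement and high-frequency acceleration
      % spectral levels; fitting both ratios fixes N and C before the waveform summation.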

  18. Nuclear Explosion Monitoring Advances and Challenges

    NASA Astrophysics Data System (ADS)

    Baker, G. E.

    2015-12-01

    We address the state-of-the-art in areas important to monitoring, current challenges, specific efforts that illustrate approaches addressing shortcomings in capabilities, and additional approaches that might be helpful. The exponential increase in the number of events that must be screened as magnitude thresholds decrease presents one of the greatest challenges. Ongoing efforts to exploit repeat seismic events using waveform correlation, subspace methods, and empirical matched field processing hold as much "game-changing" promise as anything being done, and further efforts to develop and apply such methods efficiently are critical. Greater accuracy of travel-time, signal-loss, and full-waveform predictions is still needed to better locate and discriminate seismic events. Important developments include methods to model velocities using multiple types of data; to model attenuation with better separation of source, path, and site effects; and to model focusing and defocusing of surface waves. Current efforts to model higher-frequency full waveforms are likely to improve source characterization, while more effective estimation of attenuation from ambient noise holds promise for filling in gaps. Censoring in attenuation modeling is a critical problem to address. Quantifying uncertainty of discriminants is key to their operational use. Efforts to do so for moment tensor (MT) inversion are particularly important, and fundamental progress on the statistics of MT distributions is the most important advance needed in the near term in this area. Source physics is seeing great progress through theoretical, experimental, and simulation studies. The biggest need is to accurately predict the effects of source conditions on seismic generation. Uniqueness is the challenge here. Progress will depend on studies that probe what distinguishes mechanisms, rather than whether one of many possible mechanisms is consistent with some set of observations.

  19. Relationship between eruption plume heights and seismic source amplitudes of eruption tremors and explosion events

    NASA Astrophysics Data System (ADS)

    Mori, A.; Kumagai, H.

    2016-12-01

    It is crucial to analyze and interpret eruption tremors and explosion events for estimating eruption size and understanding eruption phenomena. Kumagai et al. (EPS, 2015) estimated the seismic source amplitudes (As) and cumulative source amplitudes (Is) for eruption tremors and explosion events at Tungurahua, Ecuador, by the amplitude source location (ASL) method based on the assumption of isotropic S-wave radiation in a high-frequency band (5-10 Hz). They found scaling relations between As and Is for eruption tremors and explosion events. However, the universality of these relations is yet to be verified, and the physical meanings of As and Is are not clear. In this study, we analyzed the relations between As and Is for eruption tremors and explosion events at active volcanoes in Japan, estimating As and Is by the ASL method. We obtained power-law relations between As and Is, in which the exponents differed between eruption tremors and explosion events. These relations were consistent with the scaling relations at Tungurahua volcano. We then compared As with maximum eruption plume heights (H) during the eruption tremors analyzed in this study, and found that H was proportional to the 0.21 power of As. This relation is similar to the plume height model based on the physical process of plume rise, which indicates that H is proportional to the 0.25 power of the volumetric flow rate for Plinian eruptions. This suggests that As may correspond to the volumetric flow rate. If we assume a seismic source with volume changes and far-field S waves, As is proportional to the source volume rate. This proportional relation and the plume height model give rise to the relation that H is proportional to the 0.25 power of As. These results suggest that we may be able to estimate plume heights in real time by estimating As during eruptions from seismic observations.
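    The consistency argument made above can be written out in one line, under the authors' suggestion that the seismic source amplitude scales linearly with the volumetric flow rate:

      % If A_s \propto \dot{V} (seismic source amplitude tracks volumetric flow rate) and the
      % plume-rise model gives H \propto \dot{V}^{1/4}, then
      \[
        H \;\propto\; \dot{V}^{1/4} \;\propto\; A_s^{1/4},
      \]
      % which is close to the empirically determined exponent of 0.21 quoted above.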

  20. A rapid estimation of tsunami run-up based on finite fault models

    NASA Astrophysics Data System (ADS)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, i.e., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  1. Apollo 14 and 16 Active Seismic Experiments, and Apollo 17 Lunar Seismic Profiling

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Seismic refraction experiments were conducted on the moon by Apollo astronauts during missions 14, 16, and 17. Seismic velocities of 104, 108, 92, 114 and 100 m/sec were inferred for the lunar regolith at the Apollo 12, 14, 15, 16, and 17 landing sites, respectively. These data indicate that fragmentation and comminution caused by meteoroid impacts has produced a layer of remarkably uniform seismic properties moonwide. Brecciation and high porosity are the probable causes of the very low velocities observed in the lunar regolith. Apollo 17 seismic data revealed that the seismic velocity increases very rapidly with depth to 4.7 km/sec at a depth of 1.4 km. Such a large velocity change is suggestive of compositional and textural changes and is compatible with a model of fractured basaltic flows overlying anorthositic breccias. 'Thermal' moonquakes were also detected at the Apollo 17 site, becoming increasingly frequent after sunrise and reaching a maximum at sunset. The source of these quakes could possibly be landsliding.

  2. Seismic hazard in the Intermountain West

    USGS Publications Warehouse

    Haller, Kathleen; Moschetti, Morgan P.; Mueller, Charles; Rezaeian, Sanaz; Petersen, Mark D.; Zeng, Yuehua

    2015-01-01

    The 2014 national seismic-hazard model for the conterminous United States incorporates new scientific results and important model adjustments. The current model includes updates to the historical catalog, which is spatially smoothed using both fixed-length and adaptive-length smoothing kernels. Fault-source characterization improved by adding faults, revising rates of activity, and incorporating new results from combined inversions of geologic and geodetic data. The update also includes a new suite of published ground motion models. Changes in probabilistic ground motion are generally less than 10% in most of the Intermountain West compared to the prior assessment, and ground-motion hazard in four Intermountain West cities illustrates the range and magnitude of change in the region. Seismic hazard at reference sites in Boise and Reno increased as much as 10%, whereas hazard in Salt Lake City decreased 5–6%. The largest change was in Las Vegas, where hazard increased 32–35%.

  3. Applying the seismic interferometry method to vertical seismic profile data using tunnel excavation noise as source

    NASA Astrophysics Data System (ADS)

    Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos

    2013-04-01

    In the frame of research conducted to develop efficient strategies for investigating rock properties and fluids ahead of tunnel excavations, the seismic interferometry method was applied to analyze data acquired in boreholes instrumented with geophone strings. The results obtained confirm that seismic interferometry provides improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical methods but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities have been characterized as a source of seismic signal and used in our research as the seismic source for generating a 3D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace. A reference pilot signal was obtained from seismograms acquired close to the tunnel face in order to obtain the best signal-to-noise ratio for the interferometry processing (Poletto et al., 2010). The seismic interferometry method (Claerbout, 1968) was successfully applied to image the subsurface geological structure using the seismic wavefield generated by tunnelling (tunnelling machine and construction activities) recorded with geophone strings. The technique was applied by simulating virtual shot records, one for each receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflected wavefield was obtained by cross-correlation of the transmitted wave data. We applied the relationship between the transmission response and the reflection response for a 1D multilayer structure, and then a 3D approach (Wapenaar, 2004). As a result of this seismic interferometry experiment, a 3D reflectivity model (with its frequency and resolution ranges) was obtained. We also showed that the seismic interferometry approach can be applied in asynchronous seismic auscultation. The reflections detected in the virtual seismic sections are in agreement with the geological features encountered during the excavation of the tunnel and also with the petrophysical properties and parameters measured in previous geophysical borehole logging. References: Claerbout, J.F., 1968. Synthesis of a layered medium from its acoustic transmission response. Geophysics, 33, 264-269. Poletto, F., Corubolo, P., and Comeli, P., 2010. Drill-bit seismic interferometry with and without pilot signals. Geophysical Prospecting, 58, 257-265. Wapenaar, K., Thorbecke, J., and Draganov, D., 2004. Relations between reflection and transmission responses of three-dimensional inhomogeneous media. Geophysical Journal International, 156, 179-194.

  4. Deterministic Seismic Hazard Assessment of Center-East IRAN (55.5-58.5˚ E, 29-31˚ N)

    NASA Astrophysics Data System (ADS)

    Askari, M.; Neyestani, B.

    2009-04-01

    Deterministic seismic hazard assessment has been performed for Center-East Iran, covering Kerman and adjacent regions within a radius of 100 km. A catalogue of earthquakes in the region, including both historical and instrumental events, was compiled. A total of 25 potential seismic source zones were delineated as area sources for the hazard assessment on the basis of geological, seismological and geophysical information. The minimum distance from each seismic source to the site (Kerman) and the maximum magnitude of each source were then determined. Finally, using the Abrahamson and Litehiser (1989) attenuation relationship, the maximum acceleration is estimated to be 0.38 g; it is associated with the movement of a blind fault whose maximum magnitude is Ms = 5.5.
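    The deterministic procedure described, in which each source is evaluated at its maximum magnitude and minimum source-to-site distance and the controlling value is kept, can be sketched as follows. The attenuation coefficients and source geometries in the snippet are placeholders, not the Abrahamson and Litehiser (1989) relationship or the study's source model.

```python
import numpy as np

def dsha_peak_acceleration(sources, attenuation):
    """Deterministic hazard: evaluate each source at its maximum magnitude and
    minimum source-to-site distance, and keep the largest predicted PGA."""
    return max(attenuation(s["Mmax"], s["Rmin_km"]) for s in sources)

def toy_attenuation(m, r_km):
    # Hypothetical functional form ln(PGA[g]) = a + b*M - c*ln(R + d);
    # the coefficients below are placeholders, NOT a published relationship.
    a, b, c, d = -3.5, 0.85, 1.2, 10.0
    return np.exp(a + b * m - c * np.log(r_km + d))

sources = [
    {"name": "blind fault", "Mmax": 5.5, "Rmin_km": 12.0},   # hypothetical geometry
    {"name": "zone A",      "Mmax": 6.5, "Rmin_km": 60.0},
]
print(f"controlling PGA = {dsha_peak_acceleration(sources, toy_attenuation):.2f} g")
```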

  5. Probabilistic seismic hazard analysis (PSHA) for Ethiopia and the neighboring region

    NASA Astrophysics Data System (ADS)

    Ayele, Atalay

    2017-10-01

    Seismic hazard calculations were carried out for the Horn of Africa region (0°-20°N and 30°-50°E) using the probabilistic seismic hazard analysis (PSHA) method. Earthquake catalogue data obtained from different sources were compiled, homogenized to the Mw magnitude scale and declustered to remove dependent events, as required by the Poisson earthquake source model. The seismotectonic map of the study area available from recent studies was used for area-source zonation. For assessing the seismic hazard, the study area was divided into grid cells of 0.5° × 0.5°, and the hazard parameters were calculated at the center of each cell by considering contributions from all seismic sources. Peak ground acceleration (PGA) corresponding to 10% and 2% probability of exceedance in 50 years was calculated for all grid points for a generic rock site with Vs = 760 m/s. The obtained values vary from 0.0 to 0.18 g and from 0.0 to 0.35 g for the 475- and 2475-year return periods, respectively. The corresponding contour maps showing the spatial variation of PGA for the two return periods are presented here. Uniform hazard response spectra (UHRS) for 10% and 2% probability of exceedance in 50 years, and hazard curves for PGA and 0.2 s spectral acceleration (Sa), all at rock site conditions, are developed for the city of Addis Ababa. The hazard map corresponding to the 475-year return period has already been used to update and produce the third-generation building code of Ethiopia.
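    The grid-point hazard calculation described above can be reduced to a few lines: sum annual exceedance rates over sources and magnitudes, then convert to a probability of exceedance in 50 years with the Poisson assumption. The source parameters and the GMPE below are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

def annual_exceedance_rate(pga_levels, sources, gmpe):
    """lambda(PGA > x) = sum over sources and magnitude bins of
    rate(m) * P[PGA > x | m, r], with a lognormal GMPE."""
    lam = np.zeros_like(pga_levels)
    for src in sources:
        mags = np.arange(src["mmin"], src["mmax"], 0.1)
        # incremental Gutenberg-Richter rate in each 0.1-unit magnitude bin
        rates = 10 ** (src["a"] - src["b"] * mags) - 10 ** (src["a"] - src["b"] * (mags + 0.1))
        for m, rate in zip(mags, rates):
            mu_ln, sigma_ln = gmpe(m, src["r_km"])
            prob_exceed = 1.0 - norm.cdf(np.log(pga_levels), mu_ln, sigma_ln)
            lam += rate * prob_exceed
    return lam

def toy_gmpe(m, r_km):
    # placeholder median (ln g) and sigma; not a published GMPE
    return -3.5 + 0.9 * m - 1.1 * np.log(r_km + 10.0), 0.6

sources = [{"a": 3.0, "b": 1.0, "mmin": 4.5, "mmax": 7.0, "r_km": 40.0}]   # hypothetical
pga = np.logspace(-2, 0, 50)                      # 0.01 g to 1 g
lam = annual_exceedance_rate(pga, sources, toy_gmpe)
p50 = 1.0 - np.exp(-lam * 50.0)                   # Poisson probability in 50 years
pga_475 = np.interp(0.10, p50[::-1], pga[::-1])   # ~10% in 50 yr = 475-yr return period
```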

  6. Explosion Source Models for Seismic Monitoring at High Frequencies: Quantification of the Damage Source and Further Validation of Models

    DTIC Science & Technology

    2011-09-01


  7. Empirical Ground Motion Characterization of Induced Seismicity in Alberta and Oklahoma

    NASA Astrophysics Data System (ADS)

    Novakovic, M.; Atkinson, G. M.; Assatourians, K.

    2017-12-01

    We develop empirical ground-motion prediction equations (GMPEs) for ground motions from induced earthquakes in Alberta and Oklahoma following the stochastic-model-based method of Atkinson et al. (2015 BSSA). The Oklahoma ground-motion database is compiled from over 13,000 small to moderate seismic events (M 1 to 5.8) recorded at 1600 seismic stations, at distances from 1 to 750 km. The Alberta database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at 50 regional stations, at distances from 30 to 500 km. A generalized inversion is used to solve for regional source, attenuation and site parameters. The obtained parameters describe the regional attenuation, stress parameter and site amplification. Resolving these parameters allows the derivation of regionally calibrated GMPEs that can be used to compare ground-motion observations between wastewater-injection-induced events (Oklahoma) and hydraulic-fracture-induced events (Alberta), and further to compare induced observations with ground motions resulting from natural sources (California, NGAWest2). The derived GMPEs have applications for the evaluation of hazards from induced seismicity and can be used to track amplitudes across the regions in real time, which is useful for ground-motion-based alerting systems and traffic light protocols.
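    A generalized inversion of this kind can be posed as a single linear least-squares problem for event, site and attenuation terms. The parameterization below (geometric spreading plus a linear anelastic term, with one reference site fixed to zero) is a common textbook form given as a sketch, not the Atkinson et al. (2015) implementation.

```python
import numpy as np

def generalized_inversion(log_amp, event_id, station_id, r_km, n_events, n_stations):
    """Solve log10(A_ij) = E_i + S_j - g*log10(R_ij) - k*R_ij in a least-squares
    sense for event terms E, site terms S, geometric spreading g and anelastic k.
    The site term of station 0 is fixed to zero to remove the E/S trade-off."""
    n_obs = len(log_amp)
    ncols = n_events + n_stations + 2
    G = np.zeros((n_obs, ncols))
    G[np.arange(n_obs), event_id] = 1.0
    G[np.arange(n_obs), n_events + station_id] = 1.0
    G[:, -2] = -np.log10(r_km)
    G[:, -1] = -np.asarray(r_km)
    # constraint row: site term of the reference station equals zero
    G = np.vstack([G, np.eye(1, ncols, n_events)])
    d = np.append(log_amp, 0.0)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return {"event": m[:n_events], "site": m[n_events:-2], "g": m[-2], "k": m[-1]}
```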

  8. Focal mechanism determination for induced seismicity using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Yuyang; Zhang, Haijiang; Li, Junlun; Yin, Chen; Wu, Furong

    2018-06-01

    Induced seismicity is widely detected during hydraulic fracture stimulation. To better understand the fracturing process, a thorough knowledge of the source mechanism is required. In this study, we develop a new method to determine the focal mechanism for induced seismicity. Three misfit functions are used in our method to measure the differences between observed and modeled data from different aspects, including the waveform, P wave polarity and S/P amplitude ratio. We minimize these misfit functions simultaneously using the neighbourhood algorithm. Through synthetic data tests, we show the ability of our method to yield reliable focal mechanism solutions and study the effect of velocity inaccuracy and location error on the solutions. To mitigate the impact of the uncertainties, we develop a joint inversion method to find the optimal source depth and focal mechanism simultaneously. Using the proposed method, we determine the focal mechanisms of 40 stimulation induced seismic events in an oil/gas field in Oman. By investigating the results, we find that the reactivation of pre-existing faults is the main cause of the induced seismicity in the monitored area. Other observations obtained from the focal mechanism solutions are also consistent with earlier studies in the same area.

  9. The excitation of long period seismic waves by a source spanning a structural discontinuity

    NASA Astrophysics Data System (ADS)

    Woodhouse, J. H.

    Simple theoretical results are obtained for the excitation of seismic waves by an indigenous seismic source in the case that the source volume is intersected by a structural discontinuity. In the long-wavelength approximation the seismic radiation is identical to that of a point source placed on one side of the discontinuity or of a different point source placed on the other side. The moment tensors of these two equivalent sources are related by a specific linear transformation and may differ appreciably both in magnitude and geometry. Either of these sources could be obtained by linear inversion of seismic data, but the physical interpretation is more complicated than in the usual case. A source involving no volume change would, for example, yield an isotropic component if, during inversion, it were assumed to lie on the wrong side of the discontinuity. The problem of determining the true moment tensor of the source is indeterminate unless further assumptions are made about the stress glut distribution; one way to resolve this indeterminacy is to assume proportionality between the integrated stress glut on each side of the discontinuity.

  10. Automated classification of seismic sources in a large database: a comparison of Random Forests and Deep Neural Networks.

    NASA Astrophysics Data System (ADS)

    Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe

    2017-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near-real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. In this study, we evaluate the ability of two machine learning algorithms, Random Forest and Deep Neural Network classifiers, to analyse the seismic sources at the Piton de la Fournaise volcano. We gathered a catalog of more than 20,000 events belonging to 8 classes of seismic sources. We defined 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar rates of correct classification, with values exceeding 90% of the events. When trained with a sufficient number of events, the rate of correct identification can reach 99%. These very high rates open the perspective of an operational implementation of these algorithms for near-real-time monitoring of mass movements and other environmental sources at the local, regional and even global scale.
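    As an illustration of the Random Forest branch of the comparison, the sketch below trains a classifier on an attribute table of the size described (20,000 events, 60 attributes, 8 classes). The features here are random placeholders and the hyperparameters are illustrative; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# X: one row per event with 60 waveform/spectral/polarization attributes;
# y: one of 8 source classes. Both are simulated placeholders here.
rng = np.random.default_rng(42)
X = rng.normal(size=(20000, 60))
y = rng.integers(0, 8, size=20000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, class_weight="balanced")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```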

  11. 3D shallow velocity model in the area of Pozzo Pitarrone, NE flank of Mt. Etna Volcano, by using SPAC array method.

    NASA Astrophysics Data System (ADS)

    Zuccarello, Luciano; Paratore, Mario; La Rocca, Mario; Ferrari, Ferruccio; Messina, Alfio; Contrafatto, Danilo; Galluzzo, Danilo; Rapisarda, Salvatore

    2016-04-01

    In volcanic environments the propagation of seismic signals through the shallowest layers is strongly affected by lateral heterogeneity, attenuation, scattering, and interaction with the free surface. Tracing a seismic ray from the recording site back to the source is therefore a complex matter, with obvious implications for source location. For this reason, knowledge of the shallow velocity structure may improve the location of shallow volcano-tectonic earthquakes and volcanic tremor, thus contributing to improved monitoring of volcanic activity. This work focuses on the analysis of seismic noise and volcanic tremor recorded in 2014 by a temporary array installed around Pozzo Pitarrone, on the NE flank of Mt. Etna. Several methods permit a reliable estimation of the shear-wave velocity in the shallowest layers through the analysis of a stationary random wavefield such as seismic noise. We applied the single-station HVSR method and the SPAC array method to seismic noise to investigate the local shallow structure. The inversion of dispersion curves produced a shear-wave velocity model of the area that is reliable down to a depth of about 130 m. We also applied the beam-forming array method in the 0.5-4 Hz frequency range to both seismic noise and volcanic tremor. The apparent velocity of coherent tremor signals fits the dispersion curve estimated from the seismic noise analysis quite well, thus providing a further constraint on the estimated velocity model. Moreover, taking advantage of a borehole station installed at 130 m depth in the same area as the array, we obtained a direct estimate of the P-wave velocity by comparing the borehole recordings of local earthquakes with the same events recorded at the surface. Further insight into the P-wave velocity in the upper 130 m comes from the surface-reflected wave visible in some cases at the borehole station. From this analysis we obtained an average P-wave velocity of about 1.2 km/s, in good agreement with the shear-wave velocity found from the analysis of seismic noise. To better constrain the inversion we used the HVSR computed at each array station, which also gives lateral extension to the final 3D velocity model. The results indicate that site effects in the investigated area are quite homogeneous among the array stations.
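    The single-station HVSR estimate used here is, in essence, the ratio of the averaged horizontal to vertical noise spectra. Below is a minimal sketch assuming Welch spectral averaging; the window length and the geometric-mean combination of the horizontals are illustrative choices, not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import welch

def hvsr(north, east, vertical, fs, nperseg=4096):
    """Single-station H/V spectral ratio from three-component noise records.
    The horizontal spectrum is the geometric mean of the N and E components."""
    f, pnn = welch(north, fs, nperseg=nperseg)
    _, pee = welch(east, fs, nperseg=nperseg)
    _, pzz = welch(vertical, fs, nperseg=nperseg)
    h = np.sqrt(np.sqrt(pnn * pee))     # amplitude spectrum of the horizontals
    v = np.sqrt(pzz)                    # amplitude spectrum of the vertical
    return f, h / v
```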

  12. Near surface characterisation with passive seismic data - a case study from the La Barge basin (Wyoming)

    NASA Astrophysics Data System (ADS)

    Behm, M.; Snieder, R.; Tomic, J.

    2012-12-01

    In regions where active source seismic data are inadequate for imaging purposes due to energy penetration and recovery, cost and logistical concerns, or regulatory restrictions, analysis of natural source and ambient seismic data may provide an alternative. In this study, we investigate the feasibility of using locally-generated seismic noise and teleseismic events in the 2-10 Hz band to obtain a subsurface model. We apply different techniques to 3-component data recorded during the LaBarge Passive Seismic Experiment, a local deployment in southwestern Wyoming in a producing hydrocarbon basin. Fifty-five broadband instruments with an inter-station distance of 250 m recorded continuous seismic data between November 2008 and June 2009. The consistency and high quality of the data set make it an ideal test ground to determine the value of passive seismology techniques for exploration purposes. The near surface is targeted by interferometric analysis of ambient noise. Our results indicate that traffic noise from a state highway generates coherent Rayleigh and Love waves that can then be inverted for laterally varying velocities. The results correlate well with surface geology, and are thought to represent the average of the upper few hundred meters. The autocorrelation functions (ACF) of teleseismic body waves provide information on the uppermost part (1 to 5 km depth) of the crust. ACFs from P-waves correlate with the shallow structure as known from active source studies. The analysis of S-waves exhibits a pronounced azimuthal dependency, which might be used to gain insights into anisotropy.

  13. Co- and post-seismic shallow fault physics from near-field geodesy, seismic tomography, and mechanical modeling

    NASA Astrophysics Data System (ADS)

    Nevitt, J.; Brooks, B. A.; Catchings, R.; Goldman, M.; Criley, C.; Chan, J. H.; Glennie, C. L.; Ericksen, T. L.; Madugo, C. M.

    2017-12-01

    The physics governing near-surface fault slip and deformation are largely unknown, introducing significant uncertainty into seismic hazard models. Here we combine near-field measurements of surface deformation from the 2014 M6.0 South Napa earthquake with high-resolution seismic imaging and finite element models to investigate the effects of rupture speed, elastic heterogeneities, and plasticity on shallow faulting. We focus on two sites that experienced either predominantly co-seismic or post-seismic slip. We measured surface deformation with mobile laser scanning of deformed vine rows within 300 m of the fault at 1 week and 1 month after the event. Shear strain profiles for the co- and post-seismic sites are similar, with maxima of 0.012 and 0.013 and values exceeding 0.002 occurring within 26 m- and 18 m-wide zones, respectively. That the rupture remained buried at the two sites and produced similar deformation fields suggests that permanent deformation due to dynamic stresses did not differ significantly from the quasi-static case, which might be expected if the rupture decelerated as it approached the surface. Active-source seismic surveys, 120 m in length with 1 m geophone/shot spacing, reveal shallow compliant zones of reduced shear modulus. For the co- and post-seismic sites, the tomographic anomaly (Vp/Vs > 5) at 20 m depth has a width of 80 m and 50 m, respectively, much wider than the observed surface displacement fields. We investigate this discrepancy with a suite of finite element models in which a planar fault is buried 5 m below the surface. The model continuum is defined by either homogeneous or heterogeneous elastic properties, with or without Drucker-Prager plastic yielding, with properties derived from lab testing of similar near-surface materials. We find that plastic yielding can greatly narrow the surface displacement zone, but that the width of this zone is largely insensitive to changes in the elastic structure (i.e., the presence of a compliant zone).

  14. Basin analysis of North Sea viking graben: new techniques in an old basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliffe, J.E.; Cao, S.; Lerche, I.

    1987-05-01

    Rapid sedimentation rates from the Upper Cretaceous to the Tertiary in the North Sea require that burial history modeling account for overpressuring. Use of a quantitative fluid flow/compaction model, along with the inversion of thermal indicators to obtain independent estimates of paleoheat flux, can greatly enhance their knowledge of a basin's evolution and hydrocarbon potential. First they assess the modeling sensitivity to the quality of data and variation of other input parameters. Then application to 16 wells with vitrinite data in the Viking graben north of 59° latitude and to pseudo-wells derived from deep seismic profiling of BIRPA greatly enhances the study of regional variations. A Tissot generation model is run on all the wells for each potential source rock. The resulting amounts of oil and gas generated are contoured to produce a regional oil and gas provenance map for each source rock. The model results are compared and tested against the known producing fields. Finally, by restoration of the two-dimensional seismic reflection profiles, the temporal variations of basement subsidence and paleoheat flow are related to the tectonic zoning of the region and to the extensional history. The combined structural, thermal, and depositional information available due to technological progress in both modeling and deep seismic profiling allows a better understanding of previously proposed models of extension.

  15. Upper-crustal structure of the inner Continental Borderland near Long Beach, California

    USGS Publications Warehouse

    Baher, S.; Fuis, G.; Sliter, R.; Normark, W.R.

    2005-01-01

    A new P-wave velocity/structural model for the inner Continental Borderland (ICB) region was developed for the area near Long Beach, California. It combines controlled-source seismic reflection and refraction data collected during the 1994 Los Angeles Region Seismic Experiment (LARSE), multichannel seismic reflection data collected by the U.S. Geological Survey (1998-2000), and nearshore borehole stratigraphy. Based on lateral velocity contrasts and stratigraphic variation determined from borehole data, we are able to locate major faults such as the Cabrillo, Palos Verdes, THUMS-Huntington Beach, and Newport Inglewood fault zones, along with minor faults such as the slope fault, Avalon knoll, and several other as yet unnamed faults. Catalog seismicity (1975-2002) plotted on our preferred velocity/structural model shows that recent seismicity is located on 16 of our 24 faults, providing evidence for continuing concern with respect to existing seismic-hazard estimates. Forward modeling of P-wave arrival times on LARSE line 1 resulted in a four-layer model that better resolves the stratigraphy and geologic structures of the ICB and also provides tighter constraints on the upper-crustal velocity structure than previous modeling of the LARSE data. There is a correlation between the structural horizons identified in the reflection data and the velocity interfaces determined from forward modeling of the refraction data. The strongest correlation is between the base of velocity layer 1 of the refraction model and the base of the planar sediment beneath the shelf and slope determined by the reflection model. Layers 2 and 3 of the velocity model loosely correlate with the diffractive crust layer, locally interpreted as Catalina Schist.

  16. Full Waveform Modelling for Subsurface Characterization with Converted-Wave Seismic Reflection

    NASA Astrophysics Data System (ADS)

    Triyoso, Wahyu; Oktariena, Madaniya; Sinaga, Edycakra; Syaifuddin, Firman

    2017-04-01

    While a large number of reservoirs have been explored using P-wave seismic data, P-wave seismic surveys cease to provide adequate results in seismically and geologically challenging areas, such as those affected by gas clouds, shallow drilling hazards, strong multiples, intense fracturing and anisotropy. Most of these reservoir problems can be addressed using a combination of P and PS seismic data. A multicomponent seismic survey records both P-waves and S-waves, unlike a conventional survey that records only the compressional P-wave. Under certain conditions, a conventional energy source can be used to record P and PS data, exploiting the fact that compressional wave energy partly converts into shear waves at the reflector. The shear component can be recorded from the downgoing P-wave and upcoming S-wave by placing horizontal-component geophones on the ocean floor. A synthetic model based on real data was created to analyze the effect of a gas cloud on PP and PS wave reflections, a problem with characteristics similar to sub-volcanic imaging. A challenge in multicomponent seismic is the different travel times of the P-wave and S-wave legs; the converted-wave data therefore require a different processing approach. This research provides a method to determine the optimum conversion point, known as the common conversion point (CCP), which accounts for the asymmetrical conversion point of PS data. The value of γ (Vp/Vs) is essential to estimate the correct CCP used in converted-wave processing. The research also continues to an advanced processing step by applying joint inversion to the PP and PS seismic data. Joint inversion is a simultaneous model-based inversion that estimates the P- and S-wave impedances consistent with the PP and PS amplitude data. The result reveals a more complex structure mirrored in the PS data below the gas cloud area. Through the estimated γ section resulting from the joint inversion, we obtain an improvement in imaging below the gas cloud area thanks to the converted-wave seismic data acting as an additional constraint.
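    In the asymptotic (deep-target) approximation, the conversion point of a PS reflection divides the source-receiver offset in proportion to γ = Vp/Vs and lies closer to the receiver. A minimal sketch of that binning rule follows; exact depth-dependent CCP binning, as needed in practice, is omitted here.

```python
def asymptotic_ccp_offset(offset, vp_vs):
    """Horizontal distance from the source to the asymptotic common conversion
    point of a P-S converted reflection; vp_vs is gamma = Vp/Vs."""
    return offset * vp_vs / (1.0 + vp_vs)

# example: 2 km source-receiver offset with gamma = 2 -> conversion point ~1333 m
print(asymptotic_ccp_offset(2000.0, 2.0))
```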

  17. Seismic Sources for the Territory of Georgia

    NASA Astrophysics Data System (ADS)

    Tsereteli, N. S.; Varazanashvili, O.

    2011-12-01

    The southern Caucasus is an earthquake-prone region where devastating earthquakes have repeatedly caused significant loss of lives, infrastructure and buildings. The high geodynamic activity of the region, expressed in both seismic and aseismic deformation, is conditioned by the still-ongoing convergence of lithospheric plates and the northward propagation of the Afro-Arabian continental block at a rate of several cm/year. The geometry of tectonic deformation in the region is largely determined by the wedge-shaped rigid Arabian block intensively indented into the relatively mobile Middle East-Caucasus region. Georgia is a partner in the ongoing regional project EMME, whose main objective is the calculation of earthquake hazard uniformly and to high standards. One approach used in the project is probabilistic seismic hazard assessment, whose first requirement is the definition of seismic source zones. Seismic sources can be either faults or area sources. Seismoactive structures of Georgia are identified mainly on the basis of the correlation between neotectonic structures of the region and earthquakes. The requirements of modern PSHA software regarding fault geometry are very demanding; as our knowledge of active fault geometry is not sufficient, area sources were used. Seismic sources are defined as zones characterized by more or less uniform seismicity. Our poor knowledge of processes occurring deep in the Earth reflects the difficulty of direct measurement; from this point of view, reliable data obtained from earthquake fault-plane solutions are unique for understanding the current tectonic behaviour of the investigated area. There are two methods for the identification of seismic sources. The first is the seismotectonic approach, based on identification of extensive homogeneous seismic sources (SS) with the definition of the probability of occurrence of a maximum earthquake Mmax. In the second method, the identification of seismic sources is based on structural geology, seismicity parameters and seismotectonics. This latter approach was used here. To achieve this, it was necessary to solve the following problems: to calculate the parameters of seismotectonic deformation; to reveal regularities in the character of earthquake fault-plane solutions; and to use these regularities to develop principles for establishing boundaries between various hierarchical and scale levels of seismic deformation fields and to give their geological interpretation. Three-dimensional matching of active faults, with realistic geometrical dimensions, to earthquake sources has been investigated. Finally, each zone has been defined with the following parameters: geometry, magnitude-frequency parameters, maximum magnitude, and depth distribution, as well as modern dynamical characteristics.

  18. Excitation mechanisms for Jovian seismic modes

    NASA Astrophysics Data System (ADS)

    Markham, Steve; Stevenson, Dave

    2018-05-01

    Recent (2011) results from the Nice Observatory indicate the existence of global seismic modes on Jupiter in the frequency range between 0.7 and 1.5 mHz with amplitudes of tens of cm/s. Currently, the driving force behind these modes is a mystery; the measured amplitudes are many orders of magnitude larger than anticipated based on theory analogous to helioseismology (that is, turbulent convection as a source of stochastic excitation). One of the most promising hypotheses is that these modes are driven by Jovian storms. This work constructs a framework to analytically model the expected equilibrium normal mode amplitudes arising from convective columns in storms. We also place rough constraints on Jupiter's seismic modal quality factor. Using this model, neither meteor strikes, turbulent convection, nor water storms can feasibly excite the order of magnitude of observed amplitudes. Next we speculate about the potential role of rock storms deeper in Jupiter's atmosphere, because the rock storms' expected energy scales make them promising candidates to be the chief source of excitation for Jovian seismic modes, based on simple scaling arguments. We also suggest some general trends in the expected partition of energy between different frequency modes. Finally we supply some commentary on potential applications to gravity, Juno, Cassini and Saturn, and future missions to Uranus and Neptune.

  19. Numerical investigations on mapping permeability heterogeneity in coal seam gas reservoirs using seismo-electric methods

    NASA Astrophysics Data System (ADS)

    Gross, L.; Shaw, S.

    2016-04-01

    Mapping the horizontal distribution of permeability is a key problem for the coal seam gas industry. Poststack seismic data with anisotropy attributes provide estimates of fracture density and orientation which are then interpreted in terms of permeability. This approach delivers an indirect measure of permeability and can fail if other sources of anisotropy (for instance stress) come into play. Seismo-electric methods, based on recording the electric signal from pore fluid movements stimulated by a seismic wave, measure permeability directly. In this paper we use numerical simulations to demonstrate that the seismo-electric method is potentially suitable for mapping the horizontal distribution of permeability changes across coal seams. We propose the use of an amplitude versus offset (AVO) analysis of the electrical signal in combination with poststack seismic data collected during the exploration phase. Recording of electrical signals from a simple seismic source can also be carried out closer to production planning and operations. The numerical model is based on a sonic wave propagation model under the low-frequency, saturated-media assumption and uses a coupled high-order spectral element and low-order finite element solver. We investigate the impact of seam thickness, coal seam layering, layering in the overburden and horizontal heterogeneity of permeability.

  20. Estimation of source processes of the 2016 Kumamoto earthquakes from strong motion waveforms

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Suzuki, W.; Aoi, S.; Sekiguchi, H.

    2016-12-01

    In this study, we estimated the source processes for two large events of the 2016 Kumamoto earthquakes (the M7.3 event at 1:25 JST on April 16, 2016 and the M6.5 event at 21:26 JST on April 14, 2016) from strong motion waveforms using multiple-time-window linear waveform inversion (Hartzell and Heaton 1983; Sekiguchi et al. 2000). Based on the observations of surface ruptures, the spatial distribution of aftershocks, and the geodetic data, a realistic curved fault model was developed for the source-process analysis of the M7.3 event. The source model obtained for the M7.3 event, with a seismic moment of 5.5 × 10^19 Nm (Mw 7.1), had two significant ruptures. One rupture propagated toward the northeastern shallow region at 4 s after rupture initiation, and continued with large slips to approximately 16 s. This rupture caused a large slip region with a peak slip of 3.8 m that was located 10-30 km northeast of the hypocenter and reached the caldera of Mt. Aso. The contribution of the large slip region to the seismic waveforms was large at many stations. Another rupture propagated toward the surface from the hypocenter at 2-6 s, and then propagated toward the northeast along the near surface at 6-10 s. This rupture contributed strongly to the seismic waveforms at the stations south of the fault and close to the hypocenter. A comparison with the results obtained using a single fault plane model demonstrates that the use of the curved fault model led to improved waveform fits at the stations south of the fault. The extent of the large near-surface slips in this source model for the M7.3 event is roughly consistent with the extent of the observed large surface ruptures. The source model obtained for the M6.5 event, with a seismic moment of 1.7 × 10^18 Nm (Mw 6.1), had large slips in the region around the hypocenter and in the shallow region north-northeast of the hypocenter, both of which had a maximum slip of 0.7 m. The rupture of the M6.5 event propagated from the former region to the latter at 1-6 s after rupture initiation, which is expected to have caused the strong ground motions due to the forward directivity effect at KMMH16 and surrounding stations. The occurrence of the near-surface large slips in this source model for the M6.5 event is consistent with the appearance of small surface cracks observed by some residents.

  1. Spatial extent of a hydrothermal system at Kilauea Volcano, Hawaii, determined from array analyses of shallow long-period seismicity 1. Method

    USGS Publications Warehouse

    Almendros, J.; Chouet, B.; Dawson, P.

    2001-01-01

    We present a probabilistic method to locate the source of seismic events using seismic antennas. The method is based on a comparison of the event azimuths and slownesses derived from frequency-slowness analyses of array data, with a slowness vector model. Several slowness vector models are considered including both homogeneous and horizontally layered half-spaces and also a more complex medium representing the actual topography and three-dimensional velocity structure of the region under study. In this latter model the slowness vector is obtained from frequency-slowness analyses of synthetic signals. These signals are generated using the finite difference method and include the effects of topography and velocity structure to reproduce as closely as possible the behavior of the observed wave fields. A comparison of these results with those obtained with a homogeneous half-space demonstrates the importance of structural and topographic effects, which, if ignored, lead to a bias in the source location. We use synthetic seismograms to test the accuracy and stability of the method and to investigate the effect of our choice of probability distributions. We conclude that this location method can provide the source position of shallow events within a complex volcanic structure such as Kilauea Volcano with an error of ±200 m. Copyright 2001 by the American Geophysical Union.

  2. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, in which only the latter is constrained by observations.
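    For reference, stress-drop estimates of the kind compiled here are commonly tied to a circular-crack model, with the source radius obtained from a corner-frequency relation; in standard notation (the constant k is model dependent, roughly 0.37 for the Brune model):

```latex
\Delta\sigma = \frac{7}{16}\,\frac{M_0}{R^{3}},
\qquad
R = \frac{k\,\beta}{f_c}
```

    Here M_0 is the seismic moment, R the source radius, β the shear-wave speed and f_c the corner frequency.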

  3. Time-lapse seismic - repeatability versus usefulness and 2D versus 3D

    NASA Astrophysics Data System (ADS)

    Landro, M.

    2017-12-01

    Time-lapse seismic has developed rapidly over the past decades, especially for monitoring of oil and gas reservoirs and subsurface storage of CO2. I will review and discuss some of the critical enabling factors for the commercial success of this technology. It was realized early on that how well we are able to repeat our seismic experiment is crucial. However, it is always a question of detectability versus repeatability. For marine seismic, there are several factors limiting the repeatability: weather conditions, positioning of sources and receivers, and so on. I will discuss recent improvements in both acquisition and processing methods over the last decade. It is well known that repeated 3D seismic data are the most accurate tool for reservoir monitoring purposes. However, several examples show that 2D seismic data may be used for monitoring purposes despite lower repeatability. I will use examples from an underground blowout in the North Sea, and repeated 2D seismic lines acquired before and after the Tohoku earthquake in 2011, to illustrate this. A major challenge when using repeated 2D seismic for subsurface monitoring purposes is the lack of 3D calibration points and the significantly smaller amount of data. For marine seismic acquisition, feathering issues and crossline dip effects become more critical compared to 3D seismic acquisition. Furthermore, the uncertainties arising from a non-ideal 2D seismic acquisition are hard to assess, since the 3D subsurface geometry has not been mapped. One way to shed more light on this challenge is to use 3D time-lapse seismic modeling to test various crossline dips or geometries. Other ways are to use alternative data sources, such as bathymetry, time-lapse gravity or electromagnetic data. The end result for all time-lapse monitoring projects is an interpretation associated with uncertainties, and for the 2D case these uncertainties are often large. The purpose of this talk is to discuss how to reduce and control these uncertainties as much as possible.

  4. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a form of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value", the slope or "b-value", and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
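    A Gutenberg-Richter distribution with a corner magnitude of the kind described is often written as a tapered distribution in seismic moment. The sketch below uses hypothetical a-, b- and corner-magnitude values, not those estimated in the study.

```python
import numpy as np

def tapered_gr_rate(m, a=5.0, b=1.0, corner_mw=8.0, m_min=5.4):
    """Annual rate of events with magnitude >= m for a tapered
    Gutenberg-Richter distribution (exponential taper applied in moment)."""
    def moment(mw):                      # Hanks & Kanamori relation, N*m
        return 10 ** (1.5 * mw + 9.05)
    n_min = 10 ** (a - b * m_min)        # cumulative rate at the threshold magnitude
    beta = 2.0 * b / 3.0                 # moment-space exponent
    frac = (moment(m_min) / moment(m)) ** beta * np.exp(
        (moment(m_min) - moment(m)) / moment(corner_mw))
    return n_min * frac

# example: rate of M >= 7 events with and without the corner-magnitude taper
print(tapered_gr_rate(7.0), 10 ** (5.0 - 1.0 * 7.0))
```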

  5. Historical seismicity in the Middle East: new insights from Ottoman primary sources (sixteenth to mid-eighteenth centuries)

    NASA Astrophysics Data System (ADS)

    Tülüveli, Güçlü

    2015-10-01

    Considerable academic effort has been devoted to charting the history of seismic activity in the Middle East region. This short survey intends to contribute to these scientific attempts by analyzing Ottoman primary sources. There have been previous studies which utilized similar primary sources from the Ottoman archives, yet 15 new earthquakes emerged from the sources examined here. Moreover, the seismic impact of five known earthquakes is analyzed in the light of new data from Ottoman primary sources. A possible tsunami case is also included. The sources cover the period from the sixteenth to the end of the eighteenth century. This article intends to foster interdisciplinary dialogue for the purpose of initiating further detailed studies on past seismic events.

  6. Adaptive neuro-fuzzy inference systems for semi-automatic discrimination between seismic events: a study in Tehran region

    NASA Astrophysics Data System (ADS)

    Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro

    2012-04-01

    This article presents an adaptive neuro-fuzzy inference system (ANFIS) for classification of low-magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging in magnitude from 1.1 to 4.6, recorded at 13 seismic stations between 2004 and 2009. The database includes some 223 earthquakes with M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants; input features such as the origin time of the event, source-to-station distance, epicentral latitude and longitude, magnitude, and spectral content (fc of the Pg wave) increased the rate of correct classification and decreased the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for discriminating seismic events.

  7. Building a risk-targeted regional seismic hazard model for South-East Asia

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Nyst, M.; Seyhan, E.

    2015-12-01

    The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and the impact on the insurance business. We present the source model and ground-motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the countries of Indochina. The source model builds upon refined modelling approaches to characterize 1) seismic activity from geologic and geodetic data on crustal faults, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. Sumatra fault zone, Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return-period losses, average annual loss) and reviewing their relative impact on various lines of business.

  8. A unified numerical simulation of seismic ground motion, ocean acoustics, coseismic deformation and tsunami of the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Furumura, T.; Noguchi, S.; Takemura, S.; Iwai, K.; Lee, S.; Sakai, S.; Shinohara, M.

    2011-12-01

    The fault rupture of the 2011 Tohoku (Mw 9.0) earthquake spread over approximately 550 km by 260 km with a long source rupture duration of ~200 s. For such a large earthquake with a complicated source rupture process, the radiation of seismic waves from the source rupture and the initiation of the tsunami due to coseismic deformation are considered to be very complicated. In order to understand this complicated sequence of seismic waves, coseismic deformation and tsunami, we proposed a unified approach for total modeling of earthquake-induced phenomena in a single numerical scheme based on a finite-difference method simulation (Maeda and Furumura, 2011). This simulation model solves the equations of motion of linear elastic theory with equilibrium between quasi-static pressure and gravity in the water column. The height of the tsunami is obtained from the simulation as the vertical displacement of the ocean surface. In order to simulate seismic waves, ocean acoustics, coseismic deformation and tsunami from the 2011 Tohoku earthquake, we assembled a high-resolution 3D heterogeneous subsurface structural model of northern Japan. The simulation area is 1200 km x 800 km and 120 km in depth, discretized with grid intervals of 1 km in the horizontal directions and 0.25 km in the vertical direction. We adopt the source-rupture model proposed by Lee et al. (2011), which was obtained by joint inversion of teleseismic, near-field strong motion, and coseismic deformation data. For conducting such a large-scale simulation, we fully parallelized our simulation code based on a domain-partitioning procedure, which achieved good speed-up on up to 8192 core processors with a parallel efficiency of 99.839%. The simulation result clearly demonstrates the process in which the seismic waves radiate from the complicated source rupture over the fault plane and propagate through the heterogeneous structure of northern Japan. The generation of the tsunami from coseismic ground deformation at the sea floor and its subsequent propagation are also well demonstrated. The simulation also shows that a very large slip of up to 40 m at the shallow plate boundary near the trench pushes up the sea floor as the source rupture propagates, and the highly elevated sea surface gradually starts to propagate as a tsunami under gravity. The simulated vertical-component displacement waveform matches very consistently the ocean-bottom pressure gauge record installed just above the source fault area (Maeda et al., 2011). Strong reverberation of ocean-acoustic waves between the sea surface and the sea bottom, particularly near the Japan Trench, is confirmed in the simulation for a long time after the source rupture ends. Accordingly, long wavetrains of high-frequency ocean-acoustic waves develop and overlap the later tsunami waveforms, as found in the observations.
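    The unified scheme itself is 3D, elastic and includes the gravity/pressure equilibrium needed for the tsunami; the toy 1D acoustic staggered-grid loop below only illustrates the finite-difference time-stepping principle, with arbitrary grid and source parameters of my own choosing.

```python
import numpy as np

# Minimal 1D acoustic finite-difference sketch (velocity-pressure, staggered grid).
nx, dx, dt, nt = 2000, 100.0, 0.01, 3000           # grid points, m, s, time steps
c = np.full(nx, 3000.0); c[:600] = 1500.0          # two-layer velocity model (m/s)
rho = 2500.0
k = rho * c ** 2                                   # bulk modulus

v = np.zeros(nx)                                   # particle velocity nodes
p = np.zeros(nx + 1)                               # pressure nodes (staggered)
src = int(nx * 0.75)
for it in range(nt):
    t = it * dt
    p[src] += np.exp(-((t - 2.0) / 0.5) ** 2)      # Gaussian source wavelet
    v += dt / (rho * dx) * (p[1:] - p[:-1])        # velocity update from pressure gradient
    p[1:-1] += dt * k[:-1] / dx * (v[1:] - v[:-1]) # pressure update from velocity divergence
```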

  9. Constraining the crustal root geometry beneath the Rif Cordillera (North Morocco)

    NASA Astrophysics Data System (ADS)

    Diaz, Jordi; Gil, Alba; Carbonell, Ramon; Gallart, Josep; Harnafi, Mimoun

    2016-04-01

    The analysis of wide-angle reflections from controlled-source experiments and of receiver functions calculated from teleseismic events provides consistent constraints on an over-thickened crust beneath the Rif Cordillera (North Morocco). Regarding active-source data, we now investigate offline arrivals of Moho-reflected phases recorded in the RIFSIS project to obtain new estimates of 3D crustal thickness variations beneath North Morocco. Additional constraints on the onshore-offshore transition are derived from onland recording of marine airgun shots from the coeval Gassis-Topomed profiles. A regional crustal thickness map is computed from all these results. In parallel, we use natural seismicity data collected throughout the TopoIberia and PICASSO experiments, and from a new RIFSIS deployment, to obtain teleseismic receiver functions and explore the crustal thickness variations with an H-κ grid-search approach. The use of a larger dataset, including new stations covering the complex areas beneath the Rif Cordillera, allows us to improve the resolution of previous contributions, revealing abrupt crustal changes beneath the region. A gridded surface is built up by interpolating the Moho depths inferred for each seismic station and is then compared with the map from the controlled-source experiments. A remarkably consistent image is observed in both maps, derived from completely independent data and methods. Both approaches document a large crustal root, exceeding 50 km depth in the central part of the Rif, in contrast with the rather small topographic elevations. This large crustal thickness, consistent with the available Bouguer anomaly data, favors models proposing that the high-velocity slab imaged by seismic tomography beneath the Alboran Sea is still attached to the lithosphere beneath the Rif, hence pulling down the lithosphere and thickening the crust. The thickened area corresponds to a quiet seismic zone located between the western Morocco arcuate seismic zone, the deep seismicity area beneath the western Alboran Sea and the superficial seismicity in the Alhoceima area. Therefore, the presence of a crustal root also seems to play a major role in the seismicity distribution in northern Morocco.
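    The H-κ grid search mentioned above stacks receiver-function amplitudes at the predicted arrival times of the Ps conversion and its crustal multiples for trial crustal thicknesses H and Vp/Vs ratios κ. Below is a minimal sketch with illustrative grids, weights and an assumed crustal Vp, not the values used in the study.

```python
import numpy as np

def hk_stack(rf, time, p, vp=6.2, h_grid=None, k_grid=None, w=(0.6, 0.3, 0.1)):
    """H-kappa stacking of receiver functions (Zhu & Kanamori style).

    rf   : 2D array (n_rf, n_samples) of radial receiver functions
    time : sample times (s); p : ray parameters (s/km); vp : assumed crustal Vp (km/s)
    Returns the stack S(H, kappa); its maximum gives Moho depth and Vp/Vs."""
    h_grid = np.arange(20.0, 60.0, 0.5) if h_grid is None else h_grid
    k_grid = np.arange(1.6, 2.0, 0.01) if k_grid is None else k_grid
    stack = np.zeros((len(h_grid), len(k_grid)))
    for i, h in enumerate(h_grid):
        for j, k in enumerate(k_grid):
            for rfi, pi in zip(rf, p):
                qs = np.sqrt((k / vp) ** 2 - pi ** 2)    # vertical S slowness (Vs = Vp/kappa)
                qp = np.sqrt((1.0 / vp) ** 2 - pi ** 2)  # vertical P slowness
                t_ps, t_ppps, t_ppss = h * (qs - qp), h * (qs + qp), 2.0 * h * qs
                amp = np.interp([t_ps, t_ppps, t_ppss], time, rfi)
                stack[i, j] += w[0] * amp[0] + w[1] * amp[1] - w[2] * amp[2]
    return stack, h_grid, k_grid
```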

  10. Induced Seismicity from different sources in Italy: how to interpret it?

    NASA Astrophysics Data System (ADS)

    Pastori, M.; De Gori, P.; Piccinini, D.; Bagh, S.; Improta, L.; Chiarabba, C.

    2015-12-01

    Typically the term "induced seismicity" refers to minor earthquakes and tremors caused by human activities that alter the stresses and strains in the Earth's crust. In recent years, interest in induced seismicity related to the extraction or injection of fluids (oil and gas, and geothermal resources) has increased, because such operations are believed to be capable of nucleating earthquakes. Possible sources of induced seismicity are not only oil and gas production but also, for example, changes in the water level of artificial lakes. The aim of this work is to show results from two different sources, wastewater injection and changes in the water level of an artificial reservoir (Pertusillo lake), that can produce the induced earthquakes observed in the Val d'Agri basin (Italy), and to compare them with variations in crustal elastic parameters. The Val d'Agri basin in the Apennines extensional belt hosts the largest onshore oilfield in Europe and is bordered by NW-SE trending fault systems. Most of the recorded seismicity seems to be related to these structures. We correlated the seismicity rate, injection curves and changes in water level with temporal variations of Vp/Vs and anisotropic parameters in the crustal reservoirs and the nearby area. We analysed 983 high-quality recordings of events that occurred from 2002 to 2014 in the Val d'Agri basin, from temporary and permanent networks operated by INGV and ENI. 3D high-precision locations and manually revised P- and S-picks are used to estimate anisotropic parameters (delay time and fast polarization direction) and the Vp/Vs ratio. Seismicity is mainly located in two areas: SW of the Pertusillo Lake and near the Eni oil field (SW and NE of the Val d'Agri basin, respectively). Our correlations clearly capture the seismicity diffusion process caused by both water injection and water-level changes; these findings could help to model the failure behaviour of active and pre-existing faults.

  11. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    NASA Astrophysics Data System (ADS)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending-track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending- and descending-track time series for the 2013-2016 period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10^6 m^3/yr. The sources consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano-tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. The InSAR time series also show normal-fault offsets on the NE-flank Copahue faults. Coulomb stress change calculations for right-lateral strike-slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that the northward-trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.
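    The study constrains compound dislocation sources; as a simpler point of reference, the surface uplift produced by a point (Mogi) volume-change source in an elastic half-space is often used for first-order intuition. The sketch below takes the 13 × 10^6 m^3/yr volume-change rate and the 7 km source depth from the abstract, with everything else assumed; it is not the model used in the paper.

```python
import numpy as np

def mogi_uz(x, y, x0, y0, depth, dv, nu=0.25):
    """Vertical surface displacement from a Mogi point pressure source with
    volume change dv located at (x0, y0, depth) in an elastic half-space."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return (1.0 - nu) * dv * depth / (np.pi * (r2 + depth ** 2) ** 1.5)

# example: 13e6 m^3/yr volume increase at 7 km depth -> peak uplift rate (m/yr)
print(mogi_uz(0.0, 0.0, 0.0, 0.0, 7000.0, 13e6))
```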

  12. 1D Seismic reflection technique to increase depth information in surface seismic investigations

    NASA Astrophysics Data System (ADS)

    Camilletti, Stefano; Fiera, Francesco; Umberto Pacini, Lando; Perini, Massimiliano; Prosperi, Andrea

    2017-04-01

    1D seismic methods, such as MASW, Re.Mi. and HVSR, have been extensively used in engineering investigations, bedrock research, Vs profiling and, to some extent, hydrologic applications during the past 20 years. Recent advances in equipment, sound sources and computer interpretation techniques make 1D seismic methods highly effective in shallow subsoil modeling. Classical 1D seismic surveys allow economical collection of subsurface data; however, they fail to return accurate information for depths greater than 50 meters. Using a particular acquisition technique, it is possible to collect data that can be quickly processed with the reflection technique in order to obtain more accurate velocity information at depth. Furthermore, the data processing returns a narrow stratigraphic section, alongside the 1D velocity model, in which lithological boundaries are represented. This work shows how to collect a single-CMP gather to determine: (1) the depth of bedrock; (2) gravel layers in clayey domains; (3) an accurate Vs profile. Seismic traces were processed by means of new software developed in collaboration with the SARA electronics instruments S.r.l company, Perugia - ITALY. This software has the great advantage that it can be used directly in the field, reducing the time elapsing between acquisition and processing.
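    A single CMP gather of the kind described can be scanned for stacking velocity using hyperbolic normal moveout. The sketch below is illustrative only and is not the SARA software implementation.

```python
import numpy as np

def nmo_time(t0, offset, v):
    """Hyperbolic reflection moveout for a single CMP gather."""
    return np.sqrt(t0 ** 2 + (offset / v) ** 2)

def semblance_scan(cmp_gather, offsets, time, velocities):
    """Small constant-velocity semblance scan: for each trial velocity and
    zero-offset time, stack amplitudes along the NMO hyperbola."""
    out = np.zeros((len(time), len(velocities)))
    for j, v in enumerate(velocities):
        for i, t0 in enumerate(time):
            t = nmo_time(t0, offsets, v)
            amp = np.array([np.interp(ti, time, tr) for ti, tr in zip(t, cmp_gather)])
            out[i, j] = amp.sum() ** 2 / (len(amp) * (amp ** 2).sum() + 1e-12)
    return out
```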

  13. A transparent and data-driven global tectonic regionalization model for seismic hazard assessment

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Shin; Weatherill, Graeme; Pagani, Marco; Cotton, Fabrice

    2018-05-01

    A key concept common to many assumptions inherent in seismic hazard assessment is that of tectonic similarity. This recognizes that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalization, particularly within a seismic hazard assessment context, have often been based on expert judgement; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalization process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalization model for seismic hazard applications, together with the subjective probabilities (e.g. degree of being active/degree of being cratonic) that indicate the degree to which a site belongs to a tectonic category.
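    A fuzzy classification of this kind assigns each site graded memberships rather than a single crisp label. The sketch below shows the idea with trapezoidal membership functions; the attributes, breakpoints and units are hypothetical and are not those of the published model.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b] and falling on [c, d]."""
    x = np.asarray(x, dtype=float)
    up = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    down = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(up, down)

# hypothetical site attributes: a seismic-moment-rate proxy and a crustal-age proxy
moment_rate, crust_age_ma = 2.0e17, 900.0          # illustrative values only

degree_active = trapezoid(np.log10(moment_rate), 15.0, 17.0, 25.0, 26.0)
degree_cratonic = trapezoid(crust_age_ma, 500.0, 1000.0, 4000.0, 4500.0)
# a site can belong to several tectonic categories with different degrees
print(float(degree_active), float(degree_cratonic))
```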

  14. Integrated Potential-field Studies in Support of Energy Resource Assessment in Frontier Areas of Alaska

    NASA Astrophysics Data System (ADS)

    Phillips, J. D.; Saltus, R. W.; Potter, C. J.; Stanley, R. G.; Till, A. B.

    2008-05-01

    In frontier areas of Alaska, potential-field studies play an important role in characterizing the geologic structure of sedimentary basins having potential for undiscovered oil and gas resources. Two such areas are the Yukon Flats basin in the east-central interior of Alaska, and the coastal plain of the Arctic National Wildlife Refuge (ANWR) in northeastern Alaska. The Yukon Flats basin is a potential source of hydrocarbon resources for local consumption and possible export. Knowledge of the subsurface configuration of the basin is restricted to a few seismic reflection profiles covering a limited area and one well. The seismic profiles were reprocessed and reinterpreted in preparation for an assessment of the oil and gas resources of the basin. The assessment effort required knowledge of the basin configuration away from the seismic profiles, as well as an understanding of the nature of the underlying basement. To extend the interpretation of the basin thickness across the entire area of the basin, an iterative Jachens-Moring gravity inversion was performed on gridded quasi-isostatic residual gravity anomaly data. The inversion was constrained to agree with the interpreted basement surface along the seismic profiles. In addition to the main sedimentary depocenter interpreted from the seismic data as having over 8 km of fill, the gravity inversion indicated a depocenter with over 7 km of fill in the Crooked Creek sub-basin. Results for the Crooked Creek sub-basin are consistent with magnetic and magnetotelluric modeling, but they await confirmation by drilling or seismic profiling. Whether hydrocarbon source rocks are present in the pre-Cenozoic basement beneath Yukon Flats is difficult to determine because extensive surficial deposits obscure the bedrock geology, and no deep boreholes penetrate basement. The color and texture patterns in a red-green-blue composite image consisting of reduced-to-the-pole aeromagnetic data (red), magnetic potential (blue), and basement gravity (green) highlight domains with common geophysical characteristics and, by inference, lithology. The observed patterns suggest that much of the basin is underlain by Devonian to Jurassic oceanic rocks that probably have little or no potential for hydrocarbon generation. The coastal plain surficial deposits in the northern part of ANWR conceal another frontier basin with hydrocarbon potential. Proprietary aeromagnetic and gravity data were used, along with seismic reflection profiles, to construct a structural and stratigraphic model of this highly deformed sedimentary basin for use in an energy resource assessment. Matched-filtering techniques were used to separate short-wavelength magnetic and gravity anomalies attributed to sources near the top of the sedimentary section from longer-wavelength anomalies attributed to deeper basin and basement sources. Models along the seismic reflection lines indicate that the primary sources of the short-wavelength anomalies are folded and faulted sedimentary beds truncated at the Pleistocene erosion surface. In map view, the aeromagnetic and gravity anomalies produced by the sedimentary units were used to identify possible structural trapping features and geometries, but they also indicated that these features may be significantly disrupted by faulting.
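    As a rough guide to how a gravity inversion of the Jachens-Moring type maps residual anomalies into basin-fill thickness, the sketch below uses the simplest possible forward model, an infinite Bouguer slab. The density contrast is an assumed placeholder, and the actual scheme iterates with a prism-based forward calculation, separates basement from basin gravity, and honors the seismic constraints mentioned above.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def slab_thickness_estimate(residual_mgal, delta_rho=-450.0):
    """First-order basin-fill thickness (m) from a gridded residual gravity anomaly,
    using the infinite Bouguer slab relation g = 2*pi*G*delta_rho*t. delta_rho is an
    assumed density contrast of basin fill relative to basement, in kg/m^3."""
    g_si = np.asarray(residual_mgal, dtype=float) * 1e-5     # mGal -> m/s^2
    thickness = g_si / (2.0 * np.pi * G * delta_rho)
    return np.maximum(thickness, 0.0)                        # negative fill is clipped

# a -40 mGal residual maps to roughly 2 km of light sediment fill under these assumptions
print(slab_thickness_estimate([-40.0, -10.0, 0.0]))
```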

  15. Sequential combination of multi-source satellite observations for separation of surface deformation associated with serial seismic events

    NASA Astrophysics Data System (ADS)

    Chen, Qiang; Xu, Qian; Zhang, Yijun; Yang, Yinghui; Yong, Qi; Liu, Guoxiang; Liu, Xianwen

    2018-03-01

    A single satellite geodetic technique has limitations for mapping a sequence of ground deformation associated with serial seismic events; for example, InSAR with a long revisiting period readily mixes complex deformation signals from multiple events. This challenges the capability of a single satellite geodetic technique to accurately recognize individual surface deformation fields and earthquake models. The rapidly increasing availability of various satellite observations provides a good solution to this issue. In this study, we explore a sequential combination of multiple overlapping datasets from ALOS/PALSAR, ENVISAT/ASAR and GPS observations to separate the surface deformation associated with the 2011 Mw 9.0 Tohoku-Oki mainshock and two strong aftershocks, the Mw 6.6 Iwaki and Mw 5.8 Ibaraki events. We first estimate the fault slip model of the mainshock with ASAR interferometry and GPS displacements as constraints. Because the PALSAR interferogram used spans the period of all the events, we then remove the surface deformation of the mainshock through a forward-calculated prediction, thus obtaining the PALSAR InSAR deformation associated with the two strong aftershocks. The inversion for the source parameters of the Iwaki aftershock is conducted using the refined PALSAR deformation, considering that the larger-magnitude Iwaki earthquake has a dominant deformation contribution relative to the Ibaraki event. After removal of the deformation component of the Iwaki event, we determine the fault slip distribution of the Ibaraki shock using the remaining PALSAR InSAR deformation. Finally, the complete source models for the serial seismic events are clearly identified from the sequential combination of multi-source satellite observations, which suggest that the mainshock is a predominantly mega-thrust rupture, whereas the two aftershocks are normal faulting events. The estimated moment magnitudes for the Tohoku-Oki, Iwaki and Ibaraki events are Mw 9.0, Mw 6.85 and Mw 6.11, respectively.

  16. High-Frequency Ground-Motion Parameters from Weak-Motion Data in the Sicily Channel and Surrounding Regions

    NASA Astrophysics Data System (ADS)

    D'Amico, Sebastiano; Akinci, Aybige; Pischiutta, Marta

    2018-03-01

    In this paper we characterize the high-frequency (1.0-10 Hz) crustal attenuation of seismic waves and the source excitation in the Sicily Channel and surrounding regions using background seismicity from a weak-motion database. The data set includes 15995 waveforms from earthquakes with local magnitudes ranging from 2.0 to 4.5 recorded between 2006 and 2012. The observed and predicted ground motions from the weak-motion data are evaluated in several narrow frequency bands from 0.25 to 20.0 Hz. The filtered observed peaks are regressed to specify proper functional forms for the regional attenuation, excitation and site-specific terms separately. The results are then used to calibrate effective theoretical attenuation and source excitation models using Random Vibration Theory (RVT). In the log-log domain, the regional seismic wave attenuation and the geometrical spreading coefficient are modeled together. The geometrical spreading coefficient, g(r), is modeled with a bilinear piecewise functional form, given as g(r) ∝ r^-1.0 at short distances (r < 50 km) and g(r) ∝ r^-0.8 at larger distances (r > 50 km). A frequency-dependent quality factor, the inverse of the seismic attenuation parameter, Q(f) = 160 (f/f_ref)^0.35 (where f_ref = 1.0 Hz), is combined with the geometrical spreading. The source excitation terms are defined at a selected reference distance with a magnitude-independent spectral roll-off parameter, κ = 0.04 s, and with a Brune stress drop parameter increasing with moment magnitude, from Δσ = 2 MPa for Mw = 2.0 to Δσ = 13 MPa for Mw = 4.5. For events with M ≤ 4.5 (Mw = 4.5 being the maximum available in the dataset) the stress parameters are obtained by correlating the empirical excitation source spectra with the Brune spectral model as a function of magnitude. For larger magnitudes (Mw > 4.5), outside the range available in the calibration dataset where we do not have recorded data, we extrapolate our results by calibrating the stress parameters of the Brune source spectrum against the Bindi et al. (2011) ground motion prediction equation (GMPE), selected as a reference model (hereafter also ITA10).
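    The calibrated pieces quoted above (bilinear geometrical spreading hinged at 50 km, Q(f) = 160 (f/f_ref)^0.35, and a kappa site term) combine into a Fourier-amplitude attenuation factor of the form g(r)·exp(-π f r / (Q(f) β))·exp(-π f κ0). A minimal sketch follows; the shear-wave velocity β and the exact way the terms are assembled are assumptions for illustration, not taken from the paper.

```python
import numpy as np

BETA = 3.5          # shear-wave velocity in km/s (assumed, not from the paper)
F_REF = 1.0         # reference frequency in Hz

def geometrical_spreading(r_km):
    """Bilinear piecewise spreading: r^-1.0 within 50 km, hinged to r^-0.8 beyond."""
    r = np.asarray(r_km, dtype=float)
    near = r ** -1.0
    far = (50.0 ** -1.0) * (r / 50.0) ** -0.8   # continuous at the 50 km hinge
    return np.where(r <= 50.0, near, far)

def quality_factor(f_hz):
    """Frequency-dependent crustal quality factor Q(f) = 160 (f / f_ref)^0.35."""
    return 160.0 * (np.asarray(f_hz, dtype=float) / F_REF) ** 0.35

def attenuation(r_km, f_hz, kappa0=0.04):
    """Combined path and site attenuation of Fourier spectral amplitude:
    geometrical spreading x anelastic term exp(-pi f r / (Q(f) beta)) x kappa filter."""
    r = np.asarray(r_km, dtype=float)
    f = np.asarray(f_hz, dtype=float)
    anelastic = np.exp(-np.pi * f * r / (quality_factor(f) * BETA))
    site = np.exp(-np.pi * f * kappa0)
    return geometrical_spreading(r) * anelastic * site

print(attenuation(r_km=80.0, f_hz=5.0))
```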

  17. Detailed seismic velocity structure of the ultra-slow spread crust at the Mid-Cayman Spreading Center from travel-time tomography and synthetic seismograms

    NASA Astrophysics Data System (ADS)

    Harding, J.; Van Avendonk, H. J.; Hayman, N. W.; Grevemeyer, I.; Peirce, C.

    2017-12-01

    The Mid-Cayman Spreading Center (MCSC), an ultraslow-spreading center in the Caribbean Sea, has formed highly variable oceanic crust. Seafloor dredges have recovered extrusive basalts in the axial deeps, as well as gabbro on bathymetric highs and exhumed mantle peridotite, along the only 110-km-long MCSC. Wide-angle refraction data were collected with active-source ocean bottom seismometers in April 2015, along lines parallel to and across the MCSC. Travel-time tomography produces relatively smooth 2-D tomographic models of compressional wave velocity. These velocity models reveal large along- and across-axis variations in seismic velocity, indicating possible changes in crustal thickness, composition, faulting, and magmatism. It is difficult, however, to differentiate between competing interpretations of seismic velocity using these tomographic models alone. For example, in some areas the seismic velocities may be explained by either thin igneous crust or exhumed, serpentinized mantle. Distinguishing between these two interpretations is important as we explore the relationships between magmatism, faulting, and hydrothermal venting at ultraslow-spreading centers. We therefore improved our constraints on the shallow seismic velocity structure of the MCSC by modeling the amplitudes of seismic refractions in the wide-angle data set. Synthetic seismograms were calculated with a finite-difference method for a range of models with different vertical velocity gradients. Small-scale features in the velocity models, such as steep velocity gradients and Moho boundaries, were explored systematically to best fit the real data. With this approach, we have improved our understanding of the compressional velocity structure of the MCSC, along with the geological interpretations that are consistent with the three seismic refraction profiles. Line P01 shows along-axis variation in the thickness of the low-velocity layer, indicating two segment centers, while across-axis lines P02 and P03 show variations in igneous crustal thickness and exhumed mantle in some areas.

  18. Passive seismic imaging based on seismic interferometry: method and its application to image the structure around the 2013 Mw6.6 Lushan earthquake

    NASA Astrophysics Data System (ADS)

    Gu, N.; Zhang, H.

    2017-12-01

    Seismic imaging of fault zones generally involves seismic velocity tomography using first-arrival times or full waveforms from earthquakes occurring around the fault zones. However, in most cases seismic velocity tomography only gives a smooth image of the fault zone structure. To obtain a high-resolution image of a fault zone, seismic migration of active-source data is needed, but it is generally too expensive to conduct active seismic surveys, even in 2D. Here we propose to apply a passive seismic imaging method based on seismic interferometry to image detailed fault zone structures. Seismic interferometry generally refers to the construction of new seismic records for virtual sources and receivers by cross-correlating and stacking the seismic records on physical receivers from physical sources. In this study, we utilize seismic waveforms recorded on surface seismic stations for each earthquake to construct a zero-offset seismic record at each earthquake location, as if there were a virtual receiver at each earthquake location. We have applied this method to image the fault zone structure around the 2013 Mw 6.6 Lushan earthquake. After the occurrence of the mainshock, a 29-station temporary array was installed to monitor aftershocks. In this study, we first select aftershocks along several vertical cross sections approximately normal to the fault strike. Then we create several zero-offset seismic reflection sections by seismic interferometry with seismic waveforms from aftershocks around each section. Finally we migrate these zero-offset sections to image the seismic structure around the fault zones. From these migration images, we can clearly identify strong reflectors, which correspond to the major reverse fault where the mainshock occurred. This application shows that it is possible to image detailed fault zone structures with passive seismic sources.
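    At the core of the interferometric step described above is a correlate-and-stack operation on the recorded waveforms. The toy sketch below builds one "virtual" zero-offset trace from the records of a single aftershock by cross-correlating each trace with a reference trace and stacking the causal lags; it only illustrates the correlation step, not the geometry handling, phase selection, or migration used in the study.

```python
import numpy as np

def virtual_zero_offset(traces, dt):
    """Cross-correlate every trace of one aftershock with a reference trace and
    stack the causal (non-negative lag) parts to form one 'virtual' zero-offset trace.
    `traces` is a (n_receivers, n_samples) array; returns (lags in s, stacked trace)."""
    n_rec, n_samp = traces.shape
    ref = traces[0]
    stack = np.zeros(n_samp)
    for tr in traces:
        xcorr = np.correlate(tr, ref, mode="full")   # length 2*n_samp - 1, zero lag at n_samp - 1
        stack += xcorr[n_samp - 1:]
    stack /= n_rec
    lags = np.arange(n_samp) * dt
    return lags, stack

# toy usage with random records (in practice: band-passed aftershock waveforms)
rng = np.random.default_rng(1)
lags, trace = virtual_zero_offset(rng.standard_normal((8, 500)), dt=0.01)
print(lags[np.argmax(np.abs(trace[1:])) + 1])   # lag of the strongest non-zero-lag arrival
```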

  19. Seismic noise frequency dependent P and S wave sources

    NASA Astrophysics Data System (ADS)

    Stutzmann, E.; Schimmel, M.; Gualtieri, L.; Farra, V.; Ardhuin, F.

    2013-12-01

    Seismic noise in the period band 3-10 s is generated in the oceans by the interaction of ocean waves. The noise signal is dominated by Rayleigh waves, but body waves can be extracted using a beamforming approach. We select the TAPAS array deployed in southern Spain between June 2008 and September 2009 and use the vertical and horizontal components to extract noise P and S waves, respectively. Data are filtered in narrow frequency bands, and we select the beam azimuths and slownesses that correspond to the largest continuous sources per day. Our procedure automatically discards earthquakes, which are localized over short time durations. Using this approach, we detect many more noise P waves than S waves. Source locations are determined by back-projecting the detected slowness/azimuth. P and S waves are generated in nearby areas, and both source locations are frequency dependent. Long-period sources are dominantly in the South Atlantic and Indian Ocean, whereas shorter-period sources are rather in the North Atlantic Ocean. We further show that the detected S waves are dominantly SV waves. We model the observed body waves using an ocean wave model that takes into account all possible wave interactions, including coastal reflection. We use the wave model to separate direct and multiply reflected phases for P and S waves, respectively. We show that in the South Atlantic the complex source pattern can be explained by the existence of both coastal and pelagic sources, whereas in the North Atlantic most body wave sources are pelagic. For each detected source, we determine the equivalent source magnitude, which is compared to the model.
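    The body-wave extraction relies on plane-wave beamforming over a slowness/azimuth grid. Below is a minimal frequency-domain delay-and-sum sketch; the array geometry, frequency band, grid and normalization are illustrative assumptions rather than the processing actually applied to the TAPAS array.

```python
import numpy as np

def beam_power(data, coords, dt, freq_band, slownesses, azimuths):
    """Plane-wave delay-and-sum beamforming.
    data: (n_sta, n_samp) records; coords: (n_sta, 2) east/north offsets in km;
    slownesses in s/km; azimuths in degrees (direction the wave comes FROM).
    Returns beam power on the slowness x azimuth grid."""
    n_sta, n_samp = data.shape
    spectra = np.fft.rfft(data, axis=1)
    freqs = np.fft.rfftfreq(n_samp, dt)
    band = (freqs >= freq_band[0]) & (freqs <= freq_band[1])
    power = np.zeros((len(slownesses), len(azimuths)))
    for i, s in enumerate(slownesses):
        for j, az in enumerate(azimuths):
            a = np.radians(az)
            # horizontal slowness vector of the incoming plane wave (propagation direction)
            sx, sy = -s * np.sin(a), -s * np.cos(a)
            delays = coords[:, 0] * sx + coords[:, 1] * sy            # arrival delay per station
            shifts = np.exp(2j * np.pi * np.outer(delays, freqs[band]))
            beam = np.mean(spectra[:, band] * shifts, axis=0)         # align and sum
            power[i, j] = np.sum(np.abs(beam) ** 2)
    return power
```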

  20. Effects of volcano topography on seismic broad-band waveforms

    NASA Astrophysics Data System (ADS)

    Neuberg, Jürgen; Pointer, Tim

    2000-10-01

    Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane-free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of different Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone is dependent on source depth, dominant wavelength and topography in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.

  1. 3D Modelling of Seismically Active Parts of Underground Faults via Seismic Data Mining

    NASA Astrophysics Data System (ADS)

    Frantzeskakis, Theofanis; Konstantaras, Anthony

    2015-04-01

    During the last few years, rapid steps have been taken towards drilling for oil in the western Mediterranean Sea. Since most of the countries in the region benefit mainly from tourism, and considering that the Mediterranean is a closed sea that replenishes its water only once every ninety years, careful measures are being taken to ensure safe drilling. In that context, this research work attempts to derive a three-dimensional model of the seismically active parts of the underlying underground faults in areas of petroleum interest. For that purpose, seismic spatio-temporal clustering has been applied to seismic data to identify potential distinct seismic regions in the area of interest. Results have been coalesced with two-dimensional maps of underground faults from past surveys, and seismic epicentres, after careful relocation processing, have been used to provide information on the vertical extent of multiple underground faults in the region of interest. The end product is a three-dimensional map of the possible underground location and extent of the seismically active parts of underground faults. Indexing terms: underground faults modelling, seismic data mining, 3D visualisation, active seismic source mapping, seismic hazard evaluation, dangerous phenomena modelling. Acknowledgment: This research work is supported by the ESPA Operational Programme, Education and Life Long Learning, Students Practical Placement Initiative. References: [1] Alves, T.M., Kokinou, E. and Zodiatis, G.: 'A three-step model to assess shoreline and offshore susceptibility to oil spills: The South Aegean (Crete) as an analogue for confined marine basins', Marine Pollution Bulletin, in press, 2014. [2] Ciappa, A. and Costabile, S.: 'Oil spill hazard assessment using a reverse trajectory method for the Egadi marine protected area (Central Mediterranean Sea)', Marine Pollution Bulletin, vol. 84 (1-2), pp. 44-55, 2014. [3] Ganas, A., Karastathis, V., Moshou, A., Valkaniotis, S., Mouzakiotis, E. and Papathanassiou, G.: 'Aftershock relocation and frequency-size distribution, stress inversion and seismotectonic setting of the 7 August 2013 M=5.4 earthquake in Kallidromon Mountain, central Greece', Tectonophysics, vol. 617, pp. 101-113, 2014. [4] Maravelakis, E., Bilalis, N., Mantzorou, I., Konstantaras, A. and Antoniadis, A.: '3D modelling of the oldest olive tree of the world', International Journal of Computational Engineering Research, vol. 2 (2), pp. 340-347, 2012. [5] Konstantaras, A., Katsifarakis, E., Maravelakis, E., Skounakis, E., Kokkinos, E. and Karapidakis, E.: 'Intelligent spatial-clustering of seismicity in the vicinity of the Hellenic seismic arc', Earth Science Research, vol. 1 (2), pp. 1-10, 2012. [6] Georgoulas, G., Konstantaras, A., Katsifarakis, E., Stylios, C., Maravelakis, E. and Vachtsevanos, G.: '"Seismic-mass" density-based algorithm for spatio-temporal clustering', Expert Systems with Applications, vol. 40 (10), pp. 4183-4189, 2013. [7] Konstantaras, A.: 'Classification of distinct seismic regions and regional temporal modelling of seismicity in the vicinity of the Hellenic seismic arc', IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 99, pp. 1-7, 2013.
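    The spatio-temporal clustering step can be illustrated with a generic density-based clusterer. The sketch below uses scikit-learn's DBSCAN as a stand-in for the "seismic-mass" density-based algorithm cited in reference [6]; the availability of scikit-learn, the axis scalings, and the parameters are placeholders chosen only to make the example self-contained.

```python
import numpy as np
from sklearn.cluster import DBSCAN   # assumes scikit-learn is installed

def cluster_seismicity(events, eps=1.0, min_samples=10):
    """Group epicentres (lon, lat, depth_km, time_days) into distinct seismic regions.
    A crude spatio-temporal metric is built by scaling each axis so that `eps`
    means roughly the same neighbourhood size in space and time; the scaling
    constants below are illustrative, not values from the study."""
    events = np.asarray(events, dtype=float)
    lon, lat, depth, t = (events[:, i] for i in range(4))
    km_per_deg = 111.0
    features = np.column_stack([
        lon * km_per_deg * np.cos(np.radians(lat.mean())) / 10.0,  # ~10 km spatial unit
        lat * km_per_deg / 10.0,
        depth / 10.0,
        t / 30.0,                                                  # ~1 month temporal unit
    ])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    return labels   # -1 marks background events, >= 0 marks distinct seismic regions
```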

  2. Complex Rayleigh Waves Produced by Shallow Sedimentary Basins and their Potential Effects on Mid-Rise Buildings

    NASA Astrophysics Data System (ADS)

    Kohler, M. D.; Castillo, J.; Massari, A.; Clayton, R. W.

    2017-12-01

    Earthquake-induced motions recorded by spatially dense seismic arrays in buildings located in the northern Los Angeles basin suggest the presence of complex, amplified surface wave effects on the seismic demand of mid-rise buildings. Several moderate earthquakes produced large-amplitude seismic energy with slow shear-wave velocities that cannot be explained or accurately modeled by any published 3D seismic velocity model or by Vs30 values. Numerical experiments are conducted to determine whether sedimentary basin features are responsible for these rarely modeled and poorly documented contributions to seismic demand computations. This is accomplished through a physics-based wave propagation examination of the effects of different sedimentary basin geometries on the nonlinear response of a mid-rise structural model based on an existing, instrumented building. Using two-dimensional finite-difference predictive modeling, we show that when an earthquake focal depth is near the vertical edge of an elongated and relatively shallow sedimentary basin, dramatically amplified and complex surface waves are generated as a result of the waveguide effect introduced by this velocity structure. In addition, for certain source-receiver distances and basin geometries, body waves convert to secondary Rayleigh waves that propagate both at the free-surface interface and along the depth interface of the basin and show up as multiple large-amplitude arrivals. This study is motivated by observations from the spatially dense, high-sample-rate acceleration data recorded by the Community Seismic Network, a community-hosted strong-motion network currently consisting of hundreds of sensors located in the southern California area. The results provide quantitative insight into the causative relationship between a sedimentary basin shape and the generation of Rayleigh waves at depth, surface waves at the free surface, scattered seismic energy, and the sensitivity of building responses to each of these.

  3. Wavelet extractor: A Bayesian well-tie and wavelet extraction program

    NASA Astrophysics Data System (ADS)

    Gunning, James; Glinsky, Michael E.

    2006-06-01

    We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software simultaneously estimates wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and is thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but it will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
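    The heart of the well-tie problem is the convolutional model, seismic ≈ wavelet * reflectivity + noise. The sketch below shows a plain least-squares wavelet estimate under that model, a simplified maximum-likelihood analogue of what the Bayesian formulation generalizes; it is not the Delivery toolkit's implementation, and the wavelet-length parameter is an arbitrary choice.

```python
import numpy as np

def estimate_wavelet(reflectivity, seismic_trace, half_len=20):
    """Least-squares wavelet estimate from the convolutional model
    seismic ~ wavelet * reflectivity + noise. `reflectivity` (from the well logs)
    and `seismic_trace` must be the same length; the returned wavelet has
    2*half_len + 1 samples centered on zero lag."""
    reflectivity = np.asarray(reflectivity, dtype=float)
    seismic_trace = np.asarray(seismic_trace, dtype=float)
    n = len(seismic_trace)
    n_w = 2 * half_len + 1
    # column k of A applies a lag of (k - half_len) samples to the reflectivity
    A = np.zeros((n, n_w))
    for k in range(n_w):
        lag = k - half_len
        if lag >= 0:
            A[lag:, k] = reflectivity[:n - lag]
        else:
            A[:n + lag, k] = reflectivity[-lag:]
    wavelet, *_ = np.linalg.lstsq(A, seismic_trace, rcond=None)
    return wavelet
```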

  4. Scenarios for earthquake-generated tsunamis on a complex tectonic area of diffuse deformation and low velocity: The Alboran Sea, Western Mediterranean

    USGS Publications Warehouse

    Alvarez-Gomez, J. A.; Aniel-Quiroga, I.; Gonzalez, M.; Olabarrieta, Maitane; Carreno, E.

    2011-01-01

    The tsunami impact on the Spanish and North African coasts of the Alboran Sea generated by several credible tsunamigenic seismic sources in this area was modeled. The tectonic setting is complex, and a study of the potential sources from geological data is fundamental to obtaining probable source characteristics. The tectonic structures considered in this study as potentially tsunamigenic are the structures associated with the Alboran Ridge, the Carboneras Fault Zone and the Yusuf Fault Zone. We characterized 12 probable tsunamigenic seismic sources in the Alboran Basin based on the results of recent oceanographic studies. The strain rate in the area is low, and therefore its seismicity is moderate and cannot be used to infer the characteristics of the major seismic sources. These sources have been used as input for the numerical simulation of wave propagation, based on the solution of the nonlinear shallow water equations through a finite-difference technique. We calculated maximum wave elevations and tsunami travel times from the numerical simulations. The results are shown as maps and profiles along the Spanish and African coasts. The sources associated with the Alboran Ridge show the greatest potential to generate damaging tsunamis, with maximum wave elevations in front of the coast exceeding 1.5 m. The Carboneras and Yusuf faults are not capable of generating disastrous tsunamis on their own, although their proximity to the coast could trigger landslides and associated sea disturbances. The areas most exposed to the impact of tsunamis generated in the Alboran Sea are the Spanish coast between Malaga and Adra, and the African coast between Alhoceima and Melilla.

  5. Variations in pockmark composition at the Vestnesa Ridge: Insights from marine controlled source electromagnetic and seismic data

    NASA Astrophysics Data System (ADS)

    Goswami, Bedanta K.; Weitemeyer, Karen A.; Bünz, Stefan; Minshull, Timothy A.; Westbrook, Graham K.; Ker, Stephan; Sinha, Martin C.

    2017-03-01

    The Vestnesa Ridge marks the northern boundary of a known submarine gas hydrate province on the west Svalbard margin. Several seafloor pockmarks on the eastern segment of the ridge are sites of active methane venting. Until recently, seismic reflection data were the main tool for imaging beneath the ridge. Coincident controlled source electromagnetic (CSEM), high-resolution two-dimensional (2-D) airgun, sweep-frequency SYSIF, and three-dimensional (3-D) p-cable seismic reflection data were acquired at the south-eastern part of the ridge between 2011 and 2013. The CSEM and seismic data contain profiles across and along the ridge, passing several active and inactive pockmarks. Joint interpretation of resistivity models obtained from CSEM and seismic reflection data provides new information regarding the fluid composition beneath the pockmarks. There is considerable variation in transverse resistance and seismic reflection character of the gas hydrate stability zone (GHSZ) between the ridge flanks and the chimneys beneath pockmarks. Layered seismic reflectors on the flanks are associated with around 300 Ωm^2 of transverse resistance, whereas the seismic reflectors within the chimneys exhibit amplitude blanking and chaotic patterns. The transverse resistance of the GHSZ within the chimneys varies between 400 and 1200 Ωm^2. Variance attributes obtained from the 3-D p-cable data also highlight faults and chimneys, which coincide with the resistivity anomalies. Based on the joint data interpretation, widespread gas hydrate presence is likely at the ridge, with both hydrates and free gas contained within the faults and chimneys. However, at the active chimneys the effect of gas likely dominates the resistive anomalies.

  6. Using Earthquake Analysis to Expand the Oklahoma Fault Database

    NASA Astrophysics Data System (ADS)

    Chang, J. C.; Evans, S. C.; Walter, J. I.

    2017-12-01

    The Oklahoma Geological Survey (OGS) is compiling a comprehensive Oklahoma Fault Database (OFD), which includes faults mapped in OGS publications, university thesis maps, and industry-contributed shapefiles. The OFD includes nearly 20,000 fault segments, but the work is far from complete. The OGS plans to incorporate other sources of data into the OFD, such as new faults from earthquake sequence analyses, geologic field mapping, active-source seismic surveys, and potential-field modeling. A comparison of Oklahoma seismicity and the OFD reveals that earthquakes in the state appear to nucleate on mostly unmapped or unknown faults. Here, we present faults derived from earthquake sequence analyses. From 2015 to the present, there has been a five-fold increase in real-time seismic stations in Oklahoma, which has greatly expanded and densified the state's seismic network. The current seismic network not only lowers our threshold for locating weaker earthquakes, but also allows us to better constrain focal plane solutions (FPS) from first-motion analyses. Using nodal planes from the FPS, HypoDD relocation, and historic seismic data, we can elucidate these previously unmapped seismogenic faults. As the OFD is a primary resource for various scientific investigations, the inclusion of seismogenic faults improves further derivative studies, particularly with respect to seismic hazards. Our primary focus is on four areas of interest, which have had M5+ earthquakes in recent Oklahoma history: Pawnee (M5.8), Prague (M5.7), Fairview (M5.1), and Cushing (M5.0). Subsequent areas of interest will include seismically active, data-rich areas, such as the central and north-central parts of the state.

  7. New constraints on the magmatic system beneath Newberry Volcano from the analysis of active and passive source seismic data, and ambient noise

    NASA Astrophysics Data System (ADS)

    Heath, B.; Toomey, D. R.; Hooft, E. E. E.

    2014-12-01

    Magmatic systems beneath arc-volcanoes are often poorly resolved by seismic imaging due to the small spatial scale and large magnitude of crustal heterogeneity in combination with field experiments that sparsely sample the wavefield. Here we report on our continued analysis of seismic data from a line of densely-spaced (~300 m), three-component seismometers installed on Newberry Volcano in central Oregon for ~3 weeks; the array recorded an explosive shot, ~20 teleseismic events, and ambient noise. By jointly inverting both active and passive-source travel time data, the resulting tomographic image reveals a more detailed view of the presumed rhyolitic magma chamber at ~3-5 km depth, previously imaged by Achauer et al. (1988) and Beachly et al. (2012). The magma chamber is elongated perpendicular to the trend of extensional faulting and encircled by hypocenters of small (M < 2) earthquakes located by PNSN. We also model teleseismic waveforms using a 2-D synthetic seismogram code to recreate anomalous amplitudes observed in the P-wave coda for sites within the caldera. Autocorrelation of ambient noise data also reveals large amplitude waveforms for a small but spatially grouped set of stations, also located within the caldera. On the basis of these noise observations and 2-D synthetic models, which both require slow seismic speeds at depth, we conclude that our tomographic model underestimates low-velocity anomalies associated with the inferred crustal magma chamber; this is due in large part to wavefront healing, which reduces observed travel time anomalies, and regularization constraints, which minimize model perturbations. Only by using various methods that interrogate different aspects of the seismic data are we able to more realistically constrain the complicated, heterogeneous volcanic system. In particular, modeling of waveform characteristics provides a better measure of the spatial scale and magnitude of crustal velocities near magmatic systems.

  8. Stress concentration on Intraplate Seismicity: Numerical Modeling of Slab-released Fluids in the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Saxena, A.; Choi, E.; Powell, C. A.

    2017-12-01

    The mechanism behind the seismicity of the New Madrid Seismic Zone (NMSZ), the major intraplate earthquake source in the Central and Eastern US (CEUS), is still debated, but new insights are being provided by recent tomographic studies involving USArray. A high-resolution tomography study by Nyamwandha et al. (2016) in the NMSZ indicates the presence of low (3-5%) upper mantle Vp and Vs anomalies in the depth range 100 to 250 km. The magnitudes of these anomalies are difficult to explain by temperature alone. Just as the low-velocity anomalies beneath northeast China are attributed to fluids released from the stagnant Pacific slab, water released from the stagnant Laramide slab, presently located at transition-zone depths beneath the CEUS, might be contributing to the low-velocity features in this region's upper mantle. Here, we investigate the potential impact of the slab-released fluids on the stresses at seismogenic depths using numerical modeling. We convert the tomographic results into a temperature field under various assumed values of spatially uniform water content. In more realistic cases, water content is added only where the converted temperature exceeds the melting temperature of olivine. Viscosities are then computed from the temperature and water content and supplied to our geodynamic models built with Pylith, an open-source code for crustal dynamics. The model results show that increasing water content weakens the upper mantle more than temperature alone and thus elevates the differential stress in the upper crust. These results can better explain the tomography results and the seismicity without invoking melting. We also invert the tomography results for the volume fraction of orthopyroxene and temperature and compare the resultant stresses with those for pure olivine. To enhance reproducibility, selected models in this study will be made available in the form of sharable and reproducible packages enabled by the EarthCube building block project GeoTrust.
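    The key physical link here is that water lowers upper-mantle viscosity at a given temperature. A schematic Arrhenius-type flow law illustrates the effect below; the prefactor, activation energy and water exponent are generic textbook-style values, not the flow-law calibration used in the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def diffusion_creep_viscosity(T_kelvin, c_oh_ppm=50.0, A=1.0e11, E=335.0e3, r=1.0):
    """Schematic olivine diffusion-creep viscosity eta = A * C_OH^-r * exp(E / (R T)),
    showing how added water (C_OH, in ppm H/Si) weakens the mantle at fixed temperature.
    All parameter values are illustrative placeholders."""
    T = np.asarray(T_kelvin, dtype=float)
    return A * c_oh_ppm ** (-r) * np.exp(E / (R * T))

# the same 1600 K mantle is about an order of magnitude weaker with 500 ppm than with 50 ppm
print(diffusion_creep_viscosity(1600.0, 50.0), diffusion_creep_viscosity(1600.0, 500.0))
```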

  9. Multiple plates subducting beneath Colombia, as illuminated by seismicity and velocity from the joint inversion of seismic and gravity data

    DOE PAGES

    Syracuse, Ellen M.; Maceira, Monica; Prieto, German A.; ...

    2016-04-12

    Subduction beneath the northernmost Andes in Colombia is complex. Based on seismicity distributions, multiple segments of slab appear to be subducting, and arc volcanism ceases north of 5° N. Here, we illuminate the subduction system through hypocentral relocations and Vp and Vs models resulting from the joint inversion of local body wave arrivals, surface wave dispersion measurements, and gravity data. The simultaneous use of multiple data types takes advantage of the differing sensitivities of each data type, resulting in velocity models that have improved resolution at both shallower and deeper depths than would result from traditional travel time tomography alone. The relocated earthquake dataset and velocity model clearly indicate a tear in the Nazca slab at 5° N, corresponding to a 250-km shift in slab seismicity and the termination of arc volcanism. North of this tear, the slab is flat, and it comprises slabs from two sources: the Nazca and Caribbean plates. The Bucaramanga nest, a small region with some of the most intense intermediate-depth seismicity globally, is associated with the boundary between these two plates and possibly with a zone of melting or elevated water content, based on reduced Vp and increased Vp/Vs. We also use the relocated seismicity to identify two new faults in the South American plate, one related to plate convergence and one highlighted by induced seismicity.

  10. Probabilistic Seismic Hazard Assessment of the Chiapas State (SE Mexico)

    NASA Astrophysics Data System (ADS)

    Rodríguez-Lomelí, Anabel Georgina; García-Mayordomo, Julián

    2015-04-01

    The Chiapas State, in southeastern Mexico, is a very seismically active region due to the interaction of three tectonic plates: North America, Cocos and Caribbean. We present a probabilistic seismic hazard assessment (PSHA) specifically performed to evaluate seismic hazard in the Chiapas State. The PSHA was based on a composite seismic catalogue homogenized to Mw, and a logic tree procedure was used to consider different seismogenic source models and ground motion prediction equations (GMPEs). The results were obtained in terms of peak ground acceleration as well as spectral accelerations. The earthquake catalogue was compiled from the International Seismological Center and the Servicio Sismológico Nacional de México. Two different seismogenic source zone (SSZ) models were devised based on a revision of the tectonics of the region and the available geomorphological and geological maps. The SSZs were finally defined by the analysis of geophysical data, resulting in two main SSZ models. The Gutenberg-Richter parameters for each SSZ were calculated from the declustered and homogenized catalogue, while the maximum expected earthquake was assessed from both the catalogue and geological criteria. Several worldwide and regional GMPEs for subduction and crustal zones were reviewed, and for each SSZ model we considered four possible combinations of GMPEs. Finally, hazard was calculated in terms of PGA and SA for 500-, 1000-, and 2500-year return periods for each branch of the logic tree using the CRISIS2007 software. The final hazard maps represent the mean values obtained from the two seismogenic and four attenuation models considered in the logic tree. For the three return periods analyzed, the maps locate the most hazardous areas in the Chiapas Central Pacific Zone, the Pacific Coastal Plain and the Motagua and Polochic Fault Zone; intermediate hazard values are found in the Chiapas Batholith Zone and in the Strike-Slip Faults Province. The hazard decreases towards the northeast across the Reverse Faults Province and up to the Yucatan Platform, where the lowest values are reached. We also produced uniform hazard spectra (UHS) for the three main cities of Chiapas. Tapachula presents the highest spectral accelerations, while Tuxtla Gutierrez and San Cristobal de las Casas show similar values. We conclude that seismic hazard in Chiapas is chiefly controlled by the subduction of the Cocos plate beneath the North America and Caribbean plates, which makes the coastal areas the most hazardous. Additionally, the Motagua and Polochic Fault Zones are also important, increasing the hazard particularly in southeastern Chiapas.
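    The final maps are the weighted mean over the logic-tree branches (two source models times four GMPE combinations). A minimal sketch of that averaging, and of reading a return-period ground motion off the resulting curve, is given below; the branch weights, the toy curves and the interpolation are illustrative assumptions, not the CRISIS2007 computation itself.

```python
import numpy as np

def mean_hazard_curve(branch_curves, branch_weights):
    """Weighted mean of annual exceedance rates over logic-tree branches.
    branch_curves: (n_branches, n_iml) hazard curves on a common PGA grid;
    branch_weights: branch weights summing to one."""
    w = np.asarray(branch_weights, dtype=float)
    return (w[:, None] * np.asarray(branch_curves, dtype=float)).sum(axis=0)

def pga_for_return_period(pga_grid, hazard_curve, return_period_yr):
    """Ground motion whose mean annual exceedance rate equals 1 / T.
    Hazard curves decrease with PGA, so both arrays are reversed for np.interp."""
    return np.interp(1.0 / return_period_yr, hazard_curve[::-1], pga_grid[::-1])

# toy example: 8 branches (2 source models x 4 GMPE sets), equal weights
pga = np.linspace(0.05, 1.0, 20)
branches = np.array([np.exp(-pga / s) * 1e-2 for s in np.linspace(0.1, 0.24, 8)])
mean_curve = mean_hazard_curve(branches, np.full(8, 1.0 / 8.0))
print(pga_for_return_period(pga, mean_curve, 500.0))
```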

  11. Compilation of Surface Creep on California Faults and Comparison of WGCEP 2007 Deformation Model to Pacific-North American Plate Motion

    USGS Publications Warehouse

    Wisely, Beth A.; Schmidt, David A.; Weldon, Ray J.

    2008-01-01

    This Appendix contains 3 sections that 1) document published observations of surface creep on California faults, 2) construct line integrals across the WG-07 deformation model to compare to the Pacific-North America plate motion, and 3) construct strain tensors of volumes across the WG-07 deformation model to compare to the Pacific-North America plate motion. Observation of creep on faults is a critical part of our earthquake rupture model because if a fault is observed to creep, the moment released as earthquakes is reduced from what would be inferred directly from the fault's slip rate. There is considerable debate about how representative creep measured at the surface during a short time period is of the whole fault surface through the entire seismic cycle (e.g. Hudnut and Clark, 1989). Observationally, it is clear that the amount of creep varies spatially and temporally on a fault. However, from a practical point of view a single creep rate is associated with a fault section, and the reduction in seismic moment generated by the fault is accommodated in seismic hazard models by reducing the surface area that generates earthquakes or by reducing the slip rate that is converted into seismic energy. WG-07 decided to follow the practice of past Working Groups and the National Seismic Hazard Map and used creep rate (where it was judged to be interseismic, see Table P1) to reduce the area of the fault surface that generates seismic events. In addition to following past practice, this decision allowed the Working Group to use a reduction of slip rate as a separate factor to accommodate aftershocks, post-seismic slip, possible aseismic permanent deformation along fault zones and other processes that are inferred to affect the entire surface area of a fault, and thus are better modeled as a reduction in slip rate. C-zones are also handled by a reduction in slip rate, because they are inferred to include regions of widely distributed shear that is not completely expressed as earthquakes large enough to model. Because the ratio of the rate of creep relative to the total slip rate is often used to infer the average depth of creep, the 'depth' of creep can be calculated and used to reduce the surface area of a fault that generates earthquakes in our model. This reduction of surface area of rupture is described by an 'aseismicity factor' assigned to each creeping fault in Appendix A. An aseismicity factor of less than 1 is only assigned to faults that are inferred to creep during the entire interseismic period. A single aseismicity factor was chosen for each section of the fault that creeps, by expert opinion from the observations documented here. Uncertainties were not determined for the aseismicity factor, and thus it represents an unmodeled (and difficult to model) source of error. This Appendix simply provides the documentation of known creep, the type and precision of its measurement, and attempts to characterize the creep as interseismic, afterslip, transient or triggered. Parts 2 and 3 of this Appendix compare the WG-07 deformation model and the seismic source model it generates to the strain generated by the Pacific-North America plate motion. The concept is that plate motion generates essentially all of the elastic strain in the vicinity of the plate boundary that can be released as earthquakes. Adding up the slip rates on faults and all other sources of deformation (such as C-zones and distributed 'background' seismicity) should approximately yield the plate motion.
This addition is usually accomplished by one of four approaches: 1) line integrals that sum deformation along discrete paths through the deforming zone between the two plates, 2) seismic moment tensors that add up seismic moment of a representative set of earthquakes generated by a crustal volume spanning the plate boundary, 3) strain tensors generated by adding up the strain associated with all of the faults in a crustal volume spanning the plate
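    The bookkeeping described above, reducing the earthquake-generating fault area by an aseismicity factor before converting slip rate into seismic moment rate, can be written in a few lines. The sketch below assumes a generic crustal shear modulus and a simple rectangular fault geometry; the numbers are illustrative, not values from the WG-07 model.

```python
import numpy as np

MU = 3.0e10   # crustal shear modulus in Pa (assumed generic value)

def seismic_moment_rate(length_km, seismo_depth_km, slip_rate_mm_yr, aseismicity=0.0):
    """Seismic moment rate (N m / yr) for a fault section, with creep handled as a
    reduction of the rupture area by the aseismicity factor (0 = fully locked)."""
    area_m2 = length_km * 1e3 * seismo_depth_km * 1e3 * (1.0 - aseismicity)
    return MU * area_m2 * slip_rate_mm_yr * 1e-3

# e.g. a 100 km section, 12 km seismogenic depth, 25 mm/yr, creeping over 30% of its area
print(seismic_moment_rate(100.0, 12.0, 25.0, aseismicity=0.3))
```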

  12. Interactions and triggering in a 3D rate and state asperity model

    NASA Astrophysics Data System (ADS)

    Dublanchet, P.; Bernard, P.

    2012-12-01

    Precise relocation of micro-seismicity and careful analysis of seismic source parameters have progressively imposed the concept of seismic asperities embedded in a creeping fault segment as one of the most important aspects of a realistic representation of micro-seismic sources. Another important issue concerning micro-seismic activity is the existence of robust empirical laws describing the temporal and magnitude distribution of earthquakes, such as the Omori law, the distribution of inter-event times and the Gutenberg-Richter law. In this framework, this study aims at understanding the statistical properties of earthquakes by generating synthetic catalogs with a 3D, quasi-dynamic, continuous rate-and-state asperity model that takes into account a realistic geometry of asperities. Our approach contrasts with the ETAS models (Kagan and Knopoff, 1981) usually implemented to produce earthquake catalogs, in the sense that the nonlinearity observed in rock friction experiments (Dieterich, 1979) is fully taken into account by the use of the rate-and-state friction law. Furthermore, our model differs from discrete fault models (Ziv and Cochard, 2006) because its continuity allows us to define realistic geometries and distributions of asperities by assembling sub-critical computational cells that always fail in a single event. Moreover, this model allows us to address the question of the influence of barriers and the distribution of asperities on event statistics. After recalling the main observations of asperities in the specific case of the Parkfield segment of the San Andreas Fault, we analyse the earthquake statistical properties computed for this area. Then, we present synthetic statistics obtained with our model that allow us to discuss the role of barriers in clustering and triggering phenomena among a population of sources. It appears that an effective barrier size, which depends on its frictional strength, controls the presence or absence, in the synthetic catalog, of statistical laws similar to what is observed for real earthquakes. As an application, we attempt to compare the synthetic statistics with the observed statistics of Parkfield in order to characterize what could be a realistic frictional model of the Parkfield area. More generally, we obtain synthetic statistical properties that are in agreement with power-law decays characterized by exponents that match observations at a global scale, showing that our mechanical model is able to provide new insights into the understanding of earthquake interaction processes in general.
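    For reference, the rate-and-state friction law at the core of such asperity models is the Dieterich-Ruina form, μ = μ0 + a·ln(v/v0) + b·ln(v0·θ/Dc), with the aging law governing the state variable θ. The sketch below writes down these constitutive relations and checks the steady-state velocity-weakening condition a − b < 0; the parameter values are generic laboratory-scale numbers, not those of the study.

```python
import numpy as np

def rate_state_friction(v, theta, a=0.005, b=0.01, dc=1e-4, mu0=0.6, v0=1e-6):
    """Dieterich-Ruina rate-and-state friction coefficient:
    mu = mu0 + a * ln(v / v0) + b * ln(v0 * theta / dc)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def aging_law(v, theta, dc=1e-4):
    """State evolution (aging law): d theta / dt = 1 - v * theta / dc."""
    return 1.0 - v * theta / dc

# at steady state (d theta/dt = 0, so theta_ss = dc / v) friction decreases with slip
# rate whenever a - b < 0, i.e. the fault is velocity weakening and can nucleate events
v = np.logspace(-9, -3, 7)
mu_ss = rate_state_friction(v, 1e-4 / v)
print(np.round(mu_ss, 4))   # decreases with v because a - b = -0.005
```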

  13. Seismic tomographic imaging of P- and S-waves velocity perturbations in the upper mantle beneath Iran

    NASA Astrophysics Data System (ADS)

    Alinaghi, Alireza; Koulakov, Ivan; Thybo, Hans

    2007-06-01

    The inverse tomography method has been used to study the P- and S-wave velocity structure of the crust and upper mantle beneath Iran. The method, based on the principle of source-receiver reciprocity, allows for tomographic studies of regions with a sparse distribution of seismic stations if the region has sufficient seismicity. The arrival times of body waves from earthquakes in the study area, as reported in the ISC catalogue (1964-1996) at all available epicentral distances, are used for the calculation of residual arrival times. Prior to inversion we relocated hypocentres based on a 1-D spherical Earth model, taking into account variable crustal thickness and surface topography. During the inversion, seismic sources are further relocated simultaneously with the calculation of velocity perturbations. With a series of synthetic tests we demonstrate the power of the algorithm and the data to reconstruct introduced anomalies using the ray paths of the real data set, taking into account measurement errors and outliers. The velocity anomalies show that the crust and upper mantle beneath the Iranian Plateau comprise a low-velocity domain between the Arabian Plate and the Caspian Block. This is in agreement with global tomographic models, and also with tectonic models in which the active Iranian Plateau is trapped between the stable Turan Plate in the north and the Arabian Shield in the south. Our results show clear evidence of the mainly aseismic subduction of the oceanic crust of the Oman Sea beneath the Iranian Plateau. However, along the Zagros suture zone, the subduction pattern is more complex than at Makran, where the collision of the two plates is highly seismic.

  14. Model uncertainties of the 2002 update of California seismic hazard maps

    USGS Publications Warehouse

    Cao, T.; Petersen, M.D.; Frankel, A.D.

    2005-01-01

    In this article we present and explore the source and ground-motion model uncertainty and parametric sensitivity for the 2002 update of the California probabilistic seismic hazard maps. Our approach is to implement a Monte Carlo simulation that allows for independent sampling from fault to fault in each simulation. The source-distance dependent characteristics of the uncertainty maps of seismic hazard are explained by the fundamental uncertainty patterns from four basic test cases, in which the uncertainties from one-fault and two-fault systems are studied in detail. The California coefficient of variation (COV, ratio of the standard deviation to the mean) map for peak ground acceleration (10% of exceedance in 50 years) shows lower values (0.1-0.15) along the San Andreas fault system and other class A faults than along class B faults (0.2-0.3). High COV values (0.4-0.6) are found around the Garlock, Anacapa-Dume, and Palos Verdes faults in southern California and around the Maacama fault and Cascadia subduction zone in northern California.
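    A coefficient of variation map of this kind comes from repeatedly resampling the source parameters and recomputing the hazard, sampling independently from fault to fault. The toy sketch below perturbs per-fault contributions to the annual exceedance rate at a fixed ground-motion level and reports std/mean; the lognormal perturbation model and the input numbers are illustrative assumptions, not the simulation scheme of the 2002 update.

```python
import numpy as np

rng = np.random.default_rng(0)

def hazard_rate_cov(mean_rates, sigma_ln=0.3, n_sims=10000):
    """Toy Monte Carlo coefficient of variation (std / mean) for the total annual
    exceedance rate at a fixed ground-motion level: each simulation independently
    perturbs every fault's contribution with a lognormal factor."""
    rates = np.asarray(mean_rates, dtype=float)
    factors = rng.lognormal(mean=0.0, sigma=sigma_ln, size=(n_sims, rates.size))
    totals = (factors * rates).sum(axis=1)
    return totals.std() / totals.mean()

# three faults contributing to the exceedance rate of 0.2 g at one site (toy numbers)
print(hazard_rate_cov([2.0e-3, 5.0e-4, 1.0e-4]))
```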

  15. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.

  16. A Hammer-Impact, Aluminum, Shear-Wave Seismic Source

    USGS Publications Warehouse

    Haines, Seth

    2007-01-01

    Near-surface seismic surveys often employ hammer impacts to create seismic energy. Shear-wave surveys using horizontally polarized waves require horizontal hammer impacts against a rigid object (the source) that is coupled to the ground surface. I have designed, built, and tested a source made out of aluminum and equipped with spikes to improve coupling. The source is effective in a variety of settings, and it is relatively simple and inexpensive to build.

  17. A probabilistic framework for single-station location of seismicity on Earth and Mars

    NASA Astrophysics Data System (ADS)

    Böse, M.; Clinton, J. F.; Ceylan, S.; Euchner, F.; van Driel, M.; Khan, A.; Giardini, D.; Lognonné, P.; Banerdt, W. B.

    2017-01-01

    Locating the source of seismic energy from a single three-component seismic station is associated with large uncertainties, originating from challenges in identifying seismic phases, as well as inevitable pick and model uncertainties. The challenge is even higher for planets such as Mars, where interior structure is a priori largely unknown. In this study, we address the single-station location problem by developing a probabilistic framework that combines location estimates from multiple algorithms to estimate the probability density function (PDF) for epicentral distance, back azimuth, and origin time. Each algorithm uses independent and complementary information in the seismic signals. Together, the algorithms allow locating seismicity ranging from local to teleseismic quakes. Distances and origin times of large regional and teleseismic events (M > 5.5) are estimated from observed and theoretical body- and multi-orbit surface-wave travel times. The latter are picked from the maxima in the waveform envelopes in various frequency bands. For smaller events at local and regional distances, only first arrival picks of body waves are used, possibly in combination with fundamental Rayleigh R1 waveform maxima where detectable; depth phases, such as pP or PmP, help constrain source depth and improve distance estimates. Back azimuth is determined from the polarization of the Rayleigh- and/or P-wave phases. When seismic signals are good enough for multiple approaches to be used, estimates from the various methods are combined through the product of their PDFs, resulting in an improved event location and reduced uncertainty range estimate compared to the results obtained from each algorithm independently. To verify our approach, we use both earthquake recordings from existing Earth stations and synthetic Martian seismograms. The Mars synthetics are generated with a full-waveform scheme (AxiSEM) using spherically-symmetric seismic velocity, density and attenuation models of Mars that incorporate existing knowledge of Mars internal structure, and include expected ambient and instrumental noise. While our probabilistic framework is developed mainly for application to Mars in the context of the upcoming InSight mission, it is also relevant for locating seismic events on Earth in regions with sparse instrumentation.
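    The combination step described above multiplies the independent single-algorithm PDFs and renormalizes. A minimal sketch for the epicentral-distance marginal is shown below, with two Gaussian stand-ins playing the role of a body-wave-based and a surface-wave-based estimate; the grid and the example numbers are purely illustrative.

```python
import numpy as np

def combine_pdfs(grid, pdfs):
    """Multiply independent single-algorithm PDFs defined on a common uniform grid
    and renormalise the product so it integrates to one."""
    combined = np.ones_like(grid, dtype=float)
    for pdf in pdfs:
        combined = combined * pdf
    dx = grid[1] - grid[0]
    norm = combined.sum() * dx
    return combined / norm if norm > 0 else combined

# two Gaussian stand-ins for a body-wave and a surface-wave distance estimate (degrees)
distance = np.linspace(0.0, 180.0, 721)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((distance - mu) / sigma) ** 2)

posterior = combine_pdfs(distance, [gaussian(42.0, 8.0), gaussian(47.0, 5.0)])
print(distance[np.argmax(posterior)])   # maximum a posteriori epicentral distance
```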

  18. Using a coupled hydro-mechanical fault model to better understand the risk of induced seismicity in deep geothermal projects

    NASA Astrophysics Data System (ADS)

    Abe, Steffen; Krieger, Lars; Deckert, Hagen

    2017-04-01

    Changes in fluid pressure related to the injection of fluids into the deep subsurface, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. An important aspect of the planning and operation of such projects, particularly in densely populated regions such as the Upper Rhine Graben in Germany, is therefore the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of the hydraulic properties of the subsurface, the mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we therefore employ a numerical model to investigate the impact of fluid pressure changes on the dynamics of faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well, based on a 3D finite-difference discretisation of the Darcy equation, with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two sequential steps. Initially, the fault model is run with a fixed deformation rate for a given duration, without the influence of the hydraulic model, in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress and displacement state are generated from the fault model. In a second step, these snapshots are used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. This set of models is then compared with the background event statistics to evaluate the change in the probability of seismic events. The event data, such as location, magnitude, and source characteristics, can be used as input for numerical wave propagation models. This allows the translation of the seismic event statistics generated by the model into ground-shaking probabilities.
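    The physical ingredient linking the hydraulic and mechanical parts is the reduction of effective normal stress by pore pressure, so that slip initiates once τ ≥ μ_s(σ_n − p). The sketch below checks that Coulomb criterion along a pore-pressure time series; it is a schematic stand-in for, not a reproduction of, the coupled block-slider model of the study, and all parameter values are illustrative.

```python
import numpy as np

def coulomb_failure_times(tau0, sigma_n, p_series, mu_static=0.6,
                          tectonic_rate=1.0e-3, dt=3600.0):
    """Return the times (s) at which a fault patch satisfies the Coulomb criterion
    tau >= mu_static * (sigma_n - p), with shear stress tau growing slowly at
    `tectonic_rate` (Pa/s) while injection raises the pore pressure p(t) (Pa)."""
    p = np.asarray(p_series, dtype=float)
    t = np.arange(p.size) * dt
    tau = tau0 + tectonic_rate * t
    strength = mu_static * (sigma_n - p)
    return t[tau >= strength]

# pore pressure ramping up by 2 MPa at a patch initially just below failure
pressure = np.linspace(0.0, 2.0e6, 1000)
print(coulomb_failure_times(tau0=29.0e6, sigma_n=50.0e6, p_series=pressure))
```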

  19. Ground Motion Prediction for M7+ scenarios on the San Andreas Fault using the Virtual Earthquake Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.

    2013-05-01

    There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves while enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then incorporate temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.

  20. Near Field Modeling for the Maule Tsunami from DART, GPS and Finite Fault Solutions (Invited)

    NASA Astrophysics Data System (ADS)

    Arcas, D.; Chamberlin, C.; Lagos, M.; Ramirez-Herrera, M.; Tang, L.; Wei, Y.

    2010-12-01

    The earthquake and tsunami of February 27, 2010, in central Chile have rekindled interest in developing techniques to predict the impact of near-field tsunamis along the Chilean coastline. Following the earthquake, several initiatives were proposed to increase the density of seismic, pressure and motion sensors along the South American trench, in order to provide field data that could be used to estimate tsunami impact on the coast. However, precisely how those data should be used to produce a quantitative assessment of coastal tsunami damage has not been clarified. The present work uses seismic data, Deep-ocean Assessment and Reporting of Tsunamis (DART®) records, and GPS measurements obtained during the Maule earthquake to initialize a number of tsunami inundation models along the rupture area, by expressing different versions of the seismic crustal deformation in terms of NOAA's tsunami unit source functions. Translation of all available real-time data into a feasible tsunami source is essential in near-field tsunami impact prediction, in which an impact assessment must be generated under very stringent time constraints. Inundation results from each source are then contrasted with field and tide gauge data by comparing arrival time, maximum wave height, maximum inundation and tsunami decay rate, using field data collected by the authors.

  1. The Slip Behavior and Source Parameters for Spontaneous Slip Events on Rough Faults Subjected to Slow Tectonic Loading

    NASA Astrophysics Data System (ADS)

    Tal, Yuval; Hager, Bradford H.

    2018-02-01

    We study the response to slow tectonic loading of rough faults governed by velocity-weakening rate-and-state friction, using a 2-D plane strain model. Our numerical approach accounts for all stages in the seismic cycle, and in each simulation we model a sequence of two or more earthquakes. We focus on the global behavior of the faults and find that as the roughness amplitude, br, increases and the minimum wavelength of roughness decreases, there is a transition from seismic to aseismic slip, in which the load on the fault is released by more slip events but with lower slip rate, lower seismic moment per unit length, M0,1d, and lower average static stress drop on the fault, Δτt. Even larger decreases with roughness are observed when these source parameters are estimated only for the dynamic stage of the rupture. For br ≤ 0.002, the source parameters M0,1d and Δτt decrease jointly, and the relationship between Δτt and the average fault strain is similar to that of a smooth fault. For faults with larger values of br that are completely ruptured during the slip events, the average fault strain generally decreases more rapidly with roughness than Δτt.
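    For reference, a standard form of the velocity-weakening rate-and-state friction used in such simulations (a generic Dieterich-Ruina formulation with the aging law; the study's specific parameter choices are not implied) is

        \tau = \sigma_n \left[ \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c} \right],
        \qquad
        \frac{d\theta}{dt} = 1 - \frac{V\,\theta}{D_c},

    with steady-state velocity weakening, and hence the potential for seismic slip, when a - b < 0.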

  2. Stress modulation of earthquakes: A study of long and short period stress perturbations and the crustal response

    NASA Astrophysics Data System (ADS)

    Johnson, Christopher W.

    Decomposing fault mechanical processes advances our understanding of active fault systems and properties of the lithosphere, thereby increasing the effectiveness of seismic hazard assessment and preventative measures implemented in urban centers. Along plate boundaries earthquakes are inevitable as tectonic forces reshape the Earth's surface. Earthquakes, faulting, and surface displacements are related systems that require multidisciplinary approaches to characterize deformation in the lithosphere. Modern geodetic instrumentation can resolve displacements to millimeter precision and provide valuable insight into secular deformation in near real-time. The expansion of permanent seismic networks, as well as temporary deployments, allows unprecedented detection of microseismic events that image fault interfaces and fracture networks in the crust. The research presented in this dissertation sits at the intersection of seismology and geodesy, studying the Earth's response to transient deformation and exploring research questions on earthquake triggering, induced seismicity, and seasonal loading using seismic data, geodetic data, and modeling tools. The focus is to quantify stress changes in the crust, explore seismicity rate variations and migration patterns, and model crustal deformation in order to characterize the evolving state of stress on faults and the migration of fluids in the crust. The problems investigated all address the question: why do earthquakes nucleate following a low-magnitude stress perturbation? Answers to this question are fundamental to understanding the time-dependent failure processes of the lithosphere. Dynamic triggering, the triggering of earthquakes through fault interaction and the transfer of stress from one system to another, occurs at both local and remote distances [Freed, 2005]. The passage of teleseismic surface waves from the largest earthquakes produces dynamic stress fields and provides a natural laboratory to explore the causal relationship between low-amplitude stress changes and dynamically triggered events. Interestingly, observations of dynamically triggered M≥5.5 earthquakes are absent in the seismic records [Johnson et al., 2015; Parsons and Velasco, 2011], which raises the question of whether large-magnitude events can be dynamically triggered at all. Emerging results in the literature indicate that undocumented M≥5.5 events at near to intermediate distances are dynamically triggered during the passage of surface waves but are undetected by automated networks [Fan and Shearer, 2016]. This raises new questions about the amplitude and duration of dynamic stressing for large-magnitude events. I use 35 years of global seismicity and find that rate increases for large events occur only after a delay from the transient load, suggesting that aseismic processes are associated with large-magnitude triggered events. To extend this finding I investigated three cases of large-magnitude delayed dynamic triggering following the M8.6 2012 Indian Ocean earthquake [Pollitz et al., 2012] by producing microseismicity catalogs and modeling the transient stresses. The results indicate immediate triggering of microseismic events that hours later culminate in a large-magnitude event, and they support the notion that large-magnitude events are triggerable by transient loading, but that seismic and aseismic processes (e.g., induced creep or fluid mobilization) contribute to the nucleation process.
    Open questions remain concerning the origin of the nucleation delay that follows a stress perturbation; both geodetic and seismic observations are required to constrain the source of delayed dynamic triggering and may provide insight into a precursory nucleation phase. Induced seismicity has gained much attention in the past 5 years as earthquake rates in regions of low tectonic strain accumulation accelerate to unprecedented levels [Ellsworth, 2013]. The source of the seismicity is attributed to shallow fluid injection associated with energy production. As hydrocarbon extraction continues to increase in the U.S., the deformation and induced seismicity from wastewater injection provide new avenues to explore crustal properties. The large-magnitude events associated with regions of high-rate injection support the notion that the crust is critically stressed. Seismic data in these areas provide the opportunity to delineate fault structures in the crust using precise earthquake locations. To augment the studies of transient loading cycles I investigated induced seismicity at The Geysers geothermal field in northern California. Using high-resolution hypocenter data, I implement an epidemic-type aftershock sequence (ETAS) model to develop seismicity-rate time series in the active geothermal field and characterize the migration of fluids from high-volume water injection. Subtle stress changes induced by thermo- and poroelastic strains trigger seismicity for 5 months after peak injection at depths of 3 km below the main injection interval. This suggests that vertical migration paths are maintained in the geothermal field that allow fluid propagation on annual time scales. Fully describing the migration pattern of fluids in the crust and the associated stresses is also applicable to tectonic faulting and triggered seismic activity. Seasonal hydrological loading is a source of annual periodic transient deformation that is ideal for investigating the modulation of seismicity. The initial step in exploring this modulation is to validate that a significant annual period does exist in California earthquake records. The periodicity results [Dutilleul et al., 2015] motivate continued investigation of seismically active regions that experience significant seasonal mass loading, i.e., high precipitation and snowfall rates, to quantify the magnitude of seasonal stress changes and their possible correlation with seismicity modulation. This research addresses questions concerning the strength and state of stress on faults. High-resolution water storage time series throughout California are developed using continuous GPS records. The results allow an estimation of the stress changes induced by hydrological loading, which is combined with a detailed focal mechanism analysis to characterize the modulation of seismicity. The hydrologic loading is augmented with the contributions of additional deformation sources (e.g., tides, atmospheric pressure, and temperature), and I find that annual stress changes of 5 kPa modulate seismicity, most notably on dip-slip structures. These observations suggest that mechanical differences exist between the vertically dipping strike-slip faults and the shallowly dipping oblique structures in California. Comparison of all the annual loading cycles makes it evident that future studies should incorporate all sources of solid Earth deformation to fully describe the stresses realized on fault systems that respond to seasonal loads.

  3. Earthquake Hazard and Risk in New Zealand

    NASA Astrophysics Data System (ADS)

    Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.

    2014-12-01

    To quantify risk in New Zealand we examine the impact of updating the seismic hazard model. The previous RMS New Zealand hazard model is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS model, based on Stirling et al. (2012), updates several key source parameters. These updates include: implementation of a new set of crustal faults including multi-segment ruptures, updated subduction zone geometry and recurrence rates, and new background rates together with a robust methodology for modeling background earthquake sources. The number of crustal faults has increased by over 200 from the 2002 model to the 2012 model, which now includes over 500 individual fault sources. This includes the addition of many offshore faults in the northern, east-central, and southwestern regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated model with those from the previous version. Changes between the two maps are discussed, as well as the drivers for these changes. We examine the impact the hazard model changes have on New Zealand earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the risk exposure in the country (Auckland) lies in the region of lowest hazard, where little is known about the location of faults and distributed seismicity is modeled by averaged Mw-frequency relationships on area sources. Thus small changes to the background rates can have a large impact on the risk profile for the area. Wellington, another area of high exposure, is particularly sensitive to how the Hikurangi subduction zone and the Wellington fault are modeled. Minor changes to these sources have substantial impacts on the risk profile of the city and the country at large.

  4. Alternative Energy Sources in Seismic Methods

    NASA Astrophysics Data System (ADS)

    Tün, Muammer; Pekkan, Emrah; Mutlu, Sunay; Ecevitoğlu, Berkan

    2015-04-01

    When the suitability of a settlement area is investigated, soil-amplification, liquefaction and fault-related hazards should be defined, and the associated risks should be clarified. For this reason, the soil engineering parameters and subsurface geological structure of a new settlement area should be investigated. In particular, faults covered with Quaternary alluvium, together with the thicknesses, shear-wave velocities and geometry of the subsurface sediments, can lead to soil amplification during an earthquake. Likewise, changes in shear-wave velocities along the basin are also very important. Geophysical methods can be used to determine the local soil properties. In this study, the use of alternative seismic energy sources for seismic reflection, seismic refraction and MASW surveys in the residential areas of Eskisehir, Turkey, is discussed. Two in-house seismic energy sources were developed under a scientific research project of Anadolu University: EAPSG (Electrically-Fired-PS-Gun), capable of firing 2x24 magnum shotgun cartridges at once to generate P and S waves, and WD-500, a truck-mounted 500 kg weight-drop source. We reached penetration depths of up to 1200 m with EAPSG and 800 m with WD-500 in our seismic reflection surveys. The WD-500 source was also used for MASW surveys, with a 24-channel configuration of 4.5 Hz vertical geophones spaced 10 m apart. We reached a penetration depth of 100 m in the MASW surveys.

  5. Seismic attenuation structure of the Seattle Basin, Washington State from explosive-source refraction data

    USGS Publications Warehouse

    Li, Q.; Wilcock, W.S.D.; Pratt, T.L.; Snelson, C.M.; Brocher, T.M.

    2006-01-01

    We used waveform data from the 1999 SHIPS (Seismic Hazard Investigation of Puget Sound) seismic refraction experiment to constrain the attenuation structure of the Seattle basin, Washington State. We inverted the spectral amplitudes of compressional- and shear-wave arrivals for source spectra, site responses, and one- and two-dimensional Q-1 models at frequencies between 1 and 40 Hz for P waves and 1 and 10 Hz for S waves. We also obtained Q-1 models from t* values calculated from the spectral slopes of P waves between 10 and 40 Hz. One-dimensional inversions show that Qp at the surface is 22 at 1 Hz, 130 at 5 Hz, and 390 at 20 Hz. The corresponding values at 18 km depth are 100, 440, and 1900. Qs at the surface is 16 and 160 at 1 Hz and 8 Hz, respectively, increasing to 80 and 500 at 18 km depth. The t* inversion yields a Qp model that is consistent with the amplitude inversions at 20 and 30 Hz. The basin geometry is clearly resolved in the t* inversion, but the amplitude inversions only imaged the basin structure after removing anomalously high-amplitude shots near Seattle. When these shots are removed, we infer that Q-1 values may be ~30% higher in the center of the basin than the one-dimensional models predict. We infer that seismic attenuation in the Seattle basin will significantly reduce ground motions at frequencies at and above 1 Hz, partially countering amplification effects within the basin.
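    For context, the t* parameter used here is the path-integrated attenuation, and it controls the high-frequency spectral decay; in standard attenuation theory (not specific to this study)

        t^* = \int_{\mathrm{ray}} \frac{dt}{Q}, \qquad A(f) = A_0(f)\, e^{-\pi f t^*},

    so the slope of the log amplitude spectrum between 10 and 40 Hz yields t*, and hence an average Q along each source-receiver path.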

  6. Necessity of using heterogeneous ellipsoidal Earth model with terrain to calculate co-seismic effect

    NASA Astrophysics Data System (ADS)

    Cheng, Huihong; Zhang, Bei; Zhang, Huai; Huang, Luyuan; Qu, Wulin; Shi, Yaolin

    2016-04-01

    Co-seismic deformation and stress changes, which reflect the elasticity of the Earth, are very important in earthquake dynamics and for other issues such as the evaluation of seismic risk, the fracture process, and earthquake triggering. Many researchers have studied dislocation theory and co-seismic deformation, producing the homogeneous half-space model, the stratified half-space model, the stratified spherical model, and others. In particular, the models of Okada (1992) and Wang (2003, 2006) are widely applied in calculating co-seismic and post-seismic effects. However, since neither the half-space model nor the layered model takes the Earth's curvature, heterogeneity or topography into consideration, large errors arise in calculating the co-seismic displacement of a great earthquake over its affected area. Meanwhile, the computational methods for calculating co-seismic strain and stress differ between the spherical and plane models. Here, we adopt the finite element method, which can handle the complex characteristics of rock (such as anisotropy and discontinuities) and varied conditions. We use an adaptive meshing technique to automatically refine the mesh at the fault, and we adopt an equivalent volume force to replace the dislocation source, which avoids the difficulty of handling the discontinuity surface with conventional methods (Zhang et al., 2015). We constructed an Earth model that includes the Earth's layered structure and curvature; the upper boundary is set as a free surface and the core-mantle boundary is subject to buoyancy forces. First, based on the precision requirements, we take a test model as an example: a strike-slip fault 500 km long and 50 km wide with 10 m of slip. Because of the curvature of the Earth, errors certainly occur in plane coordinates, as found in previous studies (Dong et al., 2014; Sun et al., 2012). However, we also find that: 1) the co-seismic displacement and strain are no longer symmetric at different latitudes in the plane model, while they are always theoretically symmetric in the spherical model; 2) the errors in co-seismic strain increase when the corresponding formulas are applied in plane coordinates: when the strike-slip fault is set along the equator, the maximum relative error can reach several tens of thousands of times at high latitudes, compared with about 30 times near the fault; 3) the pattern of strain changes has eight lobes while the error pattern has four lobes, with apparent distortion at high latitudes. Furthermore, the influences of the Earth's ellipticity, heterogeneity and terrain were calculated separately. In particular, the effect of terrain, which induces large differences, should not be overlooked in co-seismic calculations. Finally, taking all these factors into account, we calculated the co-seismic effects of the 2008 Wenchuan earthquake on its adjacent area and faults using the heterogeneous ellipsoidal Earth model with terrain.
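    The equivalent volume (body) force mentioned above follows the standard representation of a dislocation by a moment-density distribution (textbook form, e.g. Aki and Richards; not the authors' specific finite element implementation):

        f_i(\mathbf{x}) = -\frac{\partial}{\partial x_j}\left[ m_{ij}\,\delta(\mathbf{x}-\boldsymbol{\xi}) \right],
        \qquad
        m_{ij} = \lambda\,[u_k]\nu_k\,\delta_{ij} + \mu\left([u_i]\nu_j + [u_j]\nu_i\right),

    where [u] is the slip vector and ν the fault normal; distributing this force over the elements adjacent to the fault avoids introducing an explicit displacement discontinuity into the mesh.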

  7. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess very high convergence speed and a good capacity to escape local minima, and have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very few studies address this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and the source location in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require the approximation of Green's functions. The method interacts directly with a CPU-parallelized finite-difference forward-modelling engine and updates the model parameters according to GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied effectively to WMI and has unique advantages. Keywords: Micro-seismicity, Waveform matching inversion, gravitational search algorithm, parallel computation
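    A minimal sketch of the core GSA update (the generic algorithm of Rashedi et al., 2009; the misfit function below is a hypothetical stand-in for the waveform-matching forward engine) might look like:

        import numpy as np

        def misfit(model):
            """Hypothetical stand-in for the waveform-matching objective (lower is better)."""
            target = np.array([30.0, 60.0, -90.0, 1.0, 2.0, 3.0])  # strike, dip, rake, x, y, z
            return float(np.sum((model - target) ** 2))

        rng = np.random.default_rng(0)
        n_agents, n_dim, n_iter, G0, alpha = 20, 6, 200, 100.0, 20.0
        x = rng.uniform(-180.0, 180.0, (n_agents, n_dim))  # agent positions = candidate models
        v = np.zeros_like(x)

        for t in range(n_iter):
            fit = np.array([misfit(xi) for xi in x])
            best, worst = fit.min(), fit.max()
            m = (fit - worst) / (best - worst + 1e-12)     # better fit -> larger mass
            M = m / (m.sum() + 1e-12)
            G = G0 * np.exp(-alpha * t / n_iter)           # decaying gravitational constant
            a = np.zeros_like(x)
            for i in range(n_agents):
                for j in range(n_agents):
                    if i != j:
                        r = np.linalg.norm(x[i] - x[j])
                        a[i] += rng.random() * G * M[j] * (x[j] - x[i]) / (r + 1e-12)
            v = rng.random((n_agents, n_dim)) * v + a      # velocity and position updates
            x = x + v

        print("best model found:", x[np.argmin([misfit(xi) for xi in x])])

    In the application described above, each misfit evaluation corresponds to one parallel finite-difference forward simulation rather than the toy quadratic used here.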

  8. Comparison of the seafloor displacement from uniform and non-uniform slip models on tsunami simulation of the 2011 Tohoku-Oki earthquake

    NASA Astrophysics Data System (ADS)

    Ulutas, Ergin

    2013-01-01

    Numerical simulations of the recent tsunami caused by the 11 March 2011 off-shore Pacific coast of Tohoku (Tohoku-Oki) earthquake (Mw 9.0) have been performed using diverse co-seismic source models. Co-seismic source models proposed by various observational agencies and scholars are used to elucidate the effects of uniform and non-uniform slip models on the tsunami generation and propagation stages. Non-linear shallow water equations are solved with a finite difference scheme, using a computational grid with different cell sizes over GEBCO30 bathymetry data. The overall results obtained from the various tsunami simulation models are compared with the available real-time kinematic global positioning system (RTK-GPS) buoys, cabled deep ocean-bottom pressure gauges (OBPG), and Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys. The purpose of this study is to provide a brief overview of the major differences between point-source and finite-fault methodologies in the generation and simulation of tsunamis. Tests of the uniform and non-uniform slip assumptions indicate that average uniform slip models may be used for tsunami simulations offshore and far from the source region. Nevertheless, the heterogeneity of the slip distribution within the fault plane is important for the wave amplitude in the near field and should be investigated further.
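    For reference, the nonlinear shallow-water system solved by such finite-difference tsunami codes can be written (one-dimensional form without bottom friction, for brevity; not the exact discretization of this study) as

        \frac{\partial \eta}{\partial t} + \frac{\partial}{\partial x}\big[(h+\eta)\,u\big] = 0,
        \qquad
        \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial \eta}{\partial x} = 0,

    where η is the sea-surface elevation, h the still-water depth and u the depth-averaged velocity; the initial η is taken from the co-seismic seafloor displacement predicted by each source model.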

  9. Probabilistic seismic hazard assessments of Sabah, east Malaysia: accounting for local earthquake activity near Ranau

    NASA Astrophysics Data System (ADS)

    Khalil, Amin E.; Abir, Ismail A.; Ginsos, Hanteh; Abdel Hafiez, Hesham E.; Khan, Sohail

    2018-02-01

    Sabah state in eastern Malaysia, unlike most of the other Malaysian states, is characterized by relatively common seismological activity; an earthquake of moderate magnitude is generally experienced roughly every 20 years, originating mainly from two types of source, either local (e.g. Ranau and Lahad Datu) or regional (e.g. the Kalimantan and southern Philippines subduction zones). The seismicity map of Sabah shows the presence of two zones of distinctive seismicity, near Ranau (close to Kota Kinabalu) and near Lahad Datu in the southeast of Sabah. The seismicity record of Ranau begins in 1991, according to the international seismicity bulletins (e.g. the United States Geological Survey and the International Seismological Center), and this short record is not sufficient for seismic source characterization. Fortunately, active Quaternary fault systems have been delineated in the area. Hence, the seismicity of the area is modeled as line sources along these faults. Two main fault systems are believed to be the source of this activity, namely the Mensaban fault zone and the Crocker fault zone, in addition to some other faults in their vicinity. Seismic hazard assessment has become an important and necessary study for the extensive development projects in Sabah, especially given the presence of earthquake activity. A probabilistic seismic hazard assessment is adopted for the present work since it can provide the probability of exceeding various ground motion levels due to expected future large earthquakes. The results are presented in terms of spectral acceleration curves and uniform hazard curves for return periods of 500, 1000 and 2500 years. Since this is the first complete hazard study for the area, the output will serve as a baseline and standard for future strategic planning in the area.
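    For reference, hazard at a given return period follows the usual Poisson relation between the annual exceedance rate λ(a) of a ground-motion level a and the probability of exceedance in an exposure time t (standard PSHA convention, not specific to this study):

        P(\text{at least one exceedance in } t\ \mathrm{yr}) = 1 - e^{-\lambda(a)\,t},
        \qquad
        T_R = \frac{1}{\lambda(a)},

    so, for example, a 10% probability of exceedance in 50 years corresponds to a return period of about 475 years.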

  10. Modeling Explosion Induced Aftershocks

    NASA Astrophysics Data System (ADS)

    Kroll, K.; Ford, S. R.; Pitarka, A.; Walter, W. R.; Richards-Dinger, K. B.

    2017-12-01

    Many traditional earthquake-explosion discrimination tools are based on properties of the seismic waveforms or their spectral components. Common discrimination methods include estimates of body wave amplitude ratios, surface wave magnitude scaling, moment tensor characteristics, and depth. Such methods are limited by station coverage and noise. Ford and Walter (2010) proposed an alternate discrimination method based on using properties of aftershock sequences as a means of earthquake-explosion differentiation. Previous studies have shown that explosion sources produce fewer aftershocks that are generally smaller in magnitude compared to aftershocks of similarly sized earthquake sources (Jarpe et al., 1994; Ford and Walter, 2010). It has also been suggested that explosion-induced aftershocks have smaller Gutenberg-Richter b-values (Ryall and Savage, 1969) and that their rates decay faster than a typical Omori-like sequence (Gross, 1996). To discern whether these observations are generally true of explosions or are related to specific site conditions (e.g. explosion proximity to active faults, tectonic setting, crustal stress magnitudes) would require a thorough global analysis. Such a study, however, is hindered both by the lack of evenly distributed explosion sources and by the limited availability of global seismicity data. Here, we employ two methods to test the efficacy of explosions at triggering aftershocks under a variety of physical conditions. First, we use the earthquake rate equations from Dieterich (1994) to compute the rate of aftershocks related to an explosion source assuming a simple spring-slider model. We compare seismicity rates computed with these analytical solutions to those produced by the 3D, multi-cycle earthquake simulator RSQSim. We explore the relationship between geological conditions and the characteristics of the resulting explosion-induced aftershock sequence. We also test the hypothesis that aftershock generation depends on the frequency content of the passing dynamic seismic waves, as suggested by Parsons and Velasco (2009). Lastly, we compare all results for explosion-induced aftershocks with aftershocks generated by similarly sized earthquake sources. Prepared by LLNL under Contract DE-AC52-07NA27344.
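    A commonly used form of the Dieterich (1994) seismicity-rate framework referred to above is (generic form; the study's specific implementation may differ)

        R = \frac{r}{\gamma\,\dot{\tau}_r},
        \qquad
        d\gamma = \frac{1}{A\sigma}\left( dt - \gamma\, d\tau \right),

    where r is the background rate under the reference stressing rate \dot{\tau}_r; a sudden stress step Δτ (e.g. from an explosion) changes the state variable as γ → γ exp(−Δτ/Aσ), producing an Omori-like transient that decays back to the background rate.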

  11. Ground Motion Prediction Equations for Western Saudi Arabia from a Reference Model

    NASA Astrophysics Data System (ADS)

    Kiuchi, R.; Mooney, W. D.; Mori, J. J.; Zahran, H. M.; Al-Raddadi, W.; Youssef, S.

    2017-12-01

    Western Saudi Arabia is surrounded by several active seismic zones such as the Red Sea and the Gulf of Aqaba, where a destructive magnitude 7.3 event occurred in 1995. Over the last decade, the Saudi Geological Survey (SGS) has deployed a dense seismic network that has made it possible to monitor seismic activity more accurately. For example, the network has detected multiple seismic swarms beneath the volcanic fields in western Saudi Arabia. The most recent damaging event was a M5.7 earthquake that occurred in 2009 at Harrat Lunayyir. In terms of seismic hazard assessment, Zahran et al. (2015; 2016) presented a Probabilistic Seismic Hazard Assessment (PSHA) for western Saudi Arabia that was developed using published Ground Motion Prediction Equations (GMPEs) from areas outside of Saudi Arabia. In this study, we consider 41 earthquakes of M 3.0-5.4, recorded on 124 stations of the SGS network, to create a set of 442 peak ground acceleration (PGA) and peak ground velocity (PGV) records with epicentral distances ranging from 3 km to 400 km. We use the GMPE model BSSA14 (Boore et al., 2014) as a reference model to estimate our own best-fitting coefficients from a regression analysis using the events that occurred in western Saudi Arabia. For epicentral distances less than 100 km, our best-fitting model has different source scaling compared with the BSSA14 GMPE adjusted for the California region. In addition, our model indicates that peak amplitudes attenuate less in western Saudi Arabia than in California.
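    A minimal sketch of the kind of regression described above (a simplified functional form inspired by, but not identical to, BSSA14; the data values, pseudo-depth and coefficient layout are hypothetical):

        import numpy as np

        # Hypothetical records: magnitude M, epicentral distance R (km), observed PGA (g).
        M = np.array([3.2, 4.1, 4.8, 5.4, 3.9, 4.5])
        R = np.array([12.0, 45.0, 80.0, 150.0, 30.0, 220.0])
        pga = np.array([0.021, 0.015, 0.009, 0.004, 0.018, 0.001])

        h = 6.0                       # assumed pseudo-depth term (km)
        Rh = np.sqrt(R**2 + h**2)

        # ln(PGA) = c0 + c1*M + (c2 + c3*M)*ln(Rh) + c4*Rh   (c4 ~ anelastic attenuation)
        X = np.column_stack([np.ones_like(M), M, np.log(Rh), M * np.log(Rh), Rh])
        coeffs, *_ = np.linalg.lstsq(X, np.log(pga), rcond=None)
        print("fitted coefficients:", coeffs)

    In practice the regression would use the full 442-record data set, and GMPE development typically separates between-event and within-event variability with a mixed-effects formulation.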

  12. Seismic and Infrasound Location

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrowsmith, Stephen J.; Begnaud, Michael L.

    2014-03-19

    This presentation includes slides on Signal Propagation Through the Earth/Atmosphere Varies at Different Scales; 3D Seismic Models: RSTT; Ray Coverage (Pn); Source-Specific Station Corrections (SSSCs); RSTT Conclusions; SALSA3D (SAndia LoS Alamos) Global 3D Earth Model for Travel Time; Comparison of IDC SSSCs to RSTT Predictions; SALSA3D; Validation and Model Comparison; DSS Lines in the Siberian Platform; DSS Line CRA-4 Comparison; Travel Time Δak135; Travel Time Prediction Uncertainty; SALSA3D Conclusions; Infrasound Data Processing: An example event; Infrasound Data Processing: An example event; Infrasound Location; How does BISL work?; BISL: Application to the 2013 DPRK Test; and BISL: Ongoing Research.

  13. Heterogeneity of direct aftershock productivity of the main shock rupture

    NASA Astrophysics Data System (ADS)

    Guo, Yicun; Zhuang, Jiancang; Hirata, Naoshi; Zhou, Shiyong

    2017-07-01

    The epidemic type aftershock sequence (ETAS) model is widely used to describe and analyze the clustering behavior of seismicity. Instead of regarding large earthquakes as point sources, the finite-source ETAS model treats them as ruptures that extend in space. Each earthquake rupture consists of many patches, and each patch triggers its own aftershocks isotropically. We design an iterative algorithm to invert the unobserved fault geometry based on the stochastic reconstruction method. This model is applied to analyze the Japan Meteorological Agency (JMA) catalog during 1964-2014. We take six great earthquakes with magnitudes >7.5 after 1980 as finite sources and reconstruct the aftershock productivity patterns on each rupture surface. Comparing results from the point-source ETAS model, we find the following: (1) the finite-source model improves the data fitting; (2) direct aftershock productivity is heterogeneous on the rupture plane; (3) the triggering abilities of M5.4+ events are enhanced; (4) the background rate is higher in the off-fault region and lower in the on-fault region for the Tohoku earthquake, while high probabilities of direct aftershocks distribute all over the source region in the modified model; (5) the triggering abilities of five main shocks become 2-6 times higher after taking the rupture geometries into consideration; and (6) the trends of the cumulative background rate are similar in both models, indicating the same levels of detection ability for seismicity anomalies. Moreover, correlations between aftershock productivity and slip distributions imply that aftershocks within rupture faults are adjustments to coseismic stress changes due to slip heterogeneity.
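    For reference, the point-source ETAS conditional intensity that the finite-source variant generalizes is (standard formulation; notation may differ from the paper's)

        \lambda(t, x, y) = \mu(x, y) + \sum_{i:\, t_i < t} \kappa(m_i)\, g(t - t_i)\, f(x - x_i,\, y - y_i;\, m_i),
        \qquad
        \kappa(m) = A\, e^{\alpha (m - m_c)},

    where g is an Omori-type temporal kernel and f a spatial kernel; in the finite-source model the sum over point epicenters is replaced by a sum over the patches of each reconstructed rupture surface.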

  14. Coulomb stress transfer and accumulation on the Sagaing Fault, Myanmar, over the past 110 years and its implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Xiong, X.; Shan, B.; Zhou, Y. M.; Wei, S. J.; Li, Y. D.; Wang, R. J.; Zheng, Y.

    2017-05-01

    Myanmar is drawing rapidly increasing attention worldwide for its seismic hazard. The Sagaing Fault (SF), an active right-lateral strike-slip fault passing through Myanmar, has long been a source of serious seismic damage in the country. Seismic hazard assessment of this region is therefore of pivotal significance, particularly when the interaction and migration of earthquakes in time and space are taken into account. We investigated a series of 10 earthquakes with M > 6.5 that have occurred along the SF since 1906. Coulomb failure stress modeling reveals significant interactions among the earthquakes. After the 1906 earthquake, eight out of the nine subsequent earthquakes occurred in zones where stress had been enhanced by the preceding earthquakes, verifying that the earthquake-triggering hypothesis is applicable to the SF. Moreover, we identified three positively stressed seismic gaps on the central and southern SF, where the seismic hazard is increased.
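    The Coulomb failure stress change evaluated on receiver faults is defined in the usual way (standard convention; the friction value quoted is illustrative, not the paper's choice):

        \Delta \mathrm{CFS} = \Delta\tau + \mu'\,\Delta\sigma_n,

    where Δτ is the shear-stress change resolved in the slip direction, Δσ_n the normal-stress change (positive for unclamping) and μ' an effective friction coefficient, often taken near 0.4.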

  15. Seismic body wave separation in volcano-tectonic activity inferred by the Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, Paolo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona

    2015-04-01

    One of the main challenges in the volcano-seismological literature is to locate and characterize the source of volcano-tectonic seismic activity. This requires, at a minimum, identification of the onset of the main phases, i.e. the body waves. Many efforts have been made to achieve a clear separation of P and S phases, both from a theoretical point of view and by developing numerical algorithms suitable for specific cases (see, e.g., Küperkoch et al., 2012). Recently, a robust automatic procedure has been implemented for extracting the prominent seismic waveforms from continuously recorded signals, thus allowing the main phases to be picked. The intuitive notion of maximum non-Gaussianity is exploited by adopting techniques that involve higher-order statistics in the frequency domain, i.e., Convolutive Independent Component Analysis (CICA). This technique is successful for the blind source separation of convolutive mixtures. In the seismological framework, seismic signals are indeed regarded as the convolution of a source function with the path, site and instrument responses. In addition, time-delayed versions of the same source exist due to multipath propagation, typically caused by reverberations from obstacles. In this work, we focus on the Volcano Tectonic (VT) activity at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al., 2011). The activity was characterized by approximately 300 low-magnitude VT earthquakes (Md < 2; for the definition of duration magnitude, see Petrosino et al. 2008). Most of them were concentrated in distinct seismic sequences with hypocenters mainly clustered beneath the Solfatara-Accademia area, at depths ranging between 1 and 4 km b.s.l. The results show a clear separation of P and S phases: the technique not only allows identification of the S-P time delay, giving the timing of both phases, but also provides the independent waveforms of the P and S phases. This is an enormous advantage for all problems related to source inversion and location. In addition, the VT seismicity was accompanied by hundreds of LP events (characterized by spectral peaks in the 0.5-2 Hz frequency band) that were concentrated in a 7-day interval. The main interest is to establish whether the occurrence of LPs is limited to the swarm that reached a climax on 26-28 October, as indicated by Saccorotti et al. (2007), or whether it extends over a longer period. The waveforms automatically extracted with improved signal-to-noise ratio via CICA, coupled with automatic phase picking, allowed us to compile a more complete seismic catalog and to better quantify the seismic energy release, including the presence of LP events from the beginning of October until mid-November. Finally, a further check on the volcanic nature of the extracted signals is achieved by examining the seismological properties and the entropy content of the traces (Falanga and Petrosino, 2012; De Lauro et al., 2012). Our results allow us to move towards a full description of the complexity of the source, which can be used for hazard-model development and forecast-model testing, and they provide an illustrative example of the applicability of the CICA method to regions with low seismicity and high ambient noise.
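    The convolutive mixing model underlying CICA can be written as (generic blind-source-separation formulation, not tied to this particular data set)

        x_i(t) = \sum_{j} \sum_{k=0}^{K} a_{ij}(k)\, s_j(t-k)
        \quad\Longleftrightarrow\quad
        \mathbf{X}(f) = \mathbf{A}(f)\,\mathbf{S}(f),

    so the separation can be performed independently in each frequency bin by estimating an unmixing matrix W(f) ≈ A(f)^{-1} that maximizes the non-Gaussianity of the recovered components.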

  16. Seismic and potential field studies over the East Midlands

    NASA Astrophysics Data System (ADS)

    Kirk, Wayne John

    A seismic refraction profile was undertaken to investigate the source of an aeromagnetic anomaly located above the Widmerpool Gulf, East Midlands. Ten shots were fired into 51 stations at c. 1.5km spacing in a 70km profile during 41 days recording. The refraction data were processed using standard techniques to improve the data quality. A new filtering technique, known as Correlated Adaptive Noise Cancellation was tested on synthetic data and successfully applied to controlled source and quarry blast data. Study of strong motion data reveals that the previous method of site calibration is invalid. A new calibration technique, known as the Scaled Amplitude method is presented to provide safer charge size estimation. Raytrace modelling of the refraction data and two dimensional gravity interpretation confirms the presence of the Widmerpool Gulf but no support is found for the postulated intrusion. Two dimensional magnetic interpretation revealed that the aeromagnetic anomaly could be modelled with a Carboniferous igneous source. A Lower Palaeozoic refractor with a velocity of 6.0 km/s is identified at a maximum depth of c. 2.85km beneath the Widmerpool Gulf. Carboniferous and post-Carboniferous sediments within the gulf have velocities between 2.6-5.5 km/s with a strong vertical gradient. At the gulf margins, a refractor with a constant velocity of 5.2 km/s is identified as Dinantian limestone. A low velocity layer of proposed unaltered Lower Palaeozoics is identified beneath the limestone at the eastern edge of the Derbyshire Dome. The existence and areal extent of this layer are also determined from seismic reflection data. Image analysis of potential field data, presents a model identifying 3 structural provinces, the Midlands Microcraton, the Welsh and English Caledonides and a central region of complex linears. This model is used to explain the distribution of basement rocks determined from seismic and gravity profiles.

  17. Hazard assessment of long-period ground motions for the Nankai Trough earthquakes

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.

    2013-12-01

    We evaluate the seismic hazard for long-period ground motions associated with Nankai Trough earthquakes (M8-9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage due to strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan). Long-period ground motions are particularly amplified in basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is important to evaluate long-period ground motions, as well as strong motions and tsunami, for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. The 'characterized source model' refers to a source model including the source parameters necessary for reproducing the strong ground motions. The parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering various cases of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined from 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects. These parameters are important because our preliminary simulations are strongly affected by the rupture directivity. We apply the GMS (Ground Motion Simulator) system, which simulates seismic wave propagation with a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999). The grid spacing for the shallow region is 200 m horizontally and 100 m vertically. The grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not sufficiently represent short-period components, the reliable period range of the simulation should be interpreted with caution. Therefore, we consider periods longer than five seconds, rather than two seconds, for further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulations show a large variation of response spectra at a given site. This large variation implies that the ground motion is very sensitive to the scenario, and the variation must be studied to understand the seismic hazard. Further work will obtain hazard curves for the Nankai Trough earthquakes (M8-9) by applying probabilistic seismic hazard analysis to the simulation results.
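    The two-second validity limit follows from the usual grid-dispersion rule of thumb for finite-difference schemes (generic criterion; the minimum S-wave velocity used in the example is assumed, not taken from the model):

        f_{\max} \approx \frac{V_{S,\min}}{n\,\Delta h}, \qquad T_{\min} = \frac{1}{f_{\max}},

    e.g. with Δh = 200 m, n ≈ 5 grid points per minimum wavelength and an assumed V_{S,min} ≈ 500 m/s, f_max ≈ 0.5 Hz, i.e. periods of about two seconds and longer.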

  18. Seismic signature of turbulence during the 2017 Oroville Dam spillway erosion crisis

    NASA Astrophysics Data System (ADS)

    Goodling, Phillip J.; Lekic, Vedran; Prestegaard, Karen

    2018-05-01

    Knowing the location of large-scale turbulent eddies during catastrophic flooding events improves predictions of erosive scour. The erosion damage to the Oroville Dam flood control spillway in early 2017 is an example of the erosive power of turbulent flow. During this event, a defect in the simple concrete channel quickly eroded into a 47 m deep chasm. Erosion by turbulent flow is difficult to evaluate in real time, but near-channel seismic monitoring provides a tool to evaluate flow dynamics from a safe distance. Previous studies have had limited ability to identify source location or the type of surface wave (i.e., Love or Rayleigh wave) excited by different river processes. Here we use a single three-component seismometer method (frequency-dependent polarization analysis) to characterize the dominant seismic source location and seismic surface waves produced by the Oroville Dam flood control spillway, using the abrupt change in spillway geometry as a natural experiment. We find that the scaling exponent between seismic power and release discharge is greater following damage to the spillway, suggesting additional sources of turbulent energy dissipation excite more seismic energy. The mean azimuth in the 5-10 Hz frequency band was used to resolve the location of spillway damage. Observed polarization attributes deviate from those expected for a Rayleigh wave, though numerical modeling indicates these deviations may be explained by propagation up the uneven hillside topography. Our results suggest frequency-dependent polarization analysis is a promising approach for locating areas of increased flow turbulence. This method could be applied to other erosion problems near engineered structures as well as to understanding energy dissipation, erosion, and channel morphology development in natural rivers, particularly at high discharges.

  19. Seismic Hazard Analysis for Armenia and its Surrounding Areas

    NASA Astrophysics Data System (ADS)

    Klein, E.; Shen-Tu, B.; Mahdyiar, M.; Karakhanyan, A.; Pagani, M.; Weatherill, G.; Gee, R. C.

    2017-12-01

    The Republic of Armenia is located within the central part of a large, 800 km wide, intracontinental collision zone between the Arabian and Eurasian plates. Active deformation occurs along numerous structures in the form of faulting, folding, and volcanism distributed throughout the entire zone, from the Bitlis-Zagros suture belt to the Greater Caucasus Mountains and between the relatively rigid Black Sea and Caspian Sea blocks, without any single structure that can be claimed as predominant. In recent years, significant work has been done on mapping active faults and compiling and reviewing historic and paleoseismological studies in the region, especially in Armenia; these recent research contributions have greatly improved our understanding of the seismogenic sources and their characteristics. In this study we performed a seismic hazard analysis for Armenia and its surrounding areas using the latest detailed geological and paleoseismological information on active faults, strain rates estimated from kinematic modeling of GPS data, and all available historic earthquake data. The seismic source model uses a combination of characteristic earthquake and gridded seismicity models to take advantage of the detailed knowledge of the known faults while acknowledging the distributed deformation and regional tectonic environment of the collision zone. In addition, the fault model considers single- and multi-segment rupture scenarios, with earthquakes that can rupture any part of a multi-segment fault zone. The ground motion model uses a set of ground motion prediction equations (GMPEs) selected from a larger pool based on the assessment of each GMPE against the available strong-motion data in the region. The hazard is computed with GEM's OpenQuake engine. We will present final hazard results and discuss the uncertainties associated with the various input data and their impact on the hazard at various locations.

  20. Modelling sound propagation in the Southern Ocean to estimate the acoustic impact of seismic research surveys on marine mammals

    NASA Astrophysics Data System (ADS)

    Breitzke, Monika; Bohlen, Thomas

    2010-05-01

    Modelling sound propagation in the ocean is an essential tool to assess the potential risk of air-gun shots to marine mammals. Based on a 2.5-D finite-difference code, a full-waveform modelling approach is presented that determines both the sound exposure levels of single shots and the cumulative sound exposure levels of multiple shots fired along a seismic line. Band-limited point source approximations of compact air-gun clusters deployed by R/V Polarstern in polar regions are used as sound sources. Marine mammals are simulated as static receivers. Applications to deep and shallow water models, including constant and depth-dependent sound velocity profiles of the Southern Ocean, show dipole-like directivities for single shots and tubular cumulative sound exposure level fields beneath the seismic line for multiple shots. Compared with a semi-infinite model, incorporating seafloor reflections enhances the seismically induced noise levels close to the sea surface. Refraction due to sound velocity gradients and sound channelling in near-surface ducts are evident, but affect only low to moderate levels. Hence, exposure zone radii derived for different hearing thresholds are almost independent of the sound velocity structure. With decreasing thresholds, the radii increase according to a spherical 20 log10 r law for single shots and a cylindrical 10 log10 r law for multiple shots. Doubling the shot interval reduces the cumulative sound exposure levels by 3 dB and halves the radii. Ocean-bottom properties only slightly affect the radii in shallow water if the normal-incidence reflection coefficient exceeds 0.2.
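    The two spreading laws quoted above correspond to the standard geometric-spreading relations (generic forms; the reference distance r0 is illustrative)

        \mathrm{SEL}(r) = \mathrm{SEL}(r_0) - 20\,\log_{10}(r/r_0) \quad\text{(spherical, single shot)},
        \qquad
        \mathrm{SEL}_{\mathrm{cum}}(r) = \mathrm{SEL}_{\mathrm{cum}}(r_0) - 10\,\log_{10}(r/r_0) \quad\text{(cylindrical, multiple shots)},

    so lowering the threshold by 20 dB enlarges a single-shot exposure radius by a factor of 10, but a multi-shot radius by a factor of 100.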

  1. A Fracture-Mechanical Model of Crack Growth and Interaction: Application to Pre-eruptive Seismicity

    NASA Astrophysics Data System (ADS)

    Matthews, C.; Sammonds, P.; Kilburn, C.

    2007-12-01

    A greater understanding of the physical processes occurring within a volcano is a key aspect in the success of eruption forecasting. By considering the role of fracture growth, interaction and coalescence in the formation of dykes and conduits as well as the source mechanism for observed seismicity we can create a more general, more applicable model for precursory seismicity. The frequency of volcano-tectonic earthquakes, created by fracturing of volcanic rock, often shows a short-term increase prior to eruption. Using fracture mechanics, the model presented here aims to determine the conditions necessary for the acceleration in fracture events which produces the observed pre-eruptive seismicity. By focusing on the cause of seismic events rather than simply the acceleration patterns observed, the model also highlights the distinction between an accelerating seismic sequence ending with an eruption and a short-term increase which returns to background levels with no activity occurring, an event also observed in the field and an important capability if false alarms are to be avoided. This 1-D model explores the effects of a surrounding stress field and the distribution of multi-scale cracks on the interaction and coalescence of these cracks to form an open pathway for magma ascent. Similarly to seismic observations in the field, and acoustic emissions data from the laboratory, exponential and hyperbolic accelerations in fracturing events are recorded. Crack distribution and inter-crack distance appears to be a significant controlling factor on the evolution of the fracture network, dominating over the effects of a remote stress field. The generality of the model and its basis on fundamental fracture mechanics results makes it applicable to studies of fracture networks in numerous situations. For example looking at the differences between high temperature fracture processes and purely brittle failure the model can be similarly applied to fracture dynamics in the edifice of a long repose volcano and a lava dome.

  2. Numerical assessment of the influence of different joint hysteretic models over the seismic behaviour of Moment Resisting Steel Frames

    NASA Astrophysics Data System (ADS)

    Giordano, V.; Chisari, C.; Rizzano, G.; Latour, M.

    2017-10-01

    The main aim of this work is to understand how the prediction of the seismic performance of moment-resisting (MR) steel frames depends on the modelling of their dissipative zones when the structure geometry (number of stories and bays) and seismic excitation source vary. In particular, a parametric analysis involving 4 frames was carried out, and, for each one, the full-strength beam-to-column connections were modelled according to 4 numerical approaches with different degrees of sophistication (Smooth Hysteretic Model, Bouc-Wen, Hysteretic and simple Elastic-Plastic models). Subsequently, Incremental Dynamic Analyses (IDA) were performed by considering two different earthquakes (Spitak and Kobe). The preliminary results collected so far pointed out that the influence of the joint modelling on the overall frame response is negligible up to interstorey drift ratio values equal to those conservatively assumed by the codes to define conventional collapse (0.03 rad). Conversely, if more realistic ultimate interstorey drift values are considered for the q-factor evaluation, the influence of joint modelling can be significant, and thus may require accurate modelling of its cyclic behavior.
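    For reference, the Bouc-Wen idealization used as one of the four joint models expresses the restoring force through a hysteretic variable z (standard form; the parameter symbols are generic, not the study's calibrated values):

        F(t) = \alpha\, k\, x(t) + (1-\alpha)\, k\, z(t),
        \qquad
        \dot{z} = A\,\dot{x} - \beta\,|\dot{x}|\,|z|^{\,n-1} z - \gamma\,\dot{x}\,|z|^{\,n},

    where α is the post- to pre-yield stiffness ratio and A, β, γ, n control the shape and smoothness of the hysteresis loops.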

  3. Insight into subdecimeter fracturing processes during hydraulic fracture experiment in Äspö hard rock laboratory, Sweden

    NASA Astrophysics Data System (ADS)

    Kwiatek, Grzegorz; Martínez-Garzón, Patricia; Plenkers, Katrin; Leonhardt, Maria; Zang, Arno; Dresen, Georg; Bohnhoff, Marco

    2017-04-01

    We analyze the nano- and picoseismicity recorded during a hydraulic fracturing in-situ experiment performed in Äspö Hard Rock Laboratory, Sweden. The fracturing experiment included six fracture stages driven by three different water injection schemes (continuous, progressive and pulse pressurization) and was performed inside a 28 m long, horizontal borehole located at 410 m depth. The fracturing process was monitored with two different seismic networks covering a wide frequency band between 0.01 Hz and 100000 Hz and included broadband seismometers, geophones, high-frequency accelerometers and acoustic emission sensors. The combined seismic network allowed for detection and detailed analysis of seismicity with moment magnitudes MW<-4 (source sizes approx. on cm scale) that occurred solely during the hydraulic fracturing and refracturing stages. We relocated the seismicity catalog using the double-difference technique and calculated the source parameters (seismic moment, source size, stress drop, focal mechanism and seismic moment tensors). The physical characteristics of induced seismicity are compared to the stimulation parameters and to the formation parameters of the site. The seismic activity varies significantly depending on stimulation strategy with conventional, continuous stimulation being the most seismogenic. We find a systematic spatio-temporal migration of microseismic events (propagation away and towards wellbore injection interval) and temporal transitions in source mechanisms (opening - shearing - collapse) both being controlled by changes in fluid injection pressure. The derived focal mechanism parameters are in accordance with the local stress field orientation, and signify the reactivation of pre-existing rock flaws. The seismicity follows statistical and source scaling relations observed at different scales elsewhere, however, at an extremely low level of seismic efficiency.
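    The source parameters listed above are typically obtained from far-field spectra through the standard relations (generic Brune-type spectral model; the constant k depends on the assumed rupture model):

        M_0 = \mu\, A\, \bar{D}, \qquad
        M_W = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right), \qquad
        r = \frac{k\,\beta}{f_c}, \qquad
        \Delta\sigma = \frac{7}{16}\,\frac{M_0}{r^{3}},

    with M_0 in N·m, f_c the corner frequency, β the shear-wave speed and k of order 0.2-0.4 depending on the assumed circular-crack model (e.g. Brune or Madariaga).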

  4. Analysis of induced seismicity in geothermal reservoirs – An overview

    USGS Publications Warehouse

    Zang, Arno; Oye, Volker; Jousset, Philippe; Deichmann, Nicholas; Gritto, Roland; McGarr, Arthur F.; Majer, Ernest; Bruhn, David

    2014-01-01

    In this overview we report results of analysing induced seismicity in geothermal reservoirs in various tectonic settings within the framework of the European Geothermal Engineering Integrating Mitigation of Induced Seismicity in Reservoirs (GEISER) project. In the reconnaissance phase of a field, the subsurface fault mapping, in situ stress and the seismic network are of primary interest in order to help assess the geothermal resource. The hypocentres of the observed seismic events (seismic cloud) are dependent on the design of the installed network, the used velocity model and the applied location technique. During the stimulation phase, the attention is turned to reservoir hydraulics (e.g., fluid pressure, injection volume) and its relation to larger magnitude seismic events, their source characteristics and occurrence in space and time. A change in isotropic components of the full waveform moment tensor is observed for events close to the injection well (tensile character) as compared to events further away from the injection well (shear character). Tensile events coincide with high Gutenberg-Richter b-values and low Brune stress drop values. The stress regime in the reservoir controls the direction of the fracture growth at depth, as indicated by the extent of the seismic cloud detected. Stress magnitudes are important in multiple stimulation of wells, where little or no seismicity is observed until the previous maximum stress level is exceeded (Kaiser Effect). Prior to drilling, obtaining a 3D P-wave (Vp) and S-wave velocity (Vs) model down to reservoir depth is recommended. In the stimulation phase, we recommend to monitor and to locate seismicity with high precision (decametre) in real-time and to perform local 4D tomography for velocity ratio (Vp/Vs). During exploitation, one should use observed and model induced seismicity to forward estimate seismic hazard so that field operators are in a position to adjust well hydraulics (rate and volume of the fluid injected) when induced events start to occur far away from the boundary of the seismic cloud.

  5. Fault-based PSHA of an active tectonic region characterized by low deformation rates: the case of the Lower Rhine Graben

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Vleminckx, Bart; Camelbeeck, Thierry

    2016-04-01

    The Lower Rhine Graben (LRG) is one of the few regions in intraplate NW Europe where seismic activity can be linked to active faults, yet probabilistic seismic hazard assessments of this region have hitherto been based on area-source models, in which the LRG is modeled as a single zone or a small number of seismotectonic zones with uniform seismicity. While fault-based PSHA has become common practice in more active regions of the world (e.g., California, Japan, New Zealand, Italy), knowledge of active faults has been lagging behind in other regions, due to an incomplete tectonic inventory, a low level of seismicity, a lack of systematic fault parameterization, or a combination thereof. Over the past few years, efforts have increasingly been directed toward the inclusion of fault sources in PSHA in these regions as well, in order to predict hazard on a more physically sound basis. In Europe, the EC project SHARE ("Seismic Hazard Harmonization in Europe", http://www.share-eu.org/) represented an important step forward in this regard. In the framework of this project, we previously compiled the first parameterized fault model for the LRG that can be applied in PSHA. We defined 15 fault sources based on major stepovers, bifurcations, gaps, and important changes in strike, dip direction or slip rate. Based on the available data, we were able to place reasonable bounds on the parameters required for time-independent PSHA: length, width, strike, dip, rake, slip rate, and maximum magnitude. With long-term slip rates remaining below 0.1 mm/yr, the LRG can be classified as a low-deformation-rate structure. Information on recurrence interval and elapsed time since the last major earthquake is lacking for most faults, impeding time-dependent PSHA. We consider different models to construct the magnitude-frequency distribution (MFD) of each fault: a slip-rate constrained form of the classical truncated Gutenberg-Richter MFD (Anderson & Luco, 1983) versus a characteristic MFD following Youngs & Coppersmith (1985). The summed Anderson & Luco fault MFDs show remarkably good agreement with the MFD obtained from the historical and instrumental catalog for the entire LRG, whereas the summed Youngs & Coppersmith MFD clearly underpredicts low to moderate magnitudes, but yields higher occurrence rates for M > 6.3 than would be obtained by simple extrapolation of the catalog MFD. The moment rate implied by the Youngs & Coppersmith MFDs is about three times higher, but is still within the range allowed by current GPS uncertainties. Using the open-source hazard engine OpenQuake (http://openquake.org/), we compute hazard maps for return periods of 475, 2475, and 10,000 yr, and for spectral periods of 0 s (PGA) and 1 s. We explore the impact of various parameter choices, such as MFD model, GMPE distance metric, and inclusion of a background zone to account for lower magnitudes, and we also compare the results with hazard maps based on area-source models. References: Anderson, J. G., and J. E. Luco (1983), Consequences of slip rate constraints on earthquake occurrence relations, Bull. Seismol. Soc. Am., 73(2), 471-496. Youngs, R. R., and K. J. Coppersmith (1985), Implications of fault slip rates and earthquake recurrence models to probabilistic seismic hazard estimates, Bull. Seismol. Soc. Am., 75(4), 939-964.
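
    The slip-rate constraint on the fault MFDs can be illustrated with a simplified moment-balance sketch: the fault moment rate (shear modulus x area x slip rate) fixes the overall level of a truncated Gutenberg-Richter distribution. This is only a schematic stand-in for the Anderson & Luco (1983) formulation, and the fault dimensions, slip rate and b-value below are hypothetical.

      import numpy as np

      MU = 3.0e10  # shear modulus, Pa

      def m0_from_mw(mw):
          """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori, 1979)."""
          return 10.0 ** (1.5 * mw + 9.1)

      def truncated_gr_rates(slip_rate_m_yr, area_m2, b, m_min, m_max, dm=0.1):
          """Incremental event rates (per year) of a truncated Gutenberg-Richter MFD,
          scaled so the summed moment release balances the fault moment rate.
          Simplified moment-balance sketch, not the exact Anderson & Luco (1983) form."""
          moment_rate = MU * area_m2 * slip_rate_m_yr            # N*m per year
          mags = np.arange(m_min, m_max + dm / 2.0, dm)
          rel = 10.0 ** (-b * mags)                              # relative incremental rates
          scale = moment_rate / np.sum(rel * m0_from_mw(mags))   # events/yr normalisation
          return mags, scale * rel

      # Hypothetical LRG-like fault: 40 km x 15 km, 0.05 mm/yr slip rate
      mags, rates = truncated_gr_rates(0.05e-3, 40e3 * 15e3, b=0.9, m_min=4.0, m_max=6.8)
      print(f"rate of M >= 6.0: {rates[mags >= 6.0].sum():.2e} per year")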

  6. New Insights on Mt. Etna's Crust and Relationship with the Regional Tectonic Framework from Joint Active and Passive P-Wave Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Díaz-Moreno, A.; Barberi, G.; Cocina, O.; Koulakov, I.; Scarfì, L.; Zuccarello, L.; Prudencio, J.; García-Yeguas, A.; Álvarez, I.; García, L.; Ibáñez, J. M.

    2018-01-01

    In the Central Mediterranean region, the production of chemically diverse volcanic products (e.g., those from Mt. Etna and the Aeolian Islands archipelago) testifies to the complexity of the tectonic and geodynamic setting. Despite the large number of studies that have focused on this area, the relationships among volcanism, tectonics, magma ascent, and geodynamic processes remain poorly understood. We present a tomographic inversion of P-wave velocity using active and passive sources. Seismic signals were recorded using both temporary on-land and ocean-bottom seismometers and data from a permanent local seismic network consisting of 267 seismic stations. Active seismic signals were generated by air-gun shots fired from the Spanish oceanographic vessel 'Sarmiento de Gamboa'. Passive seismic sources consisted of 452 local earthquakes recorded over a 4-month period. In total, 184,797 active P-phase and 11,802 passive P-phase first arrivals were inverted to provide three different velocity models. Our results include the first active-source crustal seismic tomography for the northern Sicily area, including the Peloritan-southern Calabria region and both the Mt. Etna and Aeolian volcanic environments. The tomographic images provide a detailed and complete regional seismotectonic framework and highlight a spatially heterogeneous tectonic regime, which is consistent with and extends the findings of previous models. One of our most significant results was a tomographic map extending to 14 km depth showing a discontinuity striking roughly NW-SE, extending from the Gulf of Patti to the Ionian Sea, south-east of Capo Taormina, corresponding to the Aeolian-Tindari-Letojanni fault system, a regional deformation belt. Moreover, for the first time, we observed a high-velocity anomaly located in the south-eastern sector of the Mt. Etna region, offshore of the Timpe area, which is compatible with the plumbing system of an ancient shield volcano located offshore of Mt. Etna.

  7. Investigation of cortical structures at Etna Volcano through the analysis of array and borehole data.

    NASA Astrophysics Data System (ADS)

    Zuccarello, Luciano; Paratore, Mario; La Rocca, Mario; Ferrari, Ferruccio; Messina, Alfio Alex; Galluzzo, Danilo; Contrafatto, Danilo; Rapisarda, Salvatore

    2015-04-01

    Continuous monitoring of seismic activity is fundamental for detecting the most common signals possibly related to volcanic activity, such as volcano-tectonic earthquakes, long-period events, and volcanic tremor. Reliable back-propagation of the ray path from the recording site to the source is, however, strongly limited by poor knowledge of the local shallow velocity structure. In volcanic environments the shallowest few hundred meters of rock are usually characterized by strongly variable mechanical properties, so the propagation of seismic signals through these shallow layers is strongly affected by lateral heterogeneity, attenuation, scattering, and interaction with the free surface. Driven by these motivations, between May and October 2014 we deployed a seismic array in the area called "Pozzo Pitarrone", where two seismic stations of the local monitoring network are installed, one at the surface and one in a borehole at a depth of about 130 m. The Pitarrone borehole is located on the mid-northeastern flank along one of the main intrusion zones of Etna volcano, the so-called NE rift. With the array we recorded seismic signals coming from the summit craters, and also from the nearby seismogenic Pernicana Fault. We used the array data to analyse the dispersion characteristics of ambient noise and derived one-dimensional (1D) shallow shear-velocity profiles through the inversion of dispersion curves measured with the spatial autocorrelation (SPAC) method. We observed a one-dimensional variation of shear velocity between 430 m/s and 700 m/s down to an investigation depth of about 130 m. An abrupt velocity variation was found at a depth of about 60 m, probably corresponding to the transition between two different layers. Our preliminary results suggest a good correlation between the derived velocity model and the stratigraphic section at Etna. The analysis of the entire data set will improve our knowledge of (i) the structure of the top layer and its relationship with geology, (ii) the signal-to-noise ratio (SNR) of volcanic signals as a function of frequency, (iii) the deformation of seismic ray paths caused by the interaction of seismic waves with the free surface, and (iv) the attenuation of seismic signals correlated with the volcanic activity. Moreover, knowledge of the shallow velocity model could improve the study of the source mechanisms of low-frequency events (VLP, LP and volcanic tremor) and contribute to the seismic monitoring of Etna volcano through the detection and location of seismic sources using 3D array techniques.
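
    For reference, the SPAC method mentioned above relates the azimuthally averaged coherency at inter-station distance r to a Bessel function, rho(f, r) = J0(2*pi*f*r/c(f)). The sketch below inverts that relation for phase velocity at a single frequency, restricted to the first lobe of J0; the frequency, station spacing and velocity are hypothetical.

      import numpy as np
      from scipy.special import j0
      from scipy.optimize import brentq

      def phase_velocity_from_spac(coherency, freq_hz, r_m, c_max=5000.0):
          """Invert rho = J0(2*pi*f*r/c) for phase velocity c (m/s), keeping the
          Bessel argument inside the first lobe of J0 (below its first zero, ~2.4048)."""
          k = 2.0 * np.pi * freq_hz * r_m
          c_min = k / 2.4048  # smallest c for which the argument stays in the first lobe
          def misfit(c):
              return j0(k / c) - coherency
          return brentq(misfit, c_min * 1.0001, c_max)

      # Synthetic forward/inverse check: 4 Hz, 40 m spacing, c = 550 m/s (hypothetical)
      c_true = 550.0
      rho = j0(2.0 * np.pi * 4.0 * 40.0 / c_true)
      print(f"recovered phase velocity: {phase_velocity_from_spac(rho, 4.0, 40.0):.1f} m/s")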

  8. Upper Mantle of the Central Part of the Russian Platform by Receiver Function Data.

    NASA Astrophysics Data System (ADS)

    Goev, Andrey; Kosarev, Grigoriy; Sanina, Irina; Riznichenko, Oksana

    2017-04-01

    The study of the upper mantle of the Russian Platform (RP) with seismic methods remains limited due to the lack of broadband seismic stations. Existing velocity models have been obtained using P-wave travel times from seismic events interpreted as explosions recorded at the NORSAR array in 1974-75. Another source of information is deep seismic sounding data from long-range profiles (exceeding 3000 km), such as QUARTZ, RUBIN-1 and GLOBUS, which used peaceful nuclear explosions (PNE) as sources. However, data at offsets larger than 1500 km were acquired only in the northern part of the RP. While useful, these velocity models have low spatial resolution. This study analyzes and integrates the existing RP upper mantle velocity models, with the main focus on the central region. We also discuss how completely the LITHO 1.0 model covers the RP area. Based on the results of our analysis, we conclude that it is necessary to obtain up-to-date velocity models of the upper mantle from broadband stations located in the central part of the RP, using Vp/Vs ratio data and anisotropy parameters for robust estimation of the mantle boundaries. By applying the joint inversion of receiver-function (RF) data, travel-time residuals and surface-wave dispersion curves, we obtain new models down to 300 km depth at the locations of broadband seismic stations in the central part of the RP. We used the IRIS stations OBN and ARU, along with MHV and the mobile array NOV. For each station we attempt to determine the thickness of the lithosphere and to locate the low-velocity layer (LVL), the lithosphere-asthenosphere boundary (LAB), and the Lehmann and Hales boundaries, as well as the discontinuities of the transition zone at depths of 410 and 660 km. We also investigate whether short-period and broadband RFs need to be used separately for more robust estimation of the upper mantle velocity model. This publication is based on work supported by the Russian Foundation for Basic Research (RFBR), project 15-05-04938, and by the leading scientific school NS-3345.2014.5.

  9. Viterbi sparse spike detection and a compositional origin to ultralow-velocity zones

    NASA Astrophysics Data System (ADS)

    Brown, Samuel Paul

    Accurate interpretation of seismic travel times and amplitudes at both exploration and global scales is complicated by the band-limited nature of seismic data. We present a stochastic method, Viterbi sparse spike detection (VSSD), to reduce a seismic waveform to its most probable constituent spike train. Model waveforms are constructed from a set of candidate spike trains convolved with a source wavelet estimate. For each model waveform, a profile hidden Markov model (HMM) is constructed to represent the waveform as a stochastic generative model with a linear topology corresponding to a sequence of samples. The Viterbi algorithm is employed to simultaneously find the optimal nonlinear alignment between a model waveform and the seismic data, and to assign a score to each candidate spike train. The most probable travel times and amplitudes are inferred from the alignments of the highest scoring models. Our analyses show that the method can resolve closely spaced arrivals below traditional resolution limits and that travel time estimates are robust in the presence of random noise and source wavelet errors. We applied the VSSD method to constrain the elastic properties of an ultralow-velocity zone (ULVZ) at the core-mantle boundary beneath the Coral Sea. We analyzed vertical-component short-period ScP waveforms for 16 earthquakes occurring in the Tonga-Fiji trench recorded at the Alice Springs Array (ASAR) in central Australia. These waveforms show strong precursory and postcursory seismic arrivals consistent with ULVZ layering. We used the VSSD method to measure differential travel times and amplitudes of the post-cursor arrival ScSP and the precursor arrival SPcP relative to ScP. We compare our measurements to a database of approximately 340,000 synthetic seismograms, finding that these data are best fit by a ULVZ model with an S-wave velocity reduction of 24%, a P-wave velocity reduction of 23%, a thickness of 8.5 km, and a density increase of 6%. We simultaneously constrain both P- and S-wave velocity reductions as a 1:1 ratio inside this ULVZ. This 1:1 ratio is not consistent with a partial melt origin for ULVZs. Rather, we demonstrate that a compositional origin is more likely.
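
    The full VSSD method scores candidate spike trains with a profile HMM and Viterbi alignment; the sketch below reproduces only the forward-modelling core: building model waveforms by convolving candidate two-spike trains with a source-wavelet estimate and ranking them by L2 misfit. It omits the HMM/alignment step entirely, and all waveform parameters are hypothetical.

      import numpy as np
      from itertools import combinations

      def ricker(freq_hz, dt, half_width_s=0.2):
          """Ricker wavelet used here as a source-wavelet estimate (odd length)."""
          t = np.linspace(-half_width_s, half_width_s, int(2 * half_width_s / dt) + 1)
          a = (np.pi * freq_hz * t) ** 2
          return (1.0 - 2.0 * a) * np.exp(-a)

      def best_two_spike_model(data, wavelet, dt, amp_grid):
          """Brute-force search over two-spike trains (times on the sample grid,
          amplitudes on amp_grid), scored by L2 misfit of the convolved waveform."""
          n = len(data)
          best = (np.inf, None)
          for i, j in combinations(range(n), 2):
              for ai in amp_grid:
                  for aj in amp_grid:
                      spikes = np.zeros(n)
                      spikes[i], spikes[j] = ai, aj
                      model = np.convolve(spikes, wavelet, mode="same")
                      misfit = np.sum((data - model) ** 2)
                      if misfit < best[0]:
                          best = (misfit, ((i * dt, ai), (j * dt, aj)))
          return best

      # Synthetic test: two arrivals 0.12 s apart (hypothetical values)
      dt, w = 0.01, ricker(8.0, 0.01)
      true = np.zeros(100)
      true[40], true[52] = 1.0, -0.6
      data = np.convolve(true, w, mode="same") + 0.02 * np.random.default_rng(1).normal(size=100)
      misfit, spikes = best_two_spike_model(data, w, dt, amp_grid=[-0.6, 1.0])
      print("recovered spikes (time s, amplitude):", spikes)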

  10. 3-component beamforming analysis of ambient seismic noise field for Love and Rayleigh wave source directions

    NASA Astrophysics Data System (ADS)

    Juretzek, Carina; Hadziioannou, Céline

    2014-05-01

    Our knowledge about the common and distinct origins of Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including the understanding of source locations and source mechanisms. Multi-component array methods are well suited to address this issue. In this work we use a 3-component beamforming algorithm to obtain source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method allows us to distinguish between differently polarized waves present in the seismic noise field and to estimate Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest sources of both wave types in the primary microseism band, but different source directions in the secondary microseism band.
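
    A single-component frequency-domain beamformer illustrates the basic principle behind the array analysis described above (the study itself uses a 3-component, polarization-aware method). The station geometry, frequency and slowness values below are hypothetical.

      import numpy as np

      def plane_wave_beampower(spectra, coords, freq_hz, s_max=0.5e-3, ns=81):
          """Delay-and-sum beam power over a horizontal slowness grid at one frequency.
          spectra: complex Fourier coefficients at freq_hz, one per station (shape (n,))
          coords:  station x, y positions in metres (shape (n, 2))
          Returns the slowness axis (s/m) and the normalised beam-power map."""
          s = np.linspace(-s_max, s_max, ns)
          sx, sy = np.meshgrid(s, s, indexing="ij")
          omega = 2.0 * np.pi * freq_hz
          # steering phases exp(+i*omega*(sx*x + sy*y)) align a plane wave of slowness (sx, sy)
          delays = sx[..., None] * coords[:, 0] + sy[..., None] * coords[:, 1]
          steering = np.exp(1j * omega * delays)
          beam = np.abs(np.sum(steering * spectra, axis=-1)) ** 2 / len(spectra) ** 2
          return s, beam

      # Synthetic check (hypothetical geometry): plane wave with slowness (-1, 1)/sqrt(2)/3000 s/m at 0.2 Hz
      rng = np.random.default_rng(2)
      coords = rng.uniform(-10e3, 10e3, size=(20, 2))
      s_true = np.array([-1.0, 1.0]) / np.sqrt(2.0) / 3000.0
      spectra = np.exp(-1j * 2 * np.pi * 0.2 * coords @ s_true)
      s, beam = plane_wave_beampower(spectra, coords, 0.2)
      i, j = np.unravel_index(np.argmax(beam), beam.shape)
      print(f"peak slowness: sx = {s[i]*1e3:.3f} s/km, sy = {s[j]*1e3:.3f} s/km")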

  11. Sources of seismic events in the cooling lava lake of Kilauea Iki, Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chouet, B.

    1979-05-10

    Seismic surveys conducted in recent years revealed a surprisingly high and sustained activity of local seismic events originating in the partially frozen lava lake of Kilauea Iki crater, Hawaii. About 8000 events per day were counted in 1976 at the center of the lake with a seismograph having a peak magnification of 280,000 at 60 Hz. The activity was found to be uniform over the whole area above the inferred magma lens and very weak in the periphery of the lake. The frequency-amplitude relation for these shocks obeys the Ishimoto-Iida or Gutenberg-Richter law very well, with a b value of 1.19 (±0.06). Locations of a few selected events indicate that they occur both above and below the layer of melt, although the seismic activity appears to be much higher in the upper crust. Whenever clear, the first motion is always outward from the source, suggesting that a crack opening under tensile stress owing to cooling is the responsible source mechanism. A simple model of a circular tensile crack nucleating at a point and growing at subsonic velocity can match the far-field P wave from these sources fairly well. Typical parameters for a large event inferred from the model are the following: radius, 2.7 m; maximum static tensile displacement between crack faces, 2.9 μm; cavity volume, 4.4 x 10^-5 m^3; and a seismic moment tensor with diagonal elements only, having values of 3.8 x 10^12, 4.5 x 10^12, and 3.8 x 10^12 dyn cm. The magnitude of the event is about -1, and its stress drop is of the order of 0.01 bar. A Q as low as 10 is required to satisfy the shape of the observed waveforms. The total cavity volume integrated over all cracks generated daily in the upper crust of Kilauea Iki is of the order of 1-20 m^3. An alternate interpretation of the data is that the seismic activity reflects the extension, by up to several tens of centimeters, of long-existing cracks rather than the formation of new cracks.

  12. Seismic and Aseismic Slip on the Cascadia Megathrust

    NASA Astrophysics Data System (ADS)

    Michel, S. G. R. M.; Gualandi, A.; Avouac, J. P.

    2017-12-01

    Our understanding of the dynamics governing aseismic and seismic slip hinges on our ability to image the time evolution of fault slip during and in between earthquakes and transients. Such kinematic descriptions are also pivotal for assessing seismic hazard as, over the long term, elastic strain accumulating around a fault should be balanced by elastic strain released by seismic slip and aseismic transients. In this presentation, we will discuss how such kinematic descriptions can be obtained from the analysis and modelling of geodetic time series. We will use inversion methods based on Independent Component Analysis (ICA) decomposition of the time series to extract and model the aseismic slip (afterslip and slow slip events). We will show that this approach is very effective for identifying, and filtering out, non-tectonic sources of geodetic strain, such as the strain due to surface loads, which can be estimated using gravimetric measurements from GRACE, and thermal strain. We will discuss in particular the application to the Cascadia subduction zone.

  13. Seismic hazard along a crude oil pipeline in the event of an 1811-1812 type New Madrid earthquake. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, H.H.M.; Chen, C.H.S.

    1990-04-16

    The report examines the seismic hazard along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration, are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site for an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.

  14. Using Multi-scale Dynamic Rupture Models to Improve Ground Motion Estimates: ALCF-2 Early Science Program Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ely, Geoffrey P.

    2013-10-31

    This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring high-resolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws, potentially leading to large, multiscale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes will provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physics-based (3D waveform modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculations will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events, and more than 5000 physics-based seismic hazard curves for California.

  15. Seismic reflection constraints on the glacial dynamics of Johnsons Glacier, Antarctica

    NASA Astrophysics Data System (ADS)

    Benjumea, Beatriz; Teixidó, Teresa

    2001-01-01

    During two Antarctic summers (1996-1997 and 1997-1998), five seismic refraction and two reflection profiles were acquired on the Johnsons Glacier (Livingston Island, Antarctica) in order to obtain information about the structure of the ice, the characteristics of the ice-bed contact and the basement topography. An innovative technique was used for the acquisition of reflection data to optimise the field survey schedule. Different shallow seismic sources were used during each field season: the Seismic Impulse Source System (SISSY) for the first field survey and low-energy explosives (pyrotechnic noisemakers) during the second. A comparison between these two shallow seismic sources shows that the explosives are the better seismic source in this ice environment. This is one of the first studies in which this type of source has been used. The analysis of seismic data corresponding to one of the reflection profiles (L3) allows us to delineate sectors with different glacier structure (accumulation and ablation zones) without using glaciological data. Moreover, vertical discontinuities were detected by the presence of back-scattered energy and the abrupt change in frequency content of first arrivals shown in shot records. After the raw data analysis, standard processing led us to a clear seismic image of the underlying bed topography, which can be correlated with the ice flow velocity anomalies. The information obtained from seismic data on the internal structure of the glacier, the location of fracture zones and the topography of the ice-bed interface constrains the glacial dynamics of Johnsons Glacier.

  16. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    NASA Astrophysics Data System (ADS)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50 x 50 m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200 m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120 s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions that are free of cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapse images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and determine variations in source depth and distribution in the conduit and larger geyser field over many eruption cycles.
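
    As a schematic of the back-projection idea (assuming straight source-to-receiver paths and a known, here homogeneous, velocity), the sketch below delay-and-stacks envelope traces onto a horizontal grid. It omits the correlation and polarization filtering described above, and the geometry and velocity are hypothetical.

      import numpy as np

      def backproject(traces, dt, stations, grid_xy, velocity):
          """Delay-and-stack back-projection of positive envelope traces onto a grid.
          traces:   (n_sta, n_samp) array of envelopes
          stations: (n_sta, 2) station coordinates in metres
          grid_xy:  (n_pts, 2) candidate source positions in metres
          Returns the peak stacked amplitude for each grid point."""
          n_samp = traces.shape[1]
          power = np.zeros(len(grid_xy))
          for g, xy in enumerate(grid_xy):
              t_travel = np.linalg.norm(stations - xy, axis=1) / velocity
              shifts = np.round(t_travel / dt).astype(int)
              stack = np.zeros(n_samp)
              for tr, s in zip(traces, shifts):
                  stack[: n_samp - s] += tr[s:]        # undo each travel-time delay
              power[g] = stack.max()
          return power

      # Synthetic check with hypothetical geometry: source at (10, 20) m, c = 1500 m/s
      rng = np.random.default_rng(3)
      stations = rng.uniform(0, 50, size=(12, 2))
      src, c, dt = np.array([10.0, 20.0]), 1500.0, 1e-3
      t0, n = 0.05, 400
      traces = np.zeros((12, n))
      for k, st in enumerate(stations):
          idx = int(round((t0 + np.linalg.norm(st - src) / c) / dt))
          traces[k, idx] = 1.0
      gx, gy = np.meshgrid(np.arange(0, 51, 2.0), np.arange(0, 51, 2.0))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      best = grid[np.argmax(backproject(traces, dt, stations, grid, c))]
      print("back-projected source location:", best)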

  17. Anthropogenic seismicity rates and operational parameters at the Salton Sea Geothermal Field.

    PubMed

    Brodsky, Emily E; Lajoie, Lia J

    2013-08-02

    Geothermal power is a growing energy source; however, efforts to increase production are tempered by concern over induced earthquakes. Although increased seismicity commonly accompanies geothermal production, induced earthquake rate cannot currently be forecast on the basis of fluid injection volumes or any other operational parameters. We show that at the Salton Sea Geothermal Field, the total volume of fluid extracted or injected tracks the long-term evolution of seismicity. After correcting for the aftershock rate, the net fluid volume (extracted-injected) provides the best correlation with seismicity in recent years. We model the background earthquake rate with a linear combination of injection and net production rates that allows us to track the secular development of the field as the number of earthquakes per fluid volume injected decreases over time.
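
    The rate model described in the last sentence can be illustrated with a least-squares fit of earthquake rate against injection and net-production rates; the sketch below uses synthetic data and ignores the aftershock correction applied in the study.

      import numpy as np

      def fit_rate_model(eq_rate, injection, net_production):
          """Least-squares fit of eq_rate ~ a*injection + b*net_production + c."""
          G = np.column_stack([injection, net_production, np.ones_like(injection)])
          coeffs, *_ = np.linalg.lstsq(G, eq_rate, rcond=None)
          return coeffs  # (a, b, c)

      # Synthetic monthly data (hypothetical units: events/month, 1e6 m^3/month)
      rng = np.random.default_rng(4)
      inj = rng.uniform(1.0, 3.0, size=120)
      net = rng.uniform(0.0, 1.0, size=120)
      rate = 4.0 * inj + 9.0 * net + 2.0 + rng.normal(0.0, 0.5, size=120)
      a, b, c = fit_rate_model(rate, inj, net)
      print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")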

  18. Characteristics of broadband slow earthquakes explained by a Brownian model

    NASA Astrophysics Data System (ADS)

    Ide, S.; Takeo, A.

    2017-12-01

    The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon including tectonic tremors, low-frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of a broadband slow earthquake may not have been widely accepted, most recent observations are consistent with it. Here, we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For shorter durations, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic, rather than random (Rubin and Armbruster, 2013; Bostock et al., 2015). Since even a spatially characteristic source must be activated randomly in time, moment release from these sources is compatible with the fluctuation in the BSE model. Therefore, the BSE model contains, as a special case, the model of Gomberg et al. (2016), which suggests that clusters of LFEs produce VLF signals.
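
    A schematic simulation in the spirit of the description above (a characteristic source size performing a Gaussian random walk, bounded by a spatial limit, with moment rate taken proportional to the squared size) is sketched below. It is illustrative only and not the published BSE formulation; all parameters are hypothetical.

      import numpy as np

      def simulate_bse(n_steps, dt, sigma, size_max, seed=0):
          """Schematic Brownian slow-earthquake run: the characteristic source size L
          performs a random walk with Gaussian increments, reflected at 0 and capped
          at size_max (standing in for the spatial limit of the tremor/SSE zone).
          The moment rate is taken proportional to L**2 (purely illustrative scaling)."""
          rng = np.random.default_rng(seed)
          L = np.zeros(n_steps)
          for i in range(1, n_steps):
              step = L[i - 1] + sigma * np.sqrt(dt) * rng.normal()
              L[i] = min(abs(step), size_max)      # reflect at 0, cap at size_max
          return L, L ** 2                          # size and proxy moment rate

      L, mdot = simulate_bse(n_steps=50_000, dt=1.0, sigma=1.0, size_max=100.0)
      print(f"mean moment-rate proxy: {mdot.mean():.1f}, max size reached: {L.max():.1f}")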

  19. Time dependent data, time independent models: challenges of updating Australia's National Seismic Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.

    2017-12-01

    Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However, paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore, the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, which show variations in earthquake frequency over timescales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area-based source models and smoothed seismicity models) are integrated with paleo-earthquake data through the inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. Therefore, a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
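
    For reference, the "10% in 50 years" design probability quoted above maps to a return period through the time-independent Poisson relation P = 1 - exp(-t/T), as the short check below shows.

      import numpy as np

      def return_period(p_exceed, t_years):
          """Return period implied by a Poisson exceedance probability p over t years:
          p = 1 - exp(-t / T)  =>  T = -t / ln(1 - p)."""
          return -t_years / np.log(1.0 - p_exceed)

      def p_exceed(return_period_years, t_years):
          """Poisson probability of at least one exceedance in t years."""
          return 1.0 - np.exp(-t_years / return_period_years)

      print(f"10% in 50 yr -> T = {return_period(0.10, 50):.0f} yr")   # ~475 yr
      print(f" 2% in 50 yr -> T = {return_period(0.02, 50):.0f} yr")   # ~2475 yr
      print(f"T = 475 yr   -> P(50 yr) = {p_exceed(475, 50):.3f}")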

  20. Probabilistic Seismic Hazard Assessment for a NPP in the Upper Rhine Graben, France

    NASA Astrophysics Data System (ADS)

    Clément, Christophe; Chartier, Thomas; Jomard, Hervé; Baize, Stéphane; Scotti, Oona; Cushing, Edward

    2015-04-01

    The southern part of the Upper Rhine Graben (URG), straddling the border between eastern France and western Germany, presents relatively important seismic activity for an intraplate area. An earthquake of magnitude 5 or greater shakes the URG about every 25 years, and in 1356 an earthquake of magnitude greater than 6.5 struck the city of Basel. Several potentially active faults have been identified in the area and documented in the French Active Fault Database (web site under construction). These faults are located along the Graben boundaries and also inside the Graben itself, beneath heavily populated areas and critical facilities (including the Fessenheim Nuclear Power Plant). These faults are prone to produce earthquakes of magnitude 6 and above. Published regional models and preliminary geomorphological investigations provided provisional assessments of slip rates for the individual faults (0.1-0.001 mm/a), resulting in recurrence times of 10,000 years or greater for magnitude 6+ earthquakes. Using a fault model, ground motion response spectra are calculated for annual frequencies of exceedance (AFE) ranging from 10^-4 to 10^-8 per year, typical for design basis and probabilistic safety analyses of NPPs. A logic tree is implemented to evaluate uncertainties in the seismic hazard assessment. The choice of ground motion prediction equations (GMPEs) and the range of slip rate uncertainty are the main sources of seismic hazard variability at the NPP site. In fact, the hazard for AFE lower than 10^-4 is mostly controlled by the potentially active nearby Rhine River fault. Compared with areal source zone models, a fault model localizes the hazard around the active faults and changes the shape of the Uniform Hazard Spectrum at the site. Seismic hazard deaggregations are performed to identify the earthquake scenarios (including magnitude, distance and the number of standard deviations from the median ground motion as predicted by GMPEs) that contribute to the exceedance of spectral acceleration for the different AFE levels. These scenarios are finally examined with respect to the seismicity data available in paleoseismic, historic and instrumental catalogues.

  1. Evaluation of deep moonquake source parameters: Implication for fault characteristics and thermal state

    NASA Astrophysics Data System (ADS)

    Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi

    2017-07-01

    While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate their source parameters and uncover the characteristics of deep moonquake faults that differ from those of faults on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC levels of the spectra, which are important parameters for constraining the source parameters. We further use the spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes and that the large strain rate from tides raises the brittle-ductile transition temperature. Higher transition temperatures open a new possibility of constructing a thermal model that is consistent with deep moonquake occurrence and pressure conditions, thereby improving our understanding of the deep moonquake source mechanism.
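
    Corner frequencies and spectral DC levels of the kind estimated above are commonly obtained by fitting an omega-squared (Brune, 1970) model to the displacement spectrum; the sketch below fits a synthetic spectrum with hypothetical values and is not the authors' procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      def brune(f, omega0, fc):
          """Brune (1970) omega-squared displacement spectrum."""
          return omega0 / (1.0 + (f / fc) ** 2)

      def log_brune(f, log_omega0, fc):
          """Log of the Brune spectrum, fitted in log space for even weighting."""
          return log_omega0 - np.log(1.0 + (f / fc) ** 2)

      # Synthetic displacement spectrum (hypothetical): omega0 = 2e-7 m*s, fc = 0.8 Hz
      rng = np.random.default_rng(5)
      f = np.linspace(0.05, 10.0, 300)
      obs = brune(f, 2e-7, 0.8) * rng.lognormal(mean=0.0, sigma=0.1, size=f.size)

      popt, _ = curve_fit(log_brune, f, np.log(obs), p0=[np.log(1e-7), 1.0])
      omega0, fc = np.exp(popt[0]), popt[1]
      print(f"omega0 = {omega0:.2e} m*s, fc = {fc:.2f} Hz")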

  2. Rapid determination of the energy magnitude Me

    NASA Astrophysics Data System (ADS)

    di Giacomo, D.; Parolai, S.; Bormann, P.; Saul, J.; Grosser, H.; Wang, R.; Zschau, J.

    2009-04-01

    The magnitude of an earthquake is one of the most widely used parameters for evaluating the earthquake's damage potential. However, many magnitude scales developed over the past years have different meanings. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well-defined physical parameter of the seismic source, namely the radiated seismic energy ES (e.g. Bormann et al., 2002): Me = 2/3(log10 ES - 4.4). Me is more suitable than the moment magnitude Mw for describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size and hence of the static (tectonic) effect of an earthquake. The calculation of ES requires the integration over frequency of the squared P-wave velocity spectrum corrected for the energy loss experienced by the seismic waves along the path from the source to the receivers. To account for the frequency-dependent energy loss, we computed spectral amplitude decay functions for different frequencies by using synthetic Green's functions (Wang, 1999) based on the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions the correction of the recorded P-wave velocity spectra for the various propagation effects is performed in a rapid and robust way, and the calculation of ES, and hence of Me, can be performed at a single station. We analyse teleseismic broadband P-wave signals in the distance range 20°-98°. We show that our procedure is suitable for implementation in rapid response systems since it could provide stable Me determinations within 10-15 minutes after the earthquake's origin time. Indeed, we use time-variable cumulative energy windows starting 4 s after the first P-wave arrival in order to include the earthquake rupture duration, which is calculated according to Bormann and Saul (2008). We tested our procedure on a large dataset composed of about 750 earthquakes globally distributed in the Mw range 5.5-9.3, recorded at the broadband stations managed by the IRIS, GEOFON, and GEOSCOPE global networks, as well as other regional seismic networks. Me and Mw express two different aspects of the seismic source, and a combined use of these two magnitude scales would allow a better assessment of the tsunami and shaking potential of an earthquake. References: Bormann, P., Baumbach, M., Bock, G., Grosser, H., Choy, G. L., and Boatwright, J. (2002). Seismic sources and source parameters, in IASPEI New Manual of Seismological Observatory Practice, P. Bormann (Editor), Vol. 1, GeoForschungsZentrum, Potsdam, Chapter 3, 1-94. Bormann, P., and Saul, J. (2008). The new IASPEI standard broadband magnitude mB. Seism. Res. Lett., 79(5), 699-705. Choy, G. L., and Kirby, S. (2004). Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones. Geophys. J. Int., 159, 991-1012. Kennett, B. L. N., Engdahl, E. R., and Buland, R. (1995). Constraints on seismic velocities in the Earth from traveltimes. Geophys. J. Int., 122, 108-124. Montagner, J.-P., and Kennett, B. L. N. (1996). How to reconcile body-wave and normal-mode reference Earth models? Geophys. J. Int., 125, 229-248. Wang, R. (1999). A simple orthonormalization method for stable and efficient computation of Green's functions. Bull. Seism. Soc. Am., 89(3), 733-741.
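
    The Me relation quoted above can be applied directly once ES is known; the snippet below also lists the standard Mw-M0 relation for comparison. The ES and M0 values are hypothetical.

      import math

      def energy_magnitude(es_joules):
          """Energy magnitude from radiated seismic energy ES in joules: Me = 2/3(log10 ES - 4.4)."""
          return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

      def moment_magnitude(m0_nm):
          """Moment magnitude from seismic moment in N*m (for comparison with Me)."""
          return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

      # Hypothetical event: ES = 1.0e15 J, M0 = 4.0e20 N*m
      print(f"Me = {energy_magnitude(1.0e15):.2f}")   # ~7.07
      print(f"Mw = {moment_magnitude(4.0e20):.2f}")   # ~7.67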

  3. Preliminary report on the Black Thunder, Wyoming CTBT R and D experiment quicklook report: LLNL input from regional stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harben, P.E.; Glenn, L.A.

    This report presents a preliminary summary of the data recorded at three regional seismic stations from surface blasting at the Black Thunder Coal Mine in northeast Wyoming. The regional stations are part of a larger effort that includes many more seismic stations in the immediate vicinity of the mine. The overall purpose of this effort is to characterize the source function and propagation characteristics of large typical surface mine blasts. A detailed study of source and propagation features of conventional surface blasts is a prerequisite to attempts at discriminating this type of blasting activity from other sources of seismic events. The Black Thunder seismic experiment is a joint verification effort to determine seismic source and path effects that result from very large, but routine ripple-fired surface mining blasts. Studies of the data collected will be for the purpose of understanding how the near-field and regional seismic waveforms from these surface mining blasts are similar to, and different from, point shot explosions and explosions at greater depth. The Black Hills Station is a Designated Seismic Station that was constructed for temporary occupancy by the Former Soviet Union seismic verification scientists in accordance with the Threshold Test Ban Treaty protocol.

  4. An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution

    NASA Astrophysics Data System (ADS)

    Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan

    2013-04-01

    The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed seismicity approaches. Smoothed seismicity represents an alternative concept that expresses the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude assuming that (1) the occurrence of past seismicity is a good proxy to forecast the occurrence of future seismicity and (2) future large-magnitude events are more likely to occur in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts, provides rates of events in the magnitude range 5 <= m <= 8.5 for the entire region of interest, and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
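
    A minimal fixed-bandwidth version of the kernel-smoothing step is sketched below (the model described above uses variable, adaptive kernels and also smooths fault moment rates); the catalog and grid are synthetic placeholders.

      import numpy as np

      def smooth_seismicity(epicenters, grid_lon, grid_lat, bandwidth_deg):
          """Fixed-bandwidth 2-D Gaussian kernel density of epicentres on a lon/lat grid.
          Each event contributes unit weight, so the cells sum to the number of events."""
          glon, glat = np.meshgrid(grid_lon, grid_lat)
          density = np.zeros_like(glon)
          for lon, lat in epicenters:
              r2 = (glon - lon) ** 2 + (glat - lat) ** 2
              kernel = np.exp(-0.5 * r2 / bandwidth_deg ** 2)
              density += kernel / kernel.sum()
          return density

      # Synthetic catalog (hypothetical): 200 events clustered around 13.0 E, 42.5 N
      rng = np.random.default_rng(6)
      eqs = rng.normal(loc=[13.0, 42.5], scale=0.3, size=(200, 2))
      lon = np.arange(11.0, 15.01, 0.1)
      lat = np.arange(41.0, 44.01, 0.1)
      density = smooth_seismicity(eqs, lon, lat, bandwidth_deg=0.2)
      print(f"total smoothed weight: {density.sum():.1f} events")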

  5. Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco

    2017-11-01

    Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveform modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with an input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the need for producing realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant pertinent advances in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.

  6. Attenuation Model Using the Large-N Array from the Source Physics Experiment

    NASA Astrophysics Data System (ADS)

    Atterholt, J.; Chen, T.; Snelson, C. M.; Mellors, R. J.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. SPE seeks to better characterize the influence of subsurface heterogeneities on seismic wave propagation and energy dissipation from explosions. As a part of this experiment, SPE-5, a 5000 kg TNT equivalent chemical explosion, was detonated in 2016. During the SPE-5 experiment, a Large-N array of 996 geophones (half 3-component and half z-component) was deployed. This array covered an area that includes loosely consolidated alluvium (weak rock) and weathered granite (hard rock), and recorded the SPE-5 explosion as well as 53 weight drops. We use these Large-N recordings to develop an attenuation model of the area to better characterize how geologic structures influence source energy partitioning. We found a clear variation in seismic attenuation for different rock types: high attenuation (low Q) for alluvium and low attenuation (high Q) for granite. The attenuation structure correlates well with local geology, and will be incorporated into the large simulation effort of the SPE program to validate predictive models. (LA-UR-17-26382)

  7. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition, responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) a high-performance noise source mapping application, responsible for generating source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) a front-end Web interface providing the service to the end-users, and (5) a data repository. The noise mapping application is composed of four principal modules: (1) pre-processing of raw data, (2) massive cross-correlation, (3) post-processing of correlation data based on computation of the logarithmic energy ratio, and (4) generation of source maps from post-processed data. Implementation of the solution posed various challenges, in particular the selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of the various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.

  8. Fault2SHA- A European Working group to link faults and Probabilistic Seismic Hazard Assessment communities in Europe

    NASA Astrophysics Data System (ADS)

    Scotti, Oona; Peruzza, Laura

    2016-04-01

    The key questions we ask are: What is the best strategy to fill in the gap in knowledge and know-how in Europe when considering faults in seismic hazard assessments? Are field geologists providing the relevant information for seismic hazard assessment? Are seismic hazard analysts interpreting field data appropriately? Is the full range of uncertainties associated with the characterization of faults correctly understood and propagated in the computations? How can fault modellers contribute to a better representation of the long-term behaviour of fault networks in seismic hazard studies? Providing answers to these questions is fundamental in order to reduce the consequences of future earthquakes and improve the reliability of seismic hazard assessments. An informal working group was thus created at a meeting in Paris in November 2014, partly financed by the Institute of Radioprotection and Nuclear Safety, with the aim of motivating exchanges between field geologists, fault modellers and seismic hazard practitioners. A variety of approaches were presented at the meeting, and a clear gap emerged between some field geologists, who are not necessarily familiar with probabilistic seismic hazard assessment methods and needs, and practitioners, who do not necessarily propagate the "full" uncertainty associated with the characterization of faults. The group thus decided to meet again a year later in Chieti (Italy), to share concepts and ideas through a specific exercise on a test case study. Some solutions emerged, but many problems of seismic source characterization remained, both for people working in the field and for people tackling models of interacting faults. Now, in Vienna, we want to open the group and launch a call for the European community at large to contribute to the discussion. The 2016 EGU session Fault2SHA is motivated by this urgency to increase the number of round tables on this topic and to debate the peculiarities of using faults in seismic hazard assessment in Europe. Europe is a continent dominated by slowly deforming regions where long histories of seismicity are the main source of information to infer fault behaviour. Geodetic, geomorphological and paleoseismological studies provide welcome complementary data that are slowly filling in the database but are at present insufficient, by themselves, to characterize faults. Moreover, Europe is characterized by complex fault systems (Upper Rhine Graben, Central and Southern Apennines, Corinth, etc.), and the degree of uncertainty in the characterization of the faults can be very different from one country to the other. This requires developing approaches and concepts that are adapted to the European context. It is thus the specificity of the European situation that motivates the creation of a predominantly European group where field geologists, fault modellers and fault-PSHA practitioners may exchange and learn from each other's experience.

  9. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2010-05-01

    In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications a simple crosscorrelation of responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it involves, in addition, a summation over the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of these shortcomings by replacing the correlation process with deconvolution. In most cases the deconvolution procedure employed is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates for the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave-field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches, but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multidimensional deconvolution and discuss applications of both approaches.
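
    The contrast between crosscorrelation and trace-by-trace deconvolution can be illustrated with a toy two-receiver example: both estimates recover the 0.3 s delay, but only the water-level deconvolution recovers the 0.5 amplitude, while the raw correlation amplitude is unnormalized. This is a 1-D illustration only, not the multidimensional deconvolution discussed above, and all values are hypothetical.

      import numpy as np

      def crosscorrelate(u_a, u_b):
          """Frequency-domain crosscorrelation of two receiver recordings."""
          UA, UB = np.fft.rfft(u_a), np.fft.rfft(u_b)
          return np.fft.irfft(UB * np.conj(UA), n=len(u_a))

      def deconvolve(u_a, u_b, water_level=1e-2):
          """Trace-by-trace water-level deconvolution of u_b by u_a."""
          UA, UB = np.fft.rfft(u_a), np.fft.rfft(u_b)
          denom = (UA * np.conj(UA)).real
          denom = np.maximum(denom, water_level * denom.max())
          return np.fft.irfft(UB * np.conj(UA) / denom, n=len(u_a))

      # Synthetic example (hypothetical): receiver B records receiver A's signal 0.3 s later,
      # attenuated by a factor of 0.5, plus weak noise
      rng = np.random.default_rng(7)
      dt, n = 0.01, 2048
      u_a = rng.normal(size=n)
      u_b = 0.5 * np.roll(u_a, int(0.3 / dt)) + 0.05 * rng.normal(size=n)
      for name, est in [("correlation", crosscorrelate(u_a, u_b)),
                        ("deconvolution", deconvolve(u_a, u_b))]:
          lag = np.argmax(est[: n // 2]) * dt
          print(f"{name:13s}: peak at {lag:.2f} s, peak amplitude {est.max():.2f}")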

  10. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is reported in the literature with moment magnitude (Mw) values spanning between 5.63 and 6.12. An uncertainty of ~0.5 magnitude units leaves the real size of the event poorly constrained. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, epicentral distance and azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and should be reported with their related uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.
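
    To make the quoted Mw spread concrete, the short check below converts Mw 5.63 and 6.12 to seismic moment with the standard Hanks & Kanamori relation, showing that the reported uncertainty corresponds to roughly a factor of five in moment.

      import math

      def m0_from_mw(mw):
          """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori, 1979)."""
          return 10.0 ** (1.5 * mw + 9.1)

      m0_low, m0_high = m0_from_mw(5.63), m0_from_mw(6.12)
      print(f"M0(Mw 5.63) = {m0_low:.2e} N*m")
      print(f"M0(Mw 6.12) = {m0_high:.2e} N*m")
      print(f"ratio = {m0_high / m0_low:.1f}x")   # ~5.4x spread in seismic moment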

  11. Quantitative investigations of the Missouri gravity low: A possible expression of a large, Late Precambrian batholith intersecting the New Madrid seismic zone

    USGS Publications Warehouse

    Hildenbrand, T.G.; Griscom, A.; Van Schmus, W. R.; Stuart, W.D.

    1996-01-01

    Analysis of gravity and magnetic anomaly data helps characterize the geometry and physical properties of the source of the Missouri gravity low, an important cratonic feature of substantial width (about 125 km) and length (> 600 km). Filtered anomaly maps show that this prominent feature extends NW from the Reelfoot rift to the Midcontinent Rift System. Geologic reasoning and the simultaneous inversion of the gravity and magnetic data lead to the interpretation that the gravity anomaly reflects an upper-crustal, 11-km-thick batholith with either near-vertical or outward-dipping boundaries. Considering the modeled characteristics of the batholith, the structural fabric of Missouri, and the relations of the batholith to plutons and regions of alteration, a tectonic model for the formation of the batholith is proposed. The model includes a mantle plume that heated the crust during the Late Precambrian and melted portions of the lower and middle crust, from which the low-density granitic rocks forming the batholith were partly derived. The batholith, called the Missouri batholith, may currently be related to the release of seismic energy in the New Madrid seismic zone (earthquake concentrations occur at the intersection of the Missouri batholith and the New Madrid seismic zone). Three qualitative mechanical models are suggested to explain this relationship with seismicity. Copyright 1996 by the American Geophysical Union.

  12. Model space exploration for determining landslide source history from long period seismic data

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Mangeney, Anne; Stutzmann, Eléonore; Capdeville, Yann; Moretti, Laurent; Calder, Eliza S.; Smith, Patrick J.; Cole, Paul; Le Friant, Anne

    2013-04-01

    The seismic signals generated by high-magnitude landslide events can be recorded at remote stations, providing access to the landslide process. During the "Boxing Day" eruption at Montserrat in 1997, the long-period seismic signals generated by the debris avalanche were recorded by two stations at distances of 450 km and 1261 km. We investigate the landslide process under the assumption that the landslide source can be described by single forces. We select the period band 25-50 s, in which the landslide signal is clearly visible at both stations. We first use the transverse component of the closest station to determine the horizontal forces. We model the seismogram by normal-mode summation and explore the model space. Two horizontal forces are found that best fit the data; they have similar amplitudes but opposite directions and are separated in time by 70 s. The radiation pattern of the transverse component does not allow the exact azimuth of these forces to be determined. We then model the vertical component of the seismograms, which allows both the vertical and horizontal forces to be retrieved. Using the parameters previously determined (the amplitude ratio and time shift of the two horizontal forces), we further explore the model space and show that a single vertical force together with the two horizontal forces fits the data. The complete source time function can be described as follows: a horizontal force directed opposite to the landslide flow is followed 40 s later by a downward vertical force, and 30 s later still by a horizontal force in the direction of the flow. Directly inverting the seismograms in the 25-50 s period band retrieves a source time function that is consistent with the three forces determined previously. The source time function in this narrow period band alone, however, does not easily allow the corresponding single forces to be recovered. The method can be used to determine the source parameters using only two distant stations. It is also successfully tested on the 1980 Mount St. Helens event, which was recorded by more broadband stations.
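
    A minimal sketch of the kind of model-space exploration described above is given below: a brute-force grid search over the amplitude and time separation of two opposing force pulses fitting a single trace. The Green's function, sampling and "observed" trace are placeholders chosen for illustration; the actual study used normal-mode summation for the forward modelling.

```python
import numpy as np

dt = 1.0                                   # sample interval, s (assumed)
t = np.arange(0, 600, dt)
green = np.exp(-((t - 50) / 15.0) ** 2)    # placeholder medium response

def forward(amplitude, delay):
    """Two horizontal force pulses of opposite sign separated by `delay` seconds."""
    force = np.zeros_like(t)
    force[0] = amplitude
    force[int(delay / dt)] = -amplitude
    return np.convolve(force, green)[: len(t)]

observed = forward(1.0, 70.0)              # pretend data (assumption)

# Grid search over amplitude and time separation, keeping the best-fitting model.
best = min(
    ((a, d, np.sum((forward(a, d) - observed) ** 2))
     for a in np.linspace(0.5, 1.5, 11)
     for d in np.arange(40, 100, 5)),
    key=lambda m: m[-1],
)
print("best amplitude, delay (s), misfit:", best)
```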

  13. Separation of simultaneous sources using a structural-oriented median filter in the flattened dimension

    NASA Astrophysics Data System (ADS)

    Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Chen, Xiaohong; Xiang, Kui

    2016-01-01

    Simultaneous-source shooting can greatly shorten the acquisition period and improve the quality of seismic data for better subsalt imaging, but at the expense of introducing strong interference (blending noise) into the acquired data. We propose to use a structural-oriented median filter to attenuate the blending noise along the structural direction of seismic profiles. The principle of the approach is to first flatten the seismic record in local spatial windows and then apply a traditional median filter (MF) along the flattened (third) dimension. The key component is the estimation of the local slope, which can be calculated by first scanning the NMO velocity and then converting the velocity to local slope. Both synthetic and field data examples show that the proposed approach can successfully separate simultaneous-source data into individual sources. We provide an open-source toy example to better demonstrate the proposed methodology.
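
    The flatten/filter/unflatten logic can be sketched as follows. For simplicity this toy version assumes a single constant local slope and uses circular shifts (the real method estimates slopes from NMO-velocity scans and works in local windows); the spacing and window parameters are placeholders.

```python
import numpy as np

def flatten(gather, slope, dx, dt):
    """Shift each trace by -slope * offset so dipping events become flat (circular shifts)."""
    nt, nx = gather.shape
    out = np.zeros_like(gather)
    for ix in range(nx):
        shift = int(round(slope * ix * dx / dt))
        out[:, ix] = np.roll(gather[:, ix], -shift)
    return out

def unflatten(gather, slope, dx, dt):
    """Undo the flattening shifts."""
    return flatten(gather, -slope, dx, dt)

def so_median_filter(gather, slope, dx=25.0, dt=0.004, half_width=2):
    """Structure-oriented median filter: flatten, median across traces, unflatten."""
    flat = flatten(gather, slope, dx, dt)
    filtered = np.zeros_like(flat)
    nt, nx = flat.shape
    for ix in range(nx):
        lo, hi = max(0, ix - half_width), min(nx, ix + half_width + 1)
        filtered[:, ix] = np.median(flat[:, lo:hi], axis=1)
    return unflatten(filtered, slope, dx, dt)

# Usage sketch with random data standing in for a blended common-receiver gather.
gather = np.random.default_rng(3).standard_normal((1000, 60))
out = so_median_filter(gather, slope=1e-4)
```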

  14. Microearthquake mechanism from wave amplitudes recorded by a close-to-surface seismic array at Ocnele Mari, Romania

    NASA Astrophysics Data System (ADS)

    Jechumtálová, Z.; Šílený, J.; Trifu, C.-I.

    2014-06-01

    The resolution of event mechanisms is investigated in terms of the unconstrained moment tensor (MT) source model and the shear-tensile crack (STC) source model, the latter representing slip along the fault with an off-plane component. Data are simulated as if recorded by the actual seismic array installed at Ocnele Mari (Romania), where sensors are placed in shallow boreholes. Noise is superimposed on the synthetic data, and the analysis explores how the results are influenced (i) by data recorded by the complete seismic array compared with the subarray of surface sensors, (ii) by using three- or one-component sensors and (iii) by inverting P- and S-wave amplitudes versus P-wave amplitudes only. The orientation of the pure-shear fracture component is almost always well resolved. On the other hand, increasing noise distorts the non-double-couple (non-DC) components of the MT unless a high-quality data set is available. The STC source model yields considerably smaller spurious non-shear fracture components. Incorporating recordings from the deeper sensors in addition to those from the surface sensors allows noisier data to be processed. The performance of a network equipped with three-component sensors is only slightly better than one with uniaxial sensors. Inverting both P- and S-wave amplitudes, compared with inverting P-wave amplitudes only, markedly improves the resolution of the orientation of the source mechanism. Comparing the inversion results for the two alternative source models permits an assessment of the reliability of the retrieved non-shear components. As an example, the approach is applied to three microseismic events that occurred at Ocnele Mari, where both large and small non-DC components were found. The analysis confirms tensile fracturing for two of these events and shear slip for the third.
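
    The amplitude inversion itself is linear in the six moment-tensor components, d = G m; the sketch below illustrates that step with a random placeholder matrix G standing in for the excitation coefficients that would normally be computed from take-off angles, azimuths and the velocity model. The station count, true mechanism and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations = 12
G = rng.standard_normal((n_stations, 6))              # placeholder Green's coefficients
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.1])   # hypothetical moment tensor
d = G @ m_true + 0.05 * rng.standard_normal(n_stations)  # noisy observed amplitudes

# Least-squares solution of d = G m for the six MT components.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print("recovered MT components:", np.round(m_est, 3))
```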

  15. GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 earthquake and source modelling

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.

    2017-12-01

    On 25 November 2016, an Ms 6.7 earthquake occurred in Aktao, a county of Xinjiang, China. It was the largest earthquake to have occurred in the northeastern margin of the Pamir Plateau in the last 30 years. From GPS observations, we obtain the coseismic displacement field of this earthquake. The site of maximum displacement is located in the Muji Basin, 15 km south of the causative fault, where the deformation reaches 0.12 m downward, with a coseismic displacement of 0.10 m. Our results indicate that the earthquake was characterized by dextral strike-slip and normal-fault rupture. Based on the GPS results, we invert for the rupture distribution of the earthquake. The source model consists of two approximately independent slip zones at depths of less than 20 km, with maximum slips of 0.6 m and 0.4 m. The total seismic moment calculated from the geodetic inversion corresponds to Mw 6.6. The GPS-derived source model is broadly consistent with that obtained from seismic waveform inversion, and with the surface rupture distribution obtained from field investigation. According to our inversion calculation, the recurrence period of strong earthquakes similar to this one should be 30-60 years, and the seismic hazard of the eastern segment of the Muji fault is worthy of attention. This research is financially supported by the National Natural Science Foundation of China (Grant No. 41374030).
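
    As an illustration of how a geodetically derived slip model translates into a moment magnitude, the sketch below sums M0 = μ·A·s over two hypothetical slip patches and converts to Mw; the rigidity, patch areas and slips are assumed values chosen only to land near the reported magnitude, not the inverted parameters of the study.

```python
import math

mu = 3.3e10                       # rigidity, Pa (assumed)
patches = [                       # (area in m^2, slip in m) -- hypothetical two-zone model
    (25e3 * 12e3, 0.6),
    (20e3 * 10e3, 0.4),
]

# Geodetic moment and moment magnitude (IASPEI convention, M0 in N·m).
m0 = sum(mu * area * slip for area, slip in patches)
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
print(f"M0 = {m0:.2e} N·m, Mw = {mw:.2f}")
```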

  16. Origin of the pulse-like signature of shallow long-period volcano seismicity

    USGS Publications Warehouse

    Chouet, Bernard A.; Dawson, Phillip B.

    2016-01-01

    Short-duration, pulse-like long-period (LP) events are a characteristic type of seismicity accompanying eruptive activity at Mount Etna in Italy in 2004 and 2008 and at Turrialba Volcano in Costa Rica and Ubinas Volcano in Peru in 2009. We use the discrete wavenumber method to compute the free-surface response in the near field of a rectangular tensile crack embedded in a homogeneous elastic half-space and to gain insights into the origin of the LP pulses. Two source models are considered: (1) a vertical fluid-driven crack and (2) a unilateral tensile rupture growing at a fixed sub-Rayleigh velocity with constant opening on a vertical crack. We apply cross-correlation to the synthetics and data to demonstrate that a fluid-driven crack provides a natural explanation for these data with realistic source sizes and fluid properties. Our modeling points to shallow sources (<1 km depth), whose signatures are representative of the Rayleigh pulse sampled at epicentral distances >∼1 km. While a slow-rupture failure provides another potential model for these events, the resulting synthetics and fits to the data are not as good as those obtained for a fluid-driven source. We infer that pulse-like LP signatures are part of the continuum of responses produced by shallow fluid-driven sources in volcanoes.
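
    The synthetics-versus-data comparison mentioned above rests on cross-correlation; a minimal, generic waveform-similarity measure of that kind is sketched below with placeholder Gaussian pulses (not the Etna, Turrialba or Ubinas records).

```python
import numpy as np

def max_normalized_cc(syn, obs):
    """Maximum normalized cross-correlation coefficient over all lags (<= 1)."""
    syn = syn - syn.mean()
    obs = obs - obs.mean()
    cc = np.correlate(syn, obs, mode="full")
    return cc.max() / (np.linalg.norm(syn) * np.linalg.norm(obs) + 1e-20)

t = np.linspace(0, 10, 1001)
pulse = np.exp(-((t - 3.0) / 0.5) ** 2)   # synthetic pulse (assumed shape)
data = np.exp(-((t - 3.4) / 0.5) ** 2)    # "observed" pulse, shifted in time
print("max normalized CC:", round(max_normalized_cc(pulse, data), 3))
```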

  17. Sensitivities of Near-field Tsunami Forecasts to Megathrust Deformation Predictions

    NASA Astrophysics Data System (ADS)

    Tung, S.; Masterlark, T.

    2018-02-01

    This study reveals how the modeling configurations of forward and inverse analyses of coseismic deformation data influence estimates of seismic and tsunami sources. We illuminate how near-field tsunami predictions change when (1) a heterogeneous (HET) distribution of crustal material is introduced into the elastic dislocation model, and (2) near-trench rupture is either encouraged or suppressed when inverting the coseismic displacements. Hypothetical megathrust earthquake scenarios are studied with synthetic Global Positioning System displacements in Cascadia. Finite-element models are designed to mimic the subsurface heterogeneity across the curved subduction margin. The HET lithospheric domain modifies the seafloor displacement field and alters tsunami predictions relative to those of a homogeneous (HOM) crust. Uncertainties persist, as the inverse analyses of geodetic data produce unrealistic slip artifacts over the HOM domain, which propagate into errors in the predicted tsunami arrival times and amplitudes. A stochastic analysis further shows that the uncertainties of seismic tomography models do not degrade the accuracy advantage of HET solutions over HOM ones. Whether the source ruptures near the trench also controls the details of the seafloor disturbance. Deeper subsurface slip induces more seafloor uplift near the coast and causes an earlier arrival of tsunami waves than surface-slipping events. We suggest using the solutions obtained under zero-updip-slip and zero-updip-slip-gradient rupture boundary conditions as end-members to constrain the tsunami behavior for forecasting purposes. The findings are important for near-field tsunami warning, which relies primarily on near-real-time geodetic or seismic data for source calibration before the largest waves hit the nearest shore during tsunamigenic events.
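
    The effect of the end-member rupture boundary conditions can be mimicked in a toy least-squares slip inversion, shown below. The Green's matrix, data and constraint weight are placeholders rather than the finite-element Green's functions of the study, and only the zero-updip-slip end-member is enforced explicitly (a zero-updip-slip-gradient case would instead constrain the difference between the two shallowest patches).

```python
import numpy as np

rng = np.random.default_rng(2)
n_patch, n_obs = 10, 30
G = rng.standard_normal((n_obs, n_patch))          # placeholder Green's functions
s_true = np.linspace(1.5, 0.0, n_patch)            # hypothetical slip, largest on patch 0 (updip)
d = G @ s_true + 0.05 * rng.standard_normal(n_obs)

def invert(zero_updip):
    """Least-squares slip inversion, optionally forcing zero slip on the updip patch."""
    G_aug, d_aug = G, d
    if zero_updip:
        constraint = 1e3 * np.eye(n_patch)[0]      # heavily weighted row enforcing s[0] = 0
        G_aug = np.vstack([G, constraint])
        d_aug = np.append(d, 0.0)
    slip, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)
    return slip

print("free updip slip :", np.round(invert(False), 2))
print("zero updip slip :", np.round(invert(True), 2))
```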

  18. Report of the Task Group on Independent Research and Development

    DTIC Science & Technology

    1967-02-01

    in 1959 when the technology used in prospecting for oil by seismic means was employed to detect and suggest the source of earth shocks generated by...result of TI's work in seismology for oil exploration. The use of seismometers for intrusion detection stemmed from the large, undesirable signals...produced by any human movement during oil-field seismic tests. The first military contract for six test models of these devices was received in 1963

  19. Fault Specific Seismic Hazard Maps as Input to Loss Reserves Calculation for Attica Buildings

    NASA Astrophysics Data System (ADS)

    Deligiannakis, Georgios; Papanikolaou, Ioannis; Zimbidis, Alexandros; Roberts, Gerald

    2014-05-01

    Greece is prone to various natural disasters, such as wildfires, floods, landslides and earthquakes, owing to the particular environmental and geological conditions that prevail at tectonic plate boundaries. Earthquakes are the predominant risk in terms of damage and casualties in the Greek territory. The historical record of earthquakes in Greece has been published by various researchers and provides useful data for seismic hazard assessment. However, the completeness of the historical record in Greece, despite being one of the longest worldwide, reaches only 500 years for M ≥ 7.3 and less than 200 years for M ≥ 6.5. Considering that active faults in the area have recurrence intervals of a few hundred to several thousand years, it is clear that many active faults have not been activated during the completeness period covered by the historical records. New seismic hazard assessment methodologies therefore tend to follow fault-specific approaches, in which the seismic sources are geologically constrained active faults, in order to address the incompleteness of the historical records, obtain higher spatial resolution and calculate realistic source-to-locality distances, since the seismic sources are very accurately located. Fault-specific approaches provide quantitative assessments because they measure fault slip rates from geological data, giving a more reliable estimate of seismic hazard. We used a fault-specific seismic hazard assessment approach for the region of Attica. The method of seismic hazard mapping from geological fault throw-rate data combined three major factors: empirical relationships between fault rupture length, earthquake magnitude and coseismic slip; the radii of the VI, VII, VIII and IX isoseismals on the Modified Mercalli (MM) intensity scale; and attenuation-amplification functions for seismic shaking on bedrock compared with basin-filling sediments. We explicitly modeled 22 active faults that could affect the region of Attica, including Athens, using detailed data derived from published papers, neotectonic maps and fieldwork observations. Moreover, we incorporated background seismicity models from the historical record, as well as the distribution of subduction-zone earthquakes, to include strong deep earthquakes that could also affect the Attica region. We created four high-spatial-resolution seismic hazard maps for the region of Attica, one for each of the intensities VII-X (MM). These maps offer a locality-specific shaking recurrence record that represents the long-term shaking history more completely, since they incorporate several seismic cycles of the active faults that could affect Attica. Each of these high-resolution seismic hazard maps displays both the spatial distribution and the recurrence, over a specific time period, of the relevant intensity. Time-independent probabilities were then extracted from these average recurrence intervals using the stationary Poisson model P = 1 − e^(−Λt), where the rate Λ is provided by the intensity recurrence displayed in the seismic hazard maps. However, insurance contracts usually lack detailed spatial information and refer to the Postal Code level, akin to CRESTA zones. To this end, a time-independent probability of shaking at intensities VII-X was calculated for every Postal Code, for a given time period, using the Poisson model.
The reserves calculation for a buildings portfolio combines the probability of events of specific intensities within each Postal Code with the building characteristics, such as construction type and insured value. We propose a standard approach for the reserves calculation K(t) over a specific time period: K(t) = x2·[x1·y1·P1(t) + x1·y2·P2(t) + x1·y3·P3(t) + x1·y4·P4(t)], which is a function of the probabilities of occurrence of seismic intensities VII-X over that period (P1(t)-P4(t)), the value of the building x1, the insured value x2 and the characteristics of the building, such as construction type, age, height and use of property (y1-y4). Furthermore, a stochastic approach is also adopted to obtain the reserve value K(t) for the specific time period; this calculation draws a set of simulations from the Poisson random variable and then takes the respective expectations.
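
    A hedged sketch of the two formulas above is given below. The recurrence rates, building value, insured value and vulnerability factors are invented placeholders, not calibrated Attica values; the building value x1 is treated as a normalized factor exactly as it enters the formula.

```python
import math

def poisson_prob(rate_per_year, years):
    """P = 1 - exp(-Λ t): probability of at least one exceedance in `years`."""
    return 1.0 - math.exp(-rate_per_year * years)

def reserve(t_years, building_value, insured_value, weights, rates):
    """K(t) = x2 * sum_i [x1 * y_i * P_i(t)] over intensities VII-X."""
    probs = [poisson_prob(rate, t_years) for rate in rates]
    return insured_value * sum(building_value * y * p for y, p in zip(weights, probs))

# Example: assumed mean recurrence of 100, 300, 1000 and 3000 years for intensities VII-X,
# hypothetical vulnerability factors y1-y4 and a 200,000-unit insured value over 10 years.
rates = [1 / 100, 1 / 300, 1 / 1000, 1 / 3000]
weights = [0.02, 0.05, 0.15, 0.40]
print("K(10 yr) =", round(reserve(10, 1.0, 200_000.0, weights, rates), 2))
```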

  20. Locating scatterers while drilling using seismic noise due to tunnel boring machine

    NASA Astrophysics Data System (ADS)

    Harmankaya, U.; Kaslilar, A.; Wapenaar, K.; Draganov, D.

    2018-05-01

    Unexpected geological structures can cause safety and economic risks during underground excavation. Therefore, predicting possible geological threats while drilling a tunnel is important for operational safety and for preventing expensive standstills. Subsurface information for tunneling is provided by exploratory wells and by surface geological and geophysical investigations, which are limited in location and resolution, respectively. For detailed information about the structures ahead of the tunnel face, geophysical methods are applied during the tunnel-drilling activity. We present a method, inspired by seismic interferometry and ambient-noise correlation, that can be used to detect scatterers, such as boulders and cavities, ahead of a tunnel while drilling. A similar method has been proposed for active-source seismic data and validated using laboratory and field data. Here, we propose to utilize the seismic noise generated by a Tunnel Boring Machine (TBM) and recorded at the surface. We explain our method by means of data from finite-difference modelling of noise-source wave propagation in a medium where scatterers are present. Using the modelled noise records, we apply cross-correlation to obtain correlation gathers. After isolating the scattered arrivals in these gathers, we cross-correlate again and invert the correlated traveltimes to locate the scatterers. We show the potential of the method for locating scatterers while drilling, using noise records generated by the TBM.
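
    Only the final locate-from-correlated-traveltimes step is sketched below: a grid search over candidate scatterer positions that matches traveltime differences between receiver pairs. The receiver geometry, velocity, true scatterer position and the use of plain traveltime differences are simplifying assumptions, not the full interferometric workflow of the study.

```python
import numpy as np

v = 2000.0                                                 # m/s, assumed velocity
receivers = np.array([[x, 0.0] for x in range(0, 500, 50)], dtype=float)
scatterer_true = np.array([230.0, 120.0])                  # hypothetical scatterer (m)

def correlated_time(rec_i, rec_j, scat):
    """Difference of scattered-wave traveltimes from the scatterer to two receivers."""
    return (np.linalg.norm(scat - rec_i) - np.linalg.norm(scat - rec_j)) / v

pairs = [(i, j) for i in range(len(receivers)) for j in range(i + 1, len(receivers))]
t_obs = np.array([correlated_time(receivers[i], receivers[j], scatterer_true)
                  for i, j in pairs])

# Grid search over candidate scatterer positions minimizing the traveltime misfit.
xs, zs = np.arange(0, 500, 10.0), np.arange(10, 300, 10.0)
best = min(
    ((x, z, np.sum((np.array([correlated_time(receivers[i], receivers[j], np.array([x, z]))
                              for i, j in pairs]) - t_obs) ** 2))
     for x in xs for z in zs),
    key=lambda m: m[-1],
)
print("estimated scatterer position (x, z):", best[:2])
```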
