Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Kubo, H.
2013-12-01
Constructing source models of huge subduction earthquakes is a critically important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large-slip areas of heterogeneous slip models and of total SMGA size with seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA size to large-slip area with increasing seismic moment. They concluded that this tendency is likely caused by differences in the period ranges used in the source-modeling analyses. In this paper, we attempt to establish a methodology for constructing source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype source model for huge subduction earthquakes and validate it by strong ground motion modeling.
NASA Astrophysics Data System (ADS)
Ide, Satoshi; Maury, Julie
2018-04-01
Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes, and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at seismological frequencies, the seismic moment rates differ significantly in geodetic observations. This difference is ascribed to differences in the characteristic time of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explain recent findings of a near-constant source duration for low-frequency earthquake families.
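For context, the moment-duration scaling contrast that this model reconciles is commonly written as follows (a sketch using the standard relations from the slow-earthquake literature; the exponents are not stated in this abstract):

```latex
% Ordinary (fast) earthquakes: self-similar rupture growth
M_0 \propto T^{3}
% Broadband slow earthquakes: linear moment-duration scaling,
% which the Brownian model reproduces at long durations
M_0 \propto T
```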
NASA Astrophysics Data System (ADS)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained by solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need for a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.
Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.
2000-01-01
We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted, from geologic and geodetic data, to accommodate high strain rates but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data.
The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the standard CDMG-USGS model by less than 10% across most of California but is higher (generally about 10% to 30%) within 20 km from some faults.
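The rate comparison described above can be illustrated with a truncated Gutenberg-Richter calculation. This is a minimal sketch with hypothetical a- and b-values, not the CDMG-USGS source-model parameters:

```python
# Illustrative sketch: annual rates of 6 <= M < 7 under a Gutenberg-Richter
# model versus a "more characteristic" alternative. All values are invented.

def gr_cumulative_rate(m, a=4.0, b=1.0):
    """Annual rate of earthquakes with magnitude >= m: N(m) = 10**(a - b*m)."""
    return 10.0 ** (a - b * m)

def rate_in_band(m_lo, m_hi, a=4.0, b=1.0):
    """Annual rate of earthquakes with m_lo <= M < m_hi."""
    return gr_cumulative_rate(m_lo, a, b) - gr_cumulative_rate(m_hi, a, b)

# A "more characteristic" alternative: reduce the a-value so fewer moderate
# (M 6-7) events occur, with moment carried instead by characteristic ruptures.
r_gr = rate_in_band(6.0, 7.0)
r_char = rate_in_band(6.0, 7.0, a=3.7)
print(r_gr, r_char)  # the G-R branch yields roughly twice the M 6-7 rate
```

Adjusting any one of a, b, or the band limits mimics the parametric adjustments discussed in the abstract, which is why no single parameter change removes a factor-of-two discrepancy on its own.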
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years, impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly now that the two powerful Sentinel-1 SAR sensors are in orbit. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inference are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. InSAR-derived surface displacements and seismological waveforms are also combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling are shrinking towards common source-medium descriptions and a near-field/far-field view of the data. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inference. We join the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, more recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation.
Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models. 1D-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling for the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies use our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate better exploitation of open global data sets for the wide community studying tectonics, but the tools are also applicable to a large range of regional to local earthquake studies. Our developments therefore ensure great flexibility in the parametrization of medium models (e.g. 1D to 3D), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de), funded by the German Research Foundation DFG through an Emmy Noether grant.
Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty
Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon
2006-01-01
Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, and data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median-value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by WG02. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.
Earthquake Source Inversion Blindtest: Initial Results and Further Developments
NASA Astrophysics Data System (ADS)
Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.
2007-12-01
Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are an integral part of better understanding earthquake source physics and of improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise comparing the methods, parameterization and data handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally, we present new blind-test models with increasing source complexity and ambient noise on the synthetics.
The goal is to attract a large group of source modelers to join this source-inversion blind test, in order to conduct a large-scale validation exercise that rigorously assesses the performance and reliability of current inversion methods and to discuss future developments.
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
NASA Astrophysics Data System (ADS)
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand the strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and the corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
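A minimal sketch of the kind of waveform-misfit measure used to rank solutions (this is a generic normalized L2 misfit, not the SIV project's actual criteria; the test signals are invented):

```python
import numpy as np

# Generic normalized least-squares waveform misfit between an "observed"
# seismogram and a synthetic predicted by a candidate source model.

def l2_misfit(obs, syn):
    """Normalized L2 waveform misfit: 0 means a perfect fit."""
    obs = np.asarray(obs, dtype=float)
    syn = np.asarray(syn, dtype=float)
    return float(np.sum((obs - syn) ** 2) / np.sum(obs ** 2))

t = np.linspace(0.0, 10.0, 1001)
observed = np.sin(2.0 * np.pi * 0.5 * t)
synthetic = 0.9 * np.sin(2.0 * np.pi * 0.5 * t + 0.1)  # small amplitude/phase errors

print(l2_misfit(observed, synthetic))
```

Ranking exercises compute such misfits per station and component, then aggregate; the statistical model-comparison tools mentioned above go further and compare the inverted slip distributions themselves.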
Updating the USGS seismic hazard maps for Alaska
Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.
2015-01-01
The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
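The step "sources are combined with ground-motion estimates to compute the hazard" can be sketched as a minimal probabilistic seismic hazard calculation. All rates, medians, and sigmas below are hypothetical, and the lognormal ground-motion model is a simplification:

```python
import math

# Each source contributes its annual rate times the probability that its
# ground motion at the site exceeds a target level (lognormal GMPE assumed).

def exceedance_prob(median_g, sigma_ln, target_g):
    """P(ground motion > target) for a lognormal ground-motion model."""
    z = (math.log(target_g) - math.log(median_g)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

sources = [
    # (annual rate, median PGA at the site in g, ln-sigma) -- invented numbers
    (0.010, 0.30, 0.6),  # nearby crustal fault
    (0.002, 0.45, 0.6),  # megathrust segment
    (0.050, 0.08, 0.7),  # areal background seismicity
]

target = 0.2  # g
annual_exceedance_rate = sum(
    rate * exceedance_prob(median, sigma, target) for rate, median, sigma in sources
)
print(annual_exceedance_rate)
```

Repeating this over a range of target levels yields the hazard curve from which the map values at fixed exceedance probabilities are read.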
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of slip on the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes.
We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence that critical reflections from the lower crust caused the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
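The core operation described above, transferring an empirical source function to a target site through a simplified Green's function, can be sketched as a convolution. The waveforms, arrival times, and amplitudes below are invented stand-ins:

```python
import numpy as np

# An empirical source function (recorded near a small earthquake) is
# transferred to a target site by convolving it with a simplified Green's
# function representing the new propagation path.

dt = 0.01
t = np.arange(0.0, 4.0, dt)

# Stand-in empirical source function: a decaying oscillation.
source_fn = np.exp(-t / 0.3) * np.sin(2.0 * np.pi * 2.0 * t)

# Simplified Green's function: a direct arrival plus one crustal reflection.
green_fn = np.zeros_like(t)
green_fn[round(1.2 / dt)] = 0.8  # direct wave at 1.2 s
green_fn[round(2.0 / dt)] = 0.3  # lower-crust critical reflection at 2.0 s

simulated = np.convolve(source_fn, green_fn)[: t.size] * dt
```

The second spike illustrates why critical reflections from the lower crust can flatten the apparent attenuation: they add late-arriving energy whose amplitude decays more slowly with distance than the direct wave.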
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles and exhibiting intricate surface deformation patterns. The complexity of this event has motivated multidisciplinary geophysical studies of the underlying source physics to better inform future earthquake hazard models. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first compute simple point-source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources, as well as on simple finite faults determined from source-scaling studies, and validate against true recordings of peak ground acceleration and velocity. Second, we perform a slip inversion based upon the CMT fault orientations and forward-model near-field maximum expected tsunami wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of the disagreement in ground motions attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
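The peak-ground-displacement (PGD) step can be sketched as follows. GNSS early warning systems use published regressions of the form log10(PGD) = A + B*Mw + C*Mw*log10(R); the coefficients below are illustrative stand-ins, not the values implemented in G-FAST:

```python
import numpy as np

# Invert a PGD scaling law of the published functional form for magnitude.
# Coefficients are hypothetical placeholders chosen only for illustration.
A, B, C = -4.4, 1.0, -0.13

def mw_from_pgd(pgd_cm, dist_km):
    """Estimate Mw from peak ground displacement (cm) and hypocentral distance (km)."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(dist_km))

print(mw_from_pgd(50.0, 100.0))  # a large PGD at 100 km implies a major event
```

In practice the estimate is averaged over many stations and updated as the displacement field grows, which is what makes PGD scaling attractive for rapid point-source characterization.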
Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.
2012-12-01
Constructing source models of huge subduction earthquakes is a critically important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of several strong motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that SMGA size has an empirical scaling relationship with seismic moment. Therefore, the SMGA size for an anticipated earthquake can be assumed from that empirical relation given its seismic moment. Concerning the placement of SMGAs, information on fault segmentation is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained from SMGA modeling, one each on the Nojima, Suma, and Suwayama segments (e.g. Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which demonstrates the applicability of that empirical scaling relationship to SMGAs. Two of the SMGAs are in the Miyagi-Oki segment, and the other two are in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all SMGAs correspond to the historical source areas of the 1930s. These SMGAs do not overlap the huge-slip area in the shallower part of the source fault estimated from teleseismic data, long-period strong motion data, and/or geodetic data for the 2011 mainshock. This fact shows that the huge-slip area does not contribute to strong ground motion generation (0.1-10 s).
Information on fault segmentation in subduction zones, or on historical earthquake source areas, is thus also applicable to setting SMGAs for strong ground motion prediction of future earthquakes.
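A minimal sketch of an empirical SMGA-area scaling relation of the kind invoked above. Both the self-similar two-thirds exponent and the prefactor are assumptions for illustration, not the published fit from these studies:

```python
# Hypothetical SMGA-area scaling of the self-similar form A = k * M0**(2/3).
# The prefactor k is an invented placeholder, not a regression result.

def smga_area_km2(m0_newton_m, k=5.0e-11):
    """Combined SMGA area (km^2) from seismic moment (N*m), assuming A = k*M0^(2/3)."""
    return k * m0_newton_m ** (2.0 / 3.0)

# Self-similarity check: an 8-fold larger moment gives a 4-fold larger area.
print(smga_area_km2(8.0e21) / smga_area_km2(1.0e21))
```

Given a target seismic moment for an anticipated earthquake, a relation of this form fixes the total SMGA area, after which segmentation or historical source areas guide where the patches are placed.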
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, the validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes are similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, large-magnitude earthquakes sometimes exhibit a high degree of spatial heterogeneity, such that tsunami sources are composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings, and because of the high degree of uncertainty associated with local, model-based forecasts suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
A rapid estimation of near field tsunami run-up
Riquelme, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime
2015-01-01
Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for the 1992 Mw 7.7 Nicaragua earthquake, the 2001 Mw 8.4 Perú earthquake, the 2003 Mw 8.3 Hokkaido earthquake, the 2007 Mw 8.1 Perú earthquake, the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake and the recent 2014 Mw 8.2 Iquique earthquake. The maximum run-up estimates are consistent with measurements made inland after each event, with peaks of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Such calculations will thus provide faster run-up information than is available from existing uniform-slip seismic source databases or from pre-modeled seismic sources of past events.
NASA Astrophysics Data System (ADS)
Asano, K.
2017-12-01
An MJMA 6.5 earthquake occurred off the Kii peninsula, southwest Japan, on April 1, 2016. This event was interpreted as a thrust event on the plate boundary along the Nankai trough (Wallace et al., 2016). It is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) since that event. A significant point regarding seismic observation is that this event occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquakes of this focal type have been very rare in this region over the last half century, this is a good opportunity to investigate the source characteristics related to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai trough. Knowledge obtained from the study of this earthquake will contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai trough. In this study, the source model of the 2016 off-Kii-peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by a strong motion generation area (SMGA; Miyake et al., 2003), defined as a rectangular area with high stress drop or high slip velocity. An SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time histories over a broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected as the empirical Green's functions. The source parameters of the SMGA were optimized by waveform modeling in the frequency range 0.4-10 Hz.
The best estimate of the SMGA size is 19.4 km2, and the SMGA of this event does not follow the source scaling relationship for past plate-boundary earthquakes along the Japan Trench, northeast Japan. This finding implies that the source characteristics of plate-boundary events in the Nankai trough differ from those in the Japan Trench, which could be important information for considering regional variation in ground motion prediction.
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely, the nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
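As a hedged illustration of the long-wave theory invoked above (not a formulation from the chapter itself), the linear shallow-water limit gives a phase speed c = sqrt(g·h), and Green's law gives an amplitude growth A ∝ h^(-1/4) as a wave shoals into shallower water:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def longwave_speed(depth_m):
    """Phase speed of a linear long (shallow-water) wave: c = sqrt(g*h), in m/s."""
    return math.sqrt(G * depth_m)

def greens_law_amplitude(a0_m, h0_m, h1_m):
    """Amplitude after shoaling from depth h0 to h1 (Green's law, A ~ h^-1/4)."""
    return a0_m * (h0_m / h1_m) ** 0.25

# A 1 m wave generated over 4000 m of water, arriving at 10 m depth:
speed_deep = longwave_speed(4000.0)                   # ~198 m/s in the open ocean
amp_coast = greens_law_amplitude(1.0, 4000.0, 10.0)   # ~4.5 m near shore
```

Green's law is a linear approximation and breaks down in the final nonlinear run-up stage, which is why the chapter's parametric study relies on full nonlinear long-wave modeling.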
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
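The AIC test described above can be sketched in its least-squares form. The parameter counts below are placeholders, not the values used in the W-phase implementation; a second point source is retained only when the misfit reduction outweighs the penalty for the extra parameters:

```python
import math

def aic_least_squares(rss, n_obs, k_params):
    """AIC for a Gaussian least-squares fit: AIC = n*ln(RSS/n) + 2k."""
    return n_obs * math.log(rss / n_obs) + 2 * k_params

def select_model(rss_single, rss_double, n_obs, k_single=10, k_double=20):
    """Pick the single- or double-point-source model by the lower AIC.
    k_single/k_double are illustrative parameter counts, not the paper's."""
    aic1 = aic_least_squares(rss_single, n_obs, k_single)
    aic2 = aic_least_squares(rss_double, n_obs, k_double)
    return "double" if aic2 < aic1 else "single"

# A large misfit reduction favors the double source; a marginal one does not.
choice_big_gain = select_model(100.0, 50.0, n_obs=1000)   # "double"
choice_marginal = select_model(100.0, 99.0, n_obs=100)    # "single"
```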
NASA Astrophysics Data System (ADS)
Meng, L.; Zhou, L.; Liu, J.
2013-12-01
The April 20, 2013 Ms 7.0 earthquake in Lushan, Sichuan province, China, occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Baoxing and Lushan, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated source spectral parameters, and estimated near-fault strong ground motion based on Brune's circular crack model. A dynamical composite source model (DCSM) was then developed to simulate near-fault strong ground motion with associated fault rupture properties at Baoxing and Lushan, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we described the intensity distribution of the Lushan earthquake. The simulated maximum intensity is IX, and the region of intensity VII and above covers almost 16,000 km2, which is consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. The estimation methods based on empirical relationships and the numerical modeling developed in this study thus have broad application to strong ground motion prediction and intensity estimation, both for earthquake rescue purposes and for understanding the earthquake source process. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
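The Brune circular model mentioned above ties seismic moment, corner frequency, and stress drop together through the standard relations of Brune (1970); the shear-wave speed and example numbers below are illustrative assumptions, not values from this study:

```python
import math

def brune_source_radius(fc_hz, beta_m_s=3500.0):
    """Brune (1970) source radius: r = 2.34 * beta / (2*pi*fc), in meters."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop(m0_nm, fc_hz, beta_m_s=3500.0):
    """Stress drop of a circular crack: d_sigma = (7/16) * M0 / r^3, in Pa."""
    r = brune_source_radius(fc_hz, beta_m_s)
    return 7.0 * m0_nm / (16.0 * r ** 3)

# Example: an Mw ~7.0 event (M0 ~ 3.5e19 N·m) with a 0.08 Hz corner frequency
ds = stress_drop(3.5e19, 0.08)   # on the order of a few MPa
```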
NASA Astrophysics Data System (ADS)
Vater, Stefan; Behrens, Jörn
2017-04-01
Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios could be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against the recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source modeling scheme by adopting a nonparametric co-regionalization algorithm originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, particularly focusing on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
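The 1-point/2-point framework can be sketched on a toy slip profile: 1-point statistics describe the marginal distribution of a source parameter (mean, standard deviation), while 2-point statistics describe its spatial coherence (here a simple along-strike autocorrelation; the actual scheme works with cross-correlations among several source parameters):

```python
import numpy as np

def one_point_stats(slip):
    """1-point statistics: marginal moments of the slip field."""
    return float(np.mean(slip)), float(np.std(slip))

def two_point_autocorr(slip, max_lag):
    """2-point statistics: normalized autocorrelation of a 1-D slip profile,
    a stand-in for the full spatial-coherence analysis."""
    s = slip - np.mean(slip)
    var = float(np.dot(s, s))
    out = [1.0]
    for lag in range(1, max_lag + 1):
        out.append(float(np.dot(s[:-lag], s[lag:]) / var))
    return out

rng = np.random.default_rng(0)
# Smooth white noise to mimic a spatially correlated slip profile
profile = np.convolve(rng.normal(size=200), np.ones(10) / 10, mode="same")
mean_slip, std_slip = one_point_stats(profile)
acf = two_point_autocorr(profile, 5)   # acf[0] == 1.0, decaying with lag
```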
NASA Astrophysics Data System (ADS)
Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.
2017-12-01
Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, that shows variations in earthquake frequency over time-scales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area based source models and smoothed seismicity models) is integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short-term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. 
Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
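Earthquake recurrence in a PSHA source model of the kind described above is commonly parameterized by the Gutenberg-Richter relation; a minimal sketch, using Aki's (1965) maximum-likelihood b-value with Utsu's binning correction (synthetic catalog values, not Australian data):

```python
import math

def gutenberg_richter_b(magnitudes, m_min, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with binning correction:
    b = log10(e) / (mean(M) - (m_min - dm/2)), for M >= completeness m_min."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

def annual_rate(a_value, b_value, m):
    """G-R recurrence: log10 N(>=m) = a - b*m, as events per year."""
    return 10.0 ** (a_value - b_value * m)

# A catalog whose mean magnitude sits log10(e) above the corrected cutoff has b = 1
b = gutenberg_richter_b([3.3843] * 100, m_min=3.0)
rate_m5 = annual_rate(4.0, b, 5.0)   # ~0.1 events/yr above M5 for a = 4
```

Time-dependent non-homogeneous Poisson models replace the constant annual rate with a rate function of time, but reduce to this stationary form when the rate is constant.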
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting the GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with a peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
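Geodetic slip inversions of this kind are, once the Green's functions are computed from the dislocation model, linear least-squares problems; a minimal damped-inversion sketch on a toy two-patch fault (the Green's function values are hypothetical, not from the Chengkung model):

```python
import numpy as np

def invert_slip(G, d, damping=0.1):
    """Damped linear least squares for fault slip:
    minimize ||G m - d||^2 + damping^2 ||m||^2 (Tikhonov regularization)."""
    n_params = G.shape[1]
    G_aug = np.vstack([G, damping * np.eye(n_params)])
    d_aug = np.concatenate([d, np.zeros(n_params)])
    m, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)
    return m

# Toy problem: 2 slip patches observed at 3 GPS stations
G = np.array([[1.0, 0.2],
              [0.3, 1.1],
              [0.5, 0.4]])       # hypothetical Green's functions (m of data per m of slip)
true_slip = np.array([1.2, 0.8])  # meters
d = G @ true_slip                 # noise-free synthetic displacements
recovered = invert_slip(G, d, damping=0.01)
```

The damping term stabilizes the inversion when patches are poorly resolved, at the cost of biasing slip toward zero; real inversions also add smoothing and positivity constraints.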
Seismic hazard analysis for Jayapura city, Papua
NASA Astrophysics Data System (ADS)
Robiana, R.; Cipta, A.
2015-04-01
Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model for the New Guinea Trench subduction zone (North Papuan Thrust); fault models for the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors derived from a geomorphological approach are corrected with measurement data related to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
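The probability levels quoted in hazard maps like these follow from the Poisson (time-independent) occurrence assumption, under which an exceedance probability p over an exposure time t implies a return period T = -t / ln(1 - p):

```python
import math

def return_period(p_exceed, t_years):
    """Return period implied by exceedance probability p in t years,
    assuming Poissonian (time-independent) occurrence."""
    return -t_years / math.log(1.0 - p_exceed)

def prob_exceed(t_years, rp_years):
    """Probability of at least one exceedance in t years: 1 - exp(-t/T)."""
    return 1.0 - math.exp(-t_years / rp_years)

# The common building-code level, 10% in 50 years, corresponds to a ~475-year
# return period; 10% in 500 years corresponds to a ~4,746-year return period.
rp_code = return_period(0.10, 50.0)
rp_500 = return_period(0.10, 500.0)
```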
A rapid estimation of tsunami run-up based on finite fault models
NASA Astrophysics Data System (ADS)
Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.
2014-12-01
Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, i.e., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.
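The Fuentes et al. (2013) solution is not reproduced here; as an illustrative stand-in for analytic run-up estimation, the classical Synolakis (1987) run-up law for a non-breaking solitary wave on a plane beach shows the same spirit of trading off accuracy for speed (the wave height, depth, and slope below are hypothetical):

```python
import math

def synolakis_runup(wave_height_m, depth_m, beach_slope_deg):
    """Synolakis (1987) run-up law for a non-breaking solitary wave on a
    plane beach: R/d = 2.831 * sqrt(cot(beta)) * (H/d)^(5/4)."""
    cot_beta = 1.0 / math.tan(math.radians(beach_slope_deg))
    return depth_m * 2.831 * math.sqrt(cot_beta) * (wave_height_m / depth_m) ** 1.25

# A 2 m offshore wave in 50 m of water approaching a 10-degree beach:
r = synolakis_runup(2.0, 50.0, 10.0)   # ~6 m of run-up
```

The formula is valid only for non-breaking waves; for gentle slopes or large wave heights the wave breaks and the law overestimates run-up, which is one reason full analytic treatments for realistic subduction-zone geometries are needed.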
NASA Astrophysics Data System (ADS)
OpršAl, Ivo; FäH, Donat; Mai, P. Martin; Giardini, Domenico
2005-04-01
The Basel earthquake of 18 October 1356 is considered one of the most serious earthquakes in Europe in recent centuries (I0 = IX, M ≈ 6.5-6.9). In this paper we present ground motion simulations for earthquake scenarios for the city of Basel and its vicinity. The numerical modeling combines the finite extent pseudodynamic and kinematic source models with complex local structure in a two-step hybrid three-dimensional (3-D) finite difference (FD) method. The synthetic seismograms are accurate in the frequency band 0-2.2 Hz. The 3-D FD is a linear explicit displacement formulation using an irregular rectangular grid including topography. The finite extent rupture model is adjacent to the free surface because the fault has been recognized through trenching on the Reinach fault. We test two source models reminiscent of past earthquakes (the 1999 Athens and the 1989 Loma Prieta earthquake) to represent Mw ≈ 5.9 and Mw ≈ 6.5 events that occur approximately to the south of Basel. To compare the effect of the same wave field arriving at the site from other directions, we considered the same sources placed east and west of the city. The local structural model is determined from the area's recently established P and S wave velocity structure and includes topography. The selected earthquake scenarios show strong ground motion amplification with respect to a bedrock site, which is in contrast to previous 2-D simulations for the same area. In particular, we found that the edge effects from the 3-D structural model depend strongly on the position of the earthquake source within the modeling domain.
New perspectives on self-similarity for shallow thrust earthquakes
NASA Astrophysics Data System (ADS)
Denolle, Marine A.; Shearer, Peter M.
2016-09-01
Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment (M0 ∝ T2^5) and weakly with depth, and can be interpreted as an expression of starting and stopping phases, of a pulse-like rupture, or of a dynamic weakening process. Estimated stress drops and scaled energy (ratio of radiated energy over seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
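A double-corner-frequency spectrum with an intermediate f^-1 falloff can be sketched with one common parameterization (a product of two first-order corners; not necessarily the exact functional form fit in the study, and the values below are hypothetical):

```python
import numpy as np

def double_corner_spectrum(f, m0, f1, f2):
    """Moment-rate amplitude spectrum with two corner frequencies: flat below
    f1, ~f^-1 between f1 and f2, and ~f^-2 above f2."""
    return m0 / ((1.0 + f / f1) * (1.0 + f / f2))

f = np.logspace(-3, 1, 200)                      # 0.001-10 Hz
spec = double_corner_spectrum(f, m0=1e20, f1=0.02, f2=0.5)
# Asymptotics: the low-frequency plateau recovers M0, and the spectrum falls
# roughly one decade per decade between the corners, two decades above f2.
s5, s10 = double_corner_spectrum(5.0, 1e20, 0.02, 0.5), double_corner_spectrum(10.0, 1e20, 0.02, 0.5)
```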
The effect of Earth's oblateness on the seismic moment estimation from satellite gravimetry
NASA Astrophysics Data System (ADS)
Dai, Chunli; Guo, Junyi; Shang, Kun; Shum, C. K.; Wang, Rongjiang
2018-05-01
Over the last decade, satellite gravimetry, as a new class of geodetic sensors, has been increasingly studied for its use in improving source model inversion for large undersea earthquakes. When these satellite-observed gravity change data are used to estimate source parameters such as seismic moment, the forward modelling of earthquake seismic deformation is crucial because imperfect modelling could lead to errors in the resolved source parameters. Here, we discuss several modelling issues and focus on one modelling deficiency resulting from the upward continuation of gravity change considering the Earth's oblateness, which is ignored in contemporary studies. For the low degree (degree 60) time-variable gravity solutions from Gravity Recovery and Climate Experiment mission data, the model-predicted gravity change would be overestimated by 9 per cent for the 2011 Tohoku earthquake, and about 6 per cent for the 2010 Maule earthquake. For high degree gravity solutions, the model-predicted gravity change at degree 240 would be overestimated by 30 per cent for the 2011 Tohoku earthquake, resulting in the seismic moment to be systematically underestimated by 30 per cent.
NASA Astrophysics Data System (ADS)
Gok, R.; Hutchings, L.
2004-05-01
We test a means to predict strong ground motion using the Mw = 7.4 and Mw = 7.2 1999 Izmit and Duzce, Turkey, earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed a simultaneous inversion for hypocenter locations and the three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 with 2,500 events. We also obtained source moment, corner frequency, and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small- to moderate-sized earthquake (M < 4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
Toward Broadband Source Modeling for the Himalayan Collision Zone
NASA Astrophysics Data System (ADS)
Miyake, H.; Koketsu, K.; Kobayashi, H.; Sharma, B.; Mishra, O. P.; Yokoi, T.; Hayashida, T.; Bhattarai, M.; Sapkota, S. N.
2017-12-01
The Himalayan collision zone is characterized by a distinctive tectonic setting. There are earthquakes with low-angle thrust faulting as well as continental outer-rise earthquakes. Recently, several historical earthquakes have been identified by active fault surveys [e.g., Sapkota et al., 2013]. We here investigate source scaling for the Himalayan collision zone as a fundamental factor in constructing source models for seismic hazard assessment. Regarding source scaling for collision zones, Yen and Ma [2011] reported the subduction-zone source scaling in Taiwan and pointed out non-self-similar scaling due to the finite crustal thickness. On the other hand, current global analyses of stress drop do not show abnormal values for continental collision zones [e.g., Allmann and Shearer, 2009]. Based on compiled profiles of the finite thickness of the crust and dip angle variations, we discuss whether bending exists in the Himalayan source scaling and the implications for stress drop, which will control strong ground motions. Due to quite low-angle dip faulting, recent earthquakes in the Himalayan collision zone lie at the upper bound of the current source scaling of rupture area vs. seismic moment (< Mw 8.0) and do not show significant bending of the source scaling. Toward broadband source modeling for ground motion prediction, we perform empirical Green's function simulations for the 2009 Bhutan and 2015 Gorkha earthquake sequences to quantify both long- and short-period source spectral levels.
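The self-similar (non-bending) branch of the rupture area vs. seismic moment scaling discussed above follows from a constant stress drop; a minimal sketch, assuming a circular-crack geometry constant and a generic 3 MPa stress drop (illustrative values, not this study's regression):

```python
import math

def moment_from_area(area_km2, stress_drop_mpa=3.0, c=2.44):
    """Constant-stress-drop scaling M0 = (d_sigma / c) * A^(3/2), with
    c ~ 2.44 for a circular crack (c depends on rupture geometry)."""
    area_m2 = area_km2 * 1e6
    return (stress_drop_mpa * 1e6 / c) * area_m2 ** 1.5

def mw_from_moment(m0_nm):
    """Moment magnitude from seismic moment in N·m (Hanks and Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# A 10,000 km^2 rupture at 3 MPa stress drop corresponds to roughly Mw 8.0
mw = mw_from_moment(moment_from_area(10000.0))
```

Bending of the scaling appears when the rupture width saturates (e.g., at the seismogenic or crustal thickness), so that area grows only along strike and M0 no longer follows A^(3/2).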
NASA Astrophysics Data System (ADS)
Stevens, Victoria
2017-04-01
The 2015 Gorkha-Nepal M7.8 earthquake (hereafter known simply as the Gorkha earthquake) highlights the seismic risk in Nepal, allows better characterization of the geometry of the Main Himalayan Thrust (MHT), and enables comparison of recorded ground motions with predicted ground motions. These new data, together with recent paleoseismic studies and geodetic-based coupling models, allow for good parameterization of the fault characteristics. Other faults in Nepal remain less well studied. Unlike previous PSHA studies in Nepal, which are exclusively area-based, we use a mix of faults and areas to describe six seismic sources in Nepal. For each source, the Gutenberg-Richter a and b values are found and the maximum magnitude earthquake estimated, using a combination of earthquake catalogs, moment conservation principles, and similarities to other tectonic regions. The MHT and Karakoram fault are described as fault sources, whereas four other sources - normal faulting in N-S trending grabens of northern Nepal, strike-slip faulting in both eastern and western Nepal, and background seismicity - are described as area sources. We use OpenQuake (http://openquake.org/) to carry out the analysis, and peak ground acceleration (PGA) at 2 and 10% chance of exceedance in 50 years is found for Nepal, along with hazard curves at various locations. We compare this PSHA model with previous area-based models of Nepal. The Main Himalayan Thrust is the principal seismic hazard in Nepal, so we study the effects of changing several parameters associated with this fault. We compare ground shaking predicted from the various fault geometries suggested by the Gorkha earthquake with each other, and with a simple model of a flat fault. We also show the results of incorporating a coupling model based on geodetic data and microseismicity, which limits the down-dip extent of rupture.
No ground-motion prediction equations (GMPEs) have been developed specifically for Nepal, so we compare the results of standard GMPEs, applied to an earthquake scenario representing the Gorkha earthquake, with actual data from the Gorkha earthquake itself. The Gorkha earthquake also highlighted the importance of basin, topographic, and directivity effects, and of the location of high-frequency sources, in influencing ground motion. Future study aims at incorporating the above, together with consideration of the fault-rupture history and its influence on the location and timing of future earthquakes.
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.
2008-01-01
We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI unit (16% in peak velocity). Discrepancies with observations arise from errors in the source models and geologic structure. The consistency of the synthetic waveforms across the wave-propagation codes for a given source model suggests that the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled, and it is assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, the longitude, latitude, depth, rupture time, seismic moment, and corner frequency of the subevent. The finite size of the subevent can be taken into account in the model because the corner frequency of the subevent, which is inversely proportional to the subevent length, is included. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model.
The results were then compared with those of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes, and much smaller for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composite of the two horizontal components and are smoothed with a Parzen window with a bandwidth of 0.05 Hz.
NASA Astrophysics Data System (ADS)
Melgar, D.; Bock, Y.; Crowell, B. W.; Haase, J. S.
2013-12-01
Computation of predicted tsunami wave heights and runup in the regions adjacent to large earthquakes immediately after rupture initiation remains a challenging problem. Limitations of traditional seismological instrumentation in the near field, which cannot be objectively employed for real-time inversions, and the non-uniqueness of source inversion results are major concerns for tsunami modelers. Employing near-field seismic, GPS, and wave gauge data from the Mw 9.0 2011 Tohoku-oki earthquake, we test the capacity of static finite-fault slip models obtained from newly developed algorithms to produce reliable tsunami forecasts. First we demonstrate the ability of seismogeodetic source models, determined from combined land-based GPS and strong-motion seismometers, to forecast near-source tsunamis within ~3 minutes of earthquake origin time (OT). We show that these models, based on land-based sensors only, tend to underestimate the tsunami but are good enough to provide a realistic first warning. We then demonstrate that rapid ingestion of offshore shallow-water (100-1000 m) wave gauge data significantly improves the model forecasts and possible warnings. We ingest data from two near-source ocean-bottom pressure sensors and six GPS buoys into the earthquake source inversion process. Tsunami Green functions (tGFs) are generated using the GeoClaw package, a benchmarked finite-volume code with adaptive mesh refinement. These tGFs are used for a joint inversion with the land-based data and substantially improve the earthquake source and tsunami forecast. Model skill is assessed by detailed comparisons of the simulation output to more than 2000 tsunami runup survey measurements collected after the event. We update the source model and tsunami forecast and warning at 10-minute intervals.
We show that by 20 minutes after OT the tsunami is well predicted, with a high variance reduction relative to the survey data, and that by ~30 minutes a model that can be considered final is achieved, since little change is observed afterwards. This is an indirect approach to tsunami warning: it relies on automatic determination of the earthquake source prior to tsunami simulation. It is more robust than ad hoc approaches because it relies on computation of a finite-extent centroid moment tensor to objectively determine the style of faulting and the fault-plane geometry on which to launch the heterogeneous static slip inversion. Operator interaction and physical assumptions are minimal. Thus, the approach can provide the initial conditions for tsunami simulation (seafloor motion) irrespective of the type of earthquake source, while relying heavily on oceanic wave gauge measurements for source determination. It reliably distinguishes among strike-slip, normal, and thrust faulting events, all of which have been observed recently to occur in subduction zones and pose distinct tsunami hazards.
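The joint inversion and skill-assessment steps can be caricatured as a weighted linear least-squares problem that stacks land geodetic Green's functions with the tsunami Green functions (tGFs), plus a variance-reduction metric. Matrix shapes, the weighting scheme, and the unregularized solver below are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def joint_slip_inversion(g_land, d_land, g_tsu, d_tsu, w_tsu=1.0):
    """Least-squares slip inversion combining land data with tGFs.

    g_land, g_tsu: Green's function matrices (rows: observations,
    columns: fault patches); w_tsu weights the wave-gauge data
    relative to the land-based data.
    """
    g = np.vstack([g_land, w_tsu * g_tsu])
    d = np.concatenate([d_land, w_tsu * d_tsu])
    slip, *_ = np.linalg.lstsq(g, d, rcond=None)
    return slip

def variance_reduction(obs, syn):
    """Skill metric: 1 - ||obs - syn||^2 / ||obs||^2 (1 = perfect fit)."""
    return 1.0 - np.sum((obs - syn) ** 2) / np.sum(obs ** 2)
```

In practice such inversions add smoothing/positivity constraints and time-dependent tGFs; the sketch only shows how adding a second, weighted data block changes the recovered slip.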
Developing a Near Real-time System for Earthquake Slip Distribution Inversion
NASA Astrophysics Data System (ADS)
Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen
2016-04-01
Advances in observational and computational seismology in the past two decades have enabled completely automatic, real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground-motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip-distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip-distribution inversions, a database of strain Green tensors (SGTs) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw ~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.
Earthquake and submarine landslide tsunamis: how can we tell the difference? (Invited)
NASA Astrophysics Data System (ADS)
Tappin, D. R.; Grilli, S. T.; Harris, J.; Geller, R. J.; Masterlark, T.; Kirby, J. T.; Ma, G.; Shi, F.
2013-12-01
Several major recent events have demonstrated the tsunami hazard from submarine mass failures (SMFs), i.e., submarine landslides. In 1992, a landslide triggered by a small earthquake generated a tsunami over 25 meters high on Flores Island. In 1998, a tsunami up to 15 meters high, generated by a sediment slump triggered by another small earthquake, devastated the local coast of Papua New Guinea, killing 2,200 people. It was this event that led to the recognition of the importance of marine geophysical data in mapping the architecture of seabed sediment failures, which could then be used in modeling and validating the tsunami-generating mechanism. Seabed mapping of the 2004 Indian Ocean earthquake rupture zone demonstrated, however, that large, if not great, earthquakes do not necessarily cause major seabed failures, and that along some convergent margins frequent earthquakes result in smaller sediment failures that are not tsunamigenic. Older events, such as Messina, 1908, Makran, 1945, Alaska, 1946, and Java, 2006, all have the characteristics of SMF tsunamis, but for these an SMF source has not been proven. When the 2011 tsunami struck Japan, it was generally assumed to have been directly generated by the earthquake. The earthquake has some unusual characteristics, such as a shallow rupture that is somewhat slow, but it is not a 'tsunami earthquake.' A number of simulations of the tsunami based on an earthquake source have been published, but in general the best results are obtained by adjusting fault rupture models to tsunami wave gauge or other data; to the extent that such models can reproduce the recorded tsunami data, this demonstrates self-consistency rather than validation. Here we consider some of the existing source models of the 2011 Japan event and present new tsunami simulations based on a combination of an earthquake source and an SMF mapped from offshore data.
We show that the multi-source tsunami agrees well with available tide gauge data, field observations, and wave data from offshore buoys, and that the SMF generated the large runups in the Sanriku region (northern Tohoku). Our new results for the 2011 Tohoku event suggest that care is required when using tsunami wave and tide gauge data both to model and to validate earthquake tsunami sources. They also suggest a potential pitfall in the use of tsunami waveform inversion from tide gauges and buoys to estimate the size and spatial characteristics of earthquake rupture: if the tsunami source has a significant SMF component, such studies may overestimate earthquake magnitude. Our seabed mapping identifies other large SMFs off Sanriku that have the potential to generate significant tsunamis and should be considered in future analyses of the tsunami hazard in Japan. The identification of two major SMF-generated tsunamis (PNG and Tohoku), especially one associated with an M 9 earthquake, is important in guiding future efforts at forecasting and mitigating the tsunami hazard from large megathrust-plus-SMF events, both in Japan and globally.
Stress Drop and Depth Controls on Ground Motion From Induced Earthquakes
NASA Astrophysics Data System (ADS)
Baltay, A.; Rubinstein, J. L.; Terra, F. M.; Hanks, T. C.; Herrmann, R. B.
2015-12-01
Induced earthquakes in the central United States pose a risk to local populations, but there is not yet agreement on how to portray their hazard. A large source of uncertainty in the hazard arises from ground-motion prediction, which depends on the magnitude and distance of the causative earthquake. However, ground-motion models for induced earthquakes may be very different from models previously developed for either the eastern or western United States. A key question is whether ground motions from induced earthquakes are similar to those from natural earthquakes, yet there is little history of natural events in the same region with which to compare the induced ground motions. To address these problems, we explore how earthquake source properties, such as stress drop and depth, affect the recorded ground motion of induced earthquakes. Typically, because stress drop increases with depth, ground-motion prediction equations model shallower events as having smaller ground motions at the same absolute hypocentral distance to the station. Induced earthquakes tend to occur at shallower depths than natural eastern US earthquakes and may also exhibit lower stress drops, which raises the question of how these two parameters interact to control ground motion. Can the ground motions of induced earthquakes be understood simply by scaling our known source-ground motion relations to account for the shallower depths or potentially smaller stress drops of these events, or is an inherently different mechanism in play? We study peak ground velocity (PGV) and peak ground acceleration (PGA) from induced earthquakes in Oklahoma and Kansas, recorded by USGS networks at source-station distances of less than 20 km, in order to model the source effects. We compare these records to those in the NGA-West2 database (primarily from California) as well as NGA-East, which covers the central and eastern United States and Canada.
Preliminary analysis indicates that the induced ground motions appear similar to those in the NGA-West2 database. However, once their shallower depths are taken into account, ground-motion behavior from induced events seems to fall between the NGA-West2 and NGA-East data, so we explore the control of stress drop and depth on ground motion in more detail.
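One way to frame the stress-drop question quantitatively is through the Brune source model, in which the high-frequency acceleration spectral level scales as stress drop to the 2/3 power (at fixed moment). The sketch below illustrates that scaling under standard Brune (1970) assumptions; it is not the authors' actual analysis, and the constants are the conventional textbook values.

```python
import numpy as np

def brune_corner_frequency(m0, stress_drop, beta=3500.0):
    """Brune corner frequency in Hz; m0 in N*m, stress_drop in Pa,
    beta (shear-wave speed) in m/s."""
    return 0.49 * beta * (stress_drop / m0) ** (1.0 / 3.0)

def hf_accel_level(m0, stress_drop):
    """High-frequency acceleration spectral plateau ~ (2*pi*fc)^2 * M0,
    i.e. proportional to stress_drop^(2/3) * M0^(1/3)."""
    fc = brune_corner_frequency(m0, stress_drop)
    return (2.0 * np.pi * fc) ** 2 * m0
```

For example, halving the stress drop at fixed moment reduces the high-frequency level by a factor of 0.5^(2/3) ≈ 0.63, which is the kind of source effect the study seeks to separate from depth effects.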
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering using so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
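As a minimal illustration of the time-series modelling idea, the snippet below fits a pure autoregressive model (a special case of the ARMAX family, with no exogenous or moving-average terms) to a signal by least squares. It is a toy, not the paper's adaptive-filter formulation, and the simulation parameters are arbitrary.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model:
    x[t] = a[0]*x[t-1] + ... + a[p-1]*x[t-p] + e[t].
    Returns the coefficient vector a."""
    # Column k holds the lag-(k+1) regressor x[t-1-k] for t = p..n-1.
    cols = [x[p - 1 - k : len(x) - 1 - k] for k in range(p)]
    design = np.column_stack(cols)
    a, *_ = np.linalg.lstsq(design, x[p:], rcond=None)
    return a
```

In an ARMAX setting one would add a moving-average error term and an exogenous input (e.g., a record from another site, for the site-amplification application), but the regression structure is the same.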
Seismic hazard analysis for Jayapura city, Papua
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robiana, R., E-mail: robiana-geo104@yahoo.com; Cipta, A.
Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model, from the New Guinea Trench subduction zone (North Papuan Thrust); fault models, derived from the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and seven background models to accommodate unknown earthquakes. Amplification factors obtained from a geomorphological approach are corrected with measurement data related to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D, and E, with amplification between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
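For reference, probabilistic hazard levels like the one quoted above are conventionally tied together by the Poisson relations linking annual exceedance rate, exposure time, and return period. The generic helpers below are standard PSHA arithmetic, not taken from this study; for instance, the widely used "10% in 50 years" level corresponds to a return period of about 475 years.

```python
import math

def exceedance_probability(annual_rate, t_years):
    """Poisson probability of at least one exceedance in t_years,
    given an annual exceedance rate."""
    return 1.0 - math.exp(-annual_rate * t_years)

def return_period(p, t_years):
    """Return period implied by probability p of exceedance in t_years."""
    return -t_years / math.log(1.0 - p)
```

Example: `return_period(0.1, 50)` evaluates to roughly 475 years, the return period behind most national hazard maps.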
NASA Astrophysics Data System (ADS)
Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.
2017-04-01
We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and for modelling tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of rupture parameters. Through this approach we can provide finite models of earthquake ruptures and thereby contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, for which we present the earthquake mechanism and rupture model and derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent and can be applied flexibly to a variety of geophysical problems.
This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
Three-dimensional ground-motion simulations of earthquakes for the Hanford area, Washington
Frankel, Arthur; Thorne, Paul; Rohay, Alan
2014-01-01
This report describes the results of ground-motion simulations of earthquakes using three-dimensional (3D) and one-dimensional (1D) crustal models conducted for the probabilistic seismic hazard assessment (PSHA) of the Hanford facility, Washington, under the Senior Seismic Hazard Analysis Committee (SSHAC) guidelines. The first portion of this report demonstrates that the 3D seismic velocity model for the area produces synthetic seismograms with characteristics (spectral response values, duration) that better match those of the observed recordings of local earthquakes, compared to a 1D model with horizontal layers. The second part of the report compares the response spectra of synthetics from 3D and 1D models for moment magnitude (M) 6.6–6.8 earthquakes on three nearby faults and for a dipping plane-wave source meant to approximate regional S-waves from a Cascadia great earthquake. The 1D models are specific to each site used for the PSHA. The use of the 3D model produces spectral response accelerations at periods of 0.5–2.0 seconds as much as a factor of 4.5 greater than those from the 1D models for the crustal fault sources. The spectral accelerations of the 3D synthetics for the Cascadia plane-wave source are as much as a factor of 9 greater than those from the 1D models. The differences between the spectral accelerations for the 3D and 1D models are most pronounced for sites with thicker supra-basalt sediments, for earthquakes on the Rattlesnake Hills fault, and for the Cascadia plane-wave source.
NASA Astrophysics Data System (ADS)
Rolland, Lucie M.; Vergnolle, Mathilde; Nocquet, Jean-Mathieu; Sladen, Anthony; Dessa, Jean-Xavier; Tavakoli, Farokh; Nankali, Hamid Reza; Cappa, Frédéric
2013-06-01
It has previously been suggested that ionospheric perturbations triggered by large dip-slip earthquakes might offer additional source parameter information compared to the information gathered from land observations. Based on 3D modeling of GPS- and GLONASS-derived total electron content signals recorded during the 2011 Van earthquake (thrust, intra-plate event, Mw = 7.1, Turkey), we confirm that coseismic ionospheric signals do contain important information about the earthquake source, namely its slip mode. Moreover, we show that part of the ionospheric signal (initial polarity and amplitude distribution) is not related to the earthquake source, but is instead controlled by the geomagnetic field and the geometry of the Global Navigation Satellite System satellite constellation. Ignoring these non-tectonic effects would lead to an incorrect description of the earthquake source. Thus, our work emphasizes the added caution that should be exercised when analyzing ionospheric signals for earthquake source studies.
Anomalies of rupture velocity in deep earthquakes
NASA Astrophysics Data System (ADS)
Suzuki, M.; Yagi, Y.
2010-12-01
Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual events are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated from seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear-wave velocity, a considerably wider range than for shallow earthquakes. This uncertainty in seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back-projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back-projection method to teleseismic P waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the rupture process for this set of recent deep earthquakes, we found that rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km.
This is consistent with the depth variation of deep seismicity, which peaks between about 530 and 600 km, where fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
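The core of the back-projection method used above is a simple delay-and-stack operation: each station's P waveform is shifted by the predicted travel time from a candidate source point, and coherent energy identifies the rupture location. The sketch below is a generic toy (hypothetical grid, travel times, and spike data), not the authors' processing chain.

```python
import numpy as np

def back_project(waveforms, dt, travel_times, stack_len):
    """Delay-and-stack back projection.

    waveforms: (n_sta, n_samp) array of station records.
    travel_times: (n_src, n_sta) predicted travel times in seconds
    from each candidate source point to each station.
    Returns the stacked beam power for each candidate source point.
    """
    n_src, n_sta = travel_times.shape
    power = np.zeros(n_src)
    for i in range(n_src):
        beam = np.zeros(stack_len)
        for j in range(n_sta):
            i0 = int(round(travel_times[i, j] / dt))  # align on predicted arrival
            seg = waveforms[j, i0 : i0 + stack_len]
            beam[: len(seg)] += seg
        power[i] = np.sum(beam ** 2)  # coherent stacks yield high power
    return power
```

Sliding the stack window in time turns this spatial imaging into a movie of the rupture, which is how rupture velocity is read off in studies like this one.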
NASA Astrophysics Data System (ADS)
Ichinose, Gene Aaron
The source parameters for eastern California and western Nevada earthquakes are estimated from regionally recorded seismograms using moment tensor inversion. We use the point-source approximation and fit the seismograms at long periods. We generated a moment tensor catalog for Mw > 4.0 since 1997 and Mw > 5.0 since 1990. The catalog includes centroid depths, seismic moments, and focal mechanisms. The regions with the most moderate-sized earthquakes in the last decade were aftershock zones located in Eureka Valley, Double Spring Flat, Coso, Ridgecrest, Fish Lake Valley, and Scotty's Junction. The remaining moderate-sized earthquakes were distributed across the region. The 1993 (Mw 6.0) Eureka Valley earthquake occurred in the Eastern California Shear Zone. Careful aftershock relocations were used to resolve structure from aftershock clusters. The mainshock appears to have ruptured along the western side of the Last Chance Range on a fault plane dipping 30° to 60° west, consistent with previous geodetic modeling. We estimate the source parameters for aftershocks at source-receiver distances of less than 20 km using waveform modeling. The relocated aftershocks and waveform modeling results do not indicate any significant evidence of low-angle faulting (dips > 30°). The results did reveal deformation along vertical faults within the hanging-wall block, consistent with observed surface rupture along the Saline Range above the dipping fault plane. The 1994 (Mw 5.8) Double Spring Flat earthquake occurred along the eastern Sierra Nevada between overlapping normal faults. Aftershock migration and cross-fault triggering occurred over the following two years, producing seventeen Mw > 4 aftershocks. The source parameters for the largest aftershocks were estimated from regionally recorded seismograms using moment tensor inversion.
We estimate the source parameters for two moderate-sized earthquakes that occurred near Reno, Nevada: the 1995 (Mw 4.4) Border Town and the 1998 (Mw 4.7) Incline Village earthquakes. We test how such stress interactions affected a cluster of six large earthquakes (Mw 6.6 to 7.5) between 1915 and 1954 within the Central Nevada Seismic Belt. We compute the static stress changes for these earthquakes using dislocation models based on the location and amount of surface rupture. (Abstract shortened by UMI.)
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
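A toy version of the kind of quantitative waveform-comparison tooling the SIV project applies when ranking inversion solutions: score each candidate model by its average zero-lag normalized cross-correlation against the observed traces. This single-number score is a deliberate simplification; the project's actual statistical comparison tools are richer, and the function names here are hypothetical.

```python
import numpy as np

def waveform_cc(obs, syn):
    """Zero-lag normalized cross-correlation between an observed and
    a synthetic trace (1.0 = identical shape)."""
    return np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))

def rank_models(obs_traces, model_traces):
    """Average the CC score over all stations for each candidate model.

    obs_traces: list of observed traces (one per station).
    model_traces: list of models, each a list of synthetic traces
    matching obs_traces. Returns (ranking, scores), best model first.
    """
    scores = [np.mean([waveform_cc(o, s) for o, s in zip(obs_traces, m)])
              for m in model_traces]
    return np.argsort(scores)[::-1], scores
```

Blind-test benchmarks like SIV's then compare such rankings against the known input rupture to see which inversion approaches recover it most faithfully.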
Earthquake Forecasting in Northeast India using Energy Blocked Model
NASA Astrophysics Data System (ADS)
Mohapatra, A. K.; Mohanty, D. K.
2009-12-01
In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) during the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong Plateau earthquake (Mw 8.7), the 1934 Bihar-Nepal earthquake (Mw 8.3), and the 1950 Upper Assam earthquake (Mw 8.7), signifies the possibility of future great earthquakes in this region. The regional seismicity map for the study region is prepared by plotting earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the India Meteorological Department (IMD). Based on geology, tectonics, and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by a subduction zone developed at the junction of the Indian and Eurasian plates; it shows dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis; it was affected by great earthquakes such as the 1950 Assam, 1934 Bihar, and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults such as the Dauki fault and exhibits its own prominent tectonic features; its seismicity and hazard potential are distinct from those of the Himalayan thrust. Using the energy-blocked model of Tsuboi, major earthquakes are forecast for each source zone. In this model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked and can be used for forecasting major earthquakes.
The proposed process provides a more consistent model of gradual strain accumulation and non-uniform release through large earthquakes and can be applied in the evaluation of seismic risk. The cumulative seismic energy released by major earthquakes over the 110-year period from 1897 to 2007 is calculated and plotted for each zone, giving a characteristic curve for each zone. Each curve is irregular, reflecting occasional high activity. The maximum earthquake energy available at a particular time in a given area is given by S; the difference between this theoretical upper limit and the cumulative energy released up to that time is used to estimate the maximum magnitude of a future earthquake. The energy blocked in the three source zones, available as a supply for potential earthquakes in due course of time, is 1.35×10^17 J, 4.25×10^17 J, and 0.12×10^17 J for source zones 1, 2, and 3, respectively. The predicted maximum magnitudes (mmax) obtained for the source zones AYZ, HZ, and SPZ are 8.2, 8.6, and 8.4, respectively. This study is also consistent with previous predictions by other workers.
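The conversion from blocked energy to a maximum magnitude can be illustrated with the standard Gutenberg-Richter energy relation, log10(E) = 1.5·Mw + 4.8 (E in joules). This is an assumed relation, not necessarily the exact one used in the study, but inverting it for the quoted blocked energies of zones 1 and 2 reproduces mmax values close to the quoted 8.2 and 8.6.

```python
import math

def mw_from_energy(e_joules):
    """Invert the Gutenberg-Richter energy relation
    log10(E) = 1.5*Mw + 4.8 (E in joules) for magnitude."""
    return (math.log10(e_joules) - 4.8) / 1.5
```

For example, `mw_from_energy(1.35e17)` gives about 8.22 and `mw_from_energy(4.25e17)` about 8.55, consistent with the zone 1 and zone 2 predictions above.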
Probabilistic Seismic Hazard Maps for Ecuador
NASA Astrophysics Data System (ADS)
Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.
2017-12-01
A probabilistic seismic hazard study is conducted for Ecuador, a country facing high seismic hazard from both megathrust subduction earthquakes and shallow crustal moderate-to-large earthquakes. Building on the knowledge produced in recent years on historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area-source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016); a slightly different segmentation is proposed for the subduction interface with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The resulting hazard maps highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetic slip rates. Consequently, a fault model is derived, including faults with an earthquake recurrence model inferred from geological and/or geodetic slip-rate estimates. The geodetic slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014); assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties in the hazard are highest.
A Model For Rapid Estimation of Economic Loss
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2012-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" workflow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons, and these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised, and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
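A rapid loss model of the kind described, regressing loss on a ground-motion measure and socioeconomic exposure, might be sketched as a log-linear least-squares fit; the calibration data and functional form below are hypothetical illustrations, not the authors' actual model:

```python
import numpy as np

# Hypothetical calibration data: peak ground acceleration (g), exposed
# population, per-capita GDP (USD), and observed loss (USD) for past events.
pga  = np.array([0.1, 0.2, 0.4, 0.6, 0.8])
pop  = np.array([1e5, 5e5, 2e5, 1e6, 3e5])
gdp  = np.array([2e3, 1e4, 5e3, 3e4, 8e3])
loss = np.array([1e6, 5e7, 2e7, 9e8, 6e7])

# Log-linear model: log L = a + b*log(PGA) + c*log(pop * gdp).
X = np.column_stack([np.ones_like(pga), np.log(pga), np.log(pop * gdp)])
coef, *_ = np.linalg.lstsq(X, np.log(loss), rcond=None)

def predict_loss(pga_val, pop_val, gdp_val):
    """Predicted economic loss (USD) for one site, from the fitted model."""
    x = np.array([1.0, np.log(pga_val), np.log(pop_val * gdp_val)])
    return float(np.exp(x @ coef))
```

Calibrating such coefficients against a global loss catalog is what would let the same functional form be applied worldwide.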
Petersen, M.D.; Dewey, J.; Hartzell, S.; Mueller, C.; Harmsen, S.; Frankel, A.D.; Rukstales, K.
2004-01-01
The ground motion hazard for Sumatra and the Malaysian peninsula is calculated in a probabilistic framework, using procedures developed for the US National Seismic Hazard Maps. We constructed regional earthquake source models and used standard published and modified attenuation equations to calculate peak ground acceleration at 2% and 10% probability of exceedance in 50 years for rock site conditions. We developed or modified earthquake catalogs and declustered these catalogs to include only independent earthquakes. The resulting catalogs were used to define four source zones that characterize earthquakes in four tectonic environments: subduction zone interface earthquakes, subduction zone deep intraslab earthquakes, strike-slip transform earthquakes, and intraplate earthquakes. The recurrence rates and sizes of historical earthquakes on known faults and across zones were also determined from this modified catalog. In addition to the source zones, our seismic source model considers two major faults that are known historically to generate large earthquakes: the Sumatran subduction zone and the Sumatran transform fault. Several published studies were used to describe earthquakes along these faults during historical and pre-historical time, as well as to identify segmentation models of faults. Peak horizontal ground accelerations were calculated using ground motion prediction relations that were developed from seismic data obtained from the crustal interplate environment, crustal intraplate environment, along the subduction zone interface, and from deep intraslab earthquakes. Most of these relations, however, have not been developed for large distances that are needed for calculating the hazard across the Malaysian peninsula, and none were developed for earthquake ground motions generated in an interplate tectonic environment that are propagated into an intraplate tectonic environment. 
For the interplate and intraplate crustal earthquakes, we have applied ground-motion prediction relations that are consistent with California (interplate) and India (intraplate) strong motion data that we collected for distances beyond 200 km. For the subduction zone equations, we recognized that the published relationships at large distances were not consistent with global earthquake data that we collected and modified the relations to be compatible with the global subduction zone ground motions. In this analysis, we have used alternative source and attenuation models and weighted them to account for our uncertainty in which model is most appropriate for Sumatra or for the Malaysian peninsula. The resulting peak horizontal ground accelerations for 2% probability of exceedance in 50 years range from over 100% g to about 10% g across Sumatra and generally less than 20% g across most of the Malaysian peninsula. The ground motions at 10% probability of exceedance in 50 years are typically about 60% of the ground motions derived for a hazard level at 2% probability of exceedance in 50 years. The largest contributors to hazard are from the Sumatran faults.
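The 2%- and 10%-in-50-years hazard levels quoted above correspond, under the usual Poisson assumption, to fixed mean return periods; a minimal sketch of the conversion:

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period for exceedance probability p in t years,
    assuming Poisson occurrence: p = 1 - exp(-t / T)."""
    return -t_years / math.log(1.0 - p_exceed)

# 2% and 10% in 50 years give the familiar ~2475- and ~475-year return periods.
T2475 = return_period(0.02, 50.0)
T475 = return_period(0.10, 50.0)
```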
Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2011-12-01
Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. 
Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.
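The degree-two (second) moments and the directivity ratio of McGuire et al. (2002) can be computed from a discretized space-time source function as sketched below; the implementation details are an illustration under standard definitions, not the authors' code:

```python
import numpy as np

def second_moments(x, t, w):
    """Centroid-relative moments of a discretized space-time source:
    x is (n, 3) subevent positions, t is (n,) times, w is the moment
    release of each subevent."""
    w = w / w.sum()
    x0 = w @ x                      # spatial centroid
    t0 = w @ t                      # temporal centroid
    dx, dt = x - x0, t - t0
    mu_xx = (w[:, None, None] * dx[:, :, None] * dx[:, None, :]).sum(0)
    mu_xt = (w[:, None] * dx * dt[:, None]).sum(0)
    mu_tt = float(w @ dt**2)
    return mu_xx, mu_xt, mu_tt

def directivity_ratio(mu_xx, mu_xt, mu_tt):
    """Directivity ratio in the style of McGuire et al. (2002): average
    rupture velocity times characteristic duration over characteristic
    length; ~1 for unilateral rupture, ~0 for bilateral."""
    L_c = 2.0 * np.sqrt(np.linalg.eigvalsh(mu_xx).max())  # char. length
    tau_c = 2.0 * np.sqrt(mu_tt)                          # char. duration
    v0 = mu_xt / mu_tt                                    # average velocity
    return float(np.linalg.norm(v0) * tau_c / L_c)

# Unilateral rupture along x at unit speed: directivity ratio ~ 1.
x = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
t = np.array([0.0, 1.0, 2.0, 3.0])
ratio = directivity_ratio(*second_moments(x, t, np.ones(4)))
```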
Using Socioeconomic Data to Calibrate Loss Estimates
NASA Astrophysics Data System (ADS)
Holliday, J. R.; Rundle, J. B.
2013-12-01
One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" workflow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons, and these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised, and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate the seismic hazard from long-period ground motions associated with Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage through strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan, Mexico, earthquake and the 2003 Tokachi-oki, Japan, earthquake), which are particularly amplified in sedimentary basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model that includes the source parameters necessary for reproducing strong ground motions; the parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering a range of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined from 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects; these parameters are important because our preliminary simulations are strongly affected by rupture directivity. For simulating seismic wave propagation, we apply GMS (Ground Motion Simulator), a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999).
The grid spacing in the shallow region is 200 m horizontally and 100 m vertically; the grid spacing in the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Given the lowest S-wave velocity and the grid spacing, our simulation is valid for periods longer than two seconds. However, because the characterized source model may not sufficiently excite short-period components, the reliable period range of the simulation should be interpreted with caution; we therefore restrict further analysis to periods longer than five seconds. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulations show a large variation of response spectra at a given site, implying that the ground motion is very sensitive to the scenario; studying this variation is necessary to understand the seismic hazard. In further work we will obtain hazard curves for Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
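The quoted two-second validity limit follows from the usual points-per-wavelength resolution rule for FDM grids; a minimal sketch, where the 8-points-per-wavelength figure and the minimum S-wave velocity are illustrative assumptions (the required sampling depends on the scheme):

```python
def min_resolved_period(vs_min, dx, points_per_wavelength=8.0):
    """Shortest reliably propagated period on an FDM grid: the minimum
    wavelength vs_min * T must span enough grid points, i.e.
    T >= points_per_wavelength * dx / vs_min."""
    return points_per_wavelength * dx / vs_min

# e.g. Vs_min = 400 m/s and dx = 100 m give a ~2 s lower period limit,
# consistent with the validity limit stated above.
T_min = min_resolved_period(400.0, 100.0)
```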
Source characterization and dynamic fault modeling of induced seismicity
NASA Astrophysics Data System (ADS)
Lui, S. K. Y.; Young, R. P.
2017-12-01
In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. The key to effectively mitigating the damaging effects of induced seismicity is to better understand the source physics of induced earthquakes, which remains elusive at present; an improved understanding is also pivotal to assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region; outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and resolves spontaneous long-term slip history on a fault segment at all stages of the seismic cycle. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to an event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
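The velocity-weakening/velocity-strengthening distinction above comes from the sign of a - b in the rate-and-state friction law; a minimal sketch of the steady-state form, with illustrative parameter values:

```python
import math

def rsf_steady_state(v, v0=1e-6, mu0=0.6, a=0.010, b=0.014, dc=1e-4):
    """Steady-state rate-and-state friction coefficient:
    mu_ss = mu0 + (a - b) * ln(v / v0).
    a - b < 0 gives velocity weakening (VW, can nucleate earthquakes);
    a - b > 0 gives velocity strengthening (VS, stable creep).
    Parameter values here are illustrative lab-scale numbers."""
    return mu0 + (a - b) * math.log(v / v0)
```

For the VW parameters above, friction drops as slip rate rises, which is the instability ingredient in the patch model described.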
Modeling potential tsunami sources for deposits near Unalaska Island, Aleutian Islands
NASA Astrophysics Data System (ADS)
La Selle, S.; Gelfenbaum, G. R.
2013-12-01
In regions with little seismic data and short historical records of earthquakes, we can use preserved tsunami deposits and tsunami modeling to infer whether, when, and where tsunamigenic earthquakes have occurred. The Aleutian-Alaska subduction zone offshore of Unalaska Island is one such region, where the historical and paleo-seismicity are poorly understood. This section of the subduction zone is not thought to have ruptured historically in a large earthquake, leading some to designate the region as a seismic gap. By modeling various historical and synthetic earthquake sources, we investigate whether tsunamis that left deposits near Unalaska Island were generated by earthquakes rupturing through the Unalaska Gap. Preliminary field investigations near the eastern end of Unalaska Island have identified paleotsunami deposits well above sea level, suggesting that multiple tsunamis in the last 5,000 years have flooded low-lying areas over 1 km inland. Other indicators of tsunami inundation, such as a breached cobble beach berm and driftwood logs stranded far inland, were tentatively attributed to the March 9, 1957 tsunami, which had reported runup of 13 to 22 meters on Umnak and Unimak Islands, to the west and east of Unalaska. To determine whether tsunami inundation could have reached the runup markers observed on Unalaska, we modeled the 1957 tsunami using GeoCLAW, a numerical model that simulates tsunami generation, propagation, and inundation. The published rupture orientation and slip distribution for the MW 8.6, 1957 earthquake (Johnson et al., 1994) were used as the tsunami source, delineating a 1200 km long rupture zone along the Aleutian trench from the Delarof Islands to Unimak Island. Model results indicate that runup and inundation from this particular source are too low to account for the runup markers observed in the field, because slip is concentrated in the western half of the rupture zone, far from Unalaska.
To ascertain whether any realistic earthquake-generated tsunami could account for the observed runup, we modeled tsunami inundation from synthetic MW 9.2 earthquakes rupturing along the trench between Atka and Unimak Islands; these simulations indicate that the deposit runup observed on Unalaska is possible from a source of this size and orientation. Further modeling efforts will examine the April 1, 1946 Aleutian tsunami, as well as other synthetic tsunamigenic earthquake sources of varying size and location, which may provide insight into the rupture history of the Aleutian-Alaska subduction zone, especially in combination with more data from paleotsunami deposits. Johnson, Jean M., Tanioka, Yuichiro, Ruff, Larry J., Satake, Kenji, Kanamori, Hiroo, and Sykes, Lynn R., "The 1957 great Aleutian earthquake," Pure and Applied Geophysics 142.1 (1994): 3-28.
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada.
Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
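Spectral source studies of the kind described in Chapters 5-7 typically convert a fitted corner frequency and seismic moment into a Brune-model stress drop; a sketch using standard formulas, where the shear-wave speed and the constant k are conventional assumed values rather than thesis-specific ones:

```python
import math

def brune_stress_drop(m0, fc, beta=3500.0, k=0.372):
    """Stress drop (Pa) from seismic moment m0 (N*m) and corner frequency
    fc (Hz) under the Brune model: source radius r = k * beta / fc, and
    dsigma = 7 * M0 / (16 * r^3)."""
    r = k * beta / fc
    return 7.0 * m0 / (16.0 * r**3)

def mw_from_m0(m0):
    """Moment magnitude from seismic moment (N*m), IASPEI convention."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# An Mw 4 event (M0 ~ 1.26e15 N*m) with fc = 2 Hz gives roughly 2 MPa.
dsig = brune_stress_drop(10**15.1, 2.0)
```

Departures of dsigma-vs-M0 trends from a constant value are what such studies use to test self-similar scaling.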
NASA Astrophysics Data System (ADS)
Garagash, I. A.; Lobkovsky, L. I.; Mazova, R. Kh.
2012-04-01
We study the generation of the strongest earthquakes (magnitudes near and above 9) and the catastrophic tsunamis they induce, based on a new approach to the generation process occurring in subduction zones. The need for such studies is underscored by the catastrophic underwater earthquake of 11 March 2011 near the northeast coastline of Japan and the ensuing tsunami, which caused enormous loss of life and colossal damage in Japan. The strength of that earthquake (magnitude M = 9), which generated a very strong tsunami with runup heights of up to 10 meters on the beach, was unexpected by all specialists. Our model of the interaction of the oceanic lithosphere with island-arc blocks in subduction zones, which takes into account incomplete stress release during the seismic process and the further accumulation of elastic energy, can explain the occurrence of the strongest mega-earthquakes, such as the catastrophic earthquake with its source in the Japan deep-sea trench in March 2011. Within our framework, broad possibilities for numerical simulation of the dynamical behavior of an underwater seismic source are provided both by a kinematic model of the seismic source and by a numerical program we developed for calculating tsunami wave generation by dynamic and kinematic seismic sources. The method also permits taking into account the contribution of residual tectonic stress in lithospheric plates, which increases earthquake energy and is usually not taken into account to date.
GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 earthquake and source modelling
NASA Astrophysics Data System (ADS)
Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.
2017-12-01
On 25 November 2016, a Ms 6.7 earthquake occurred in Aktao, a county of Xinjiang, China. It was the largest earthquake in the northeastern margin of the Pamir Plateau in the last 30 years. From GPS observations, we obtained the coseismic displacement field of this earthquake. The site with maximum displacement is located in the Muji Basin, 15 km south of the causative fault, with up to 0.12 m of subsidence and 0.10 m of horizontal coseismic displacement. Our results indicate that the earthquake was characterized by dextral strike-slip and normal-fault rupture. Based on the GPS results, we inverted for the rupture distribution of the earthquake. The source model consists of two approximately independent slip zones shallower than 20 km; the maximum slip is 0.6 m in one zone and 0.4 m in the other. The total seismic moment calculated from the geodetic inversion corresponds to Mw 6.6. The GPS-derived source model is basically consistent with that from seismic waveform inversion, and is consistent with the surface rupture distribution obtained from field investigation. According to our inversion calculation, the recurrence period of strong earthquakes similar to this event should be 30-60 years, and the seismic risk of the eastern segment of the Muji fault is worthy of attention. This research is financially supported by the National Natural Science Foundation of China (Grant No. 41374030).
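The geodetic moment magnitude quoted above follows from summing mu * area * slip over the inverted patches; a sketch of the computation, where the patch areas and rigidity below are hypothetical values chosen only for illustration (the 0.6 m and 0.4 m slips echo the two zones described):

```python
import math

def geodetic_mw(slips_m, areas_m2, mu=3.0e10):
    """Moment magnitude from inverted slip patches:
    M0 = sum(mu * A_i * D_i), Mw = (2/3) * (log10(M0) - 9.1).
    mu is the assumed crustal rigidity in Pa."""
    m0 = sum(mu * a * d for a, d in zip(areas_m2, slips_m))
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Two hypothetical slip zones with 0.6 m and 0.4 m average slip.
mw = geodetic_mw([0.6, 0.4], [20e3 * 10e3, 15e3 * 10e3])
```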
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess the seismic hazard in Taiwan and its temporal changes, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, implemented with the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model is adopted to describe the rupture recurrence intervals of specific fault sources which, together with the time elapsed since the last fault rupture, yield their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering these time-dependent factors, our combined model suggests increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress-triggering effect. The stress enhanced by the February 6, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantages of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Because our time-dependent model considers more detailed information than previously published models, it offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios, such as victim relocation and sheltering.
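The BPT (inverse-Gaussian) renewal model used above yields a conditional rupture probability that grows with elapsed time; a numerical sketch with illustrative recurrence parameters (not TEM values):

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse-Gaussian) recurrence density with
    mean recurrence mu and aperiodicity alpha."""
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

def conditional_prob(t_elapsed, dt, mu, alpha, n=20000):
    """P(rupture in next dt years | quiet for t_elapsed years), obtained
    by trapezoidal integration of the density."""
    t = np.linspace(1e-6, t_elapsed + dt, n)
    f = bpt_pdf(t, mu, alpha)
    F = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(t))])
    F_e = np.interp(t_elapsed, t, F)
    return (F[-1] - F_e) / (1.0 - F_e)

# Mean recurrence 200 yr, aperiodicity 0.5: a fault quiet for 180 yr is far
# more likely to rupture in the next 30 yr than one quiet for only 20 yr.
p_late = conditional_prob(180.0, 30.0, 200.0, 0.5)
p_early = conditional_prob(20.0, 30.0, 200.0, 0.5)
```

This elapsed-time dependence is what distinguishes the renewal model from a time-independent Poisson forecast.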
NASA Astrophysics Data System (ADS)
Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.
2017-12-01
Although Coulomb stress changes induced by earthquakes have been used to quantify stress transfer and to retrospectively explain stress triggering among earthquake sequences, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, the uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient, and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault, using the 2011 Mw 9.0 Tohoku-oki earthquake as a case study; this earthquake was chosen because of its wealth of finite-fault slip models. Based on those slip models, we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that nearby Coulomb stress changes can differ considerably between slip models, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nankai, Tonankai, and Tokai are insignificant, approximately only 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state (CRS) model decay much faster with elapsed time in stress-triggering zones than in stress shadows, meaning that in the intermediate to long term the uncertainties in Coulomb stress changes in stress-triggering zones would not drastically affect CRS-based assessments of the seismicity rate and earthquake probability.
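The Coulomb stress change underlying these maps is the standard combination of shear and normal stress resolved on the receiver fault; a one-line sketch, where the effective friction value is a commonly assumed number rather than one taken from the paper:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Delta CFS = d_tau + mu' * d_sigma_n, with d_tau the shear stress
    change in the slip direction, d_sigma_n > 0 meaning unclamping, and
    mu' the effective friction coefficient, which folds in Skempton's
    coefficient B (often mu' = mu * (1 - B))."""
    return d_tau + mu_eff * d_sigma_n

# A 0.1 bar shear increase combined with 0.2 bar of unclamping:
dcfs = coulomb_stress_change(0.1, 0.2)   # 0.18 bar, above the often-cited
                                         # ~0.1 bar triggering threshold
```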
NASA Astrophysics Data System (ADS)
Bahng, B.; Whitmore, P.; Macpherson, K. A.; Knight, W. R.
2016-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast the propagation and inundation of tsunamis generated by earthquakes or other mechanisms in the Pacific Ocean, the Atlantic Ocean, or the Gulf of Mexico. At the U.S. National Tsunami Warning Center (NTWC), the model has mainly been used for tsunami pre-computation due to earthquakes: results for hundreds of hypothetical events are computed before alerts, then accessed and calibrated with observations during tsunamis to immediately produce forecasts. The model has also been used for tsunami hindcasting due to submarine landslides and atmospheric pressure jumps, but in a case-specific and somewhat limited manner. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids and two-way communication between the domains of each parent-child pair as waves approach coastal waters. The shallow-water wave physics is readily applicable to all of the above tsunamis as well as to tides. Recently, the model has been expanded to include multiple forcing mechanisms in a systematic fashion and to enhance the model physics for non-earthquake events. ATFM is now able to handle multiple source mechanisms, either individually or jointly: earthquake, submarine landslide, meteo-tsunami, and tidal forcing. For earthquakes, the source can be a single unit source or multiple interacting source blocks, and a horizontal-slip contribution can be added to the sea-floor displacement. The model now includes submarine landslide physics, with the source modeled either as a rigid slump or as a viscous fluid; additional shallow-water physics have been implemented for viscous submarine landslides, and with rigid slumping any trajectory can be followed. For meteo-tsunamis, the forcing mechanism is likewise capable of following any trajectory shape, and wind stress physics has been implemented where required.
As an example of multiple sources, a near-field model of the tsunami produced by a combination of earthquake and submarine landslide forcing which happened in Papua New Guinea on July 17, 1998 is provided.
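The depth-averaged shallow-water core of such models can be illustrated with a minimal linear 1-D kernel; everything below (grid, depth, initial hump, closed boundaries) is illustrative and far simpler than ATFM's nested, nonlinear, 2-D multi-source formulation:

```python
import numpy as np

# Linear 1-D shallow-water equations on a staggered grid:
#   d(eta)/dt = -dq/dx   (continuity),   dq/dt = -g*h*d(eta)/dx  (momentum)
g, h = 9.81, 4000.0                 # gravity (m/s^2), uniform depth (m)
dx = 2000.0                         # grid spacing (m)
dt = 0.8 * dx / np.sqrt(g * h)      # CFL-limited time step
n = 200
xi = (np.arange(n) - n / 2) * dx
eta = np.exp(-(xi / 2.0e4) ** 2)    # initial sea-surface hump (m)
q = np.zeros(n + 1)                 # depth-integrated flux; closed ends
mass0 = eta.sum()                   # total "mass" (conserved quantity)

for _ in range(100):
    q[1:-1] -= g * h * dt / dx * (eta[1:] - eta[:-1])   # momentum update
    eta -= dt / dx * (q[1:] - q[:-1])                   # continuity update
```

With closed boundaries, the flux differences telescope, so total mass is conserved while the initial hump splits into two outgoing waves.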
Source Parameters and Rupture Directivities of Earthquakes Within the Mendocino Triple Junction
NASA Astrophysics Data System (ADS)
Allen, A. A.; Chen, X.
2017-12-01
The Mendocino Triple Junction (MTJ), a region of the Cascadia subduction zone, produces a sizable number of earthquakes each year. Direct observation of their rupture properties is difficult due to the small magnitudes of most of these earthquakes and the lack of offshore observations. The Cascadia Initiative (CI) project provides an opportunity to look at these earthquakes in detail. Here we look at the transform plate boundary fault located in the MTJ and measure source parameters of Mw≥4 earthquakes from both time-domain deconvolution and spectral analysis using the empirical Green's function (EGF) method. The second-moment method is used to infer rupture length, width, and rupture velocity from apparent source durations measured at different stations. Brune's source model is used to infer corner frequency and spectral complexity from the stacked spectral ratio. EGFs are selected based on their location relative to the mainshock, as well as on their magnitude difference compared to the mainshock. For the transform fault, we first look at the largest earthquake recorded during the Year 4 CI array, an Mw 5.72 event that occurred in January 2015, and select two EGFs, an Mw 1.75 and an Mw 1.73 event located within 5 km of the mainshock. This earthquake is characterized by at least two sub-events, with a total duration of about 0.3 seconds and a rupture length of about 2.78 km. The earthquake ruptured westward along the transform fault, and both source durations and corner frequencies show strong azimuthal variations, with anti-correlation between duration and corner frequency. The stacked spectral ratio from multiple stations with the Mw 1.73 EGF event deviates from a pure Brune source model following the definition of Uchide and Imanishi [2016], likely due to near-field recordings with rupture complexity.
We will further analyze this earthquake using more EGF events to test the reliability and stability of the results, and will also analyze three other Mw≥4 earthquakes within the array.
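The spectral analysis above rests on the ratio of two omega-squared (Brune) source spectra, in which path and site terms common to mainshock and EGF cancel. A minimal sketch of that ratio; the moment and corner-frequency values below are illustrative, not those of the Mw 5.72 event:

```python
import numpy as np

def brune_spectrum(f, m0, fc):
    # omega-squared source model: flat at M0 below fc, f^-2 falloff above
    return m0 / (1.0 + (f / fc) ** 2)

def spectral_ratio(f, m0_main, fc_main, m0_egf, fc_egf):
    # mainshock/EGF spectral ratio; common path and site effects cancel
    return brune_spectrum(f, m0_main, fc_main) / brune_spectrum(f, m0_egf, fc_egf)

f = np.logspace(-2, 3, 300)                        # frequency axis in Hz
ratio = spectral_ratio(f, 1e18, 0.5, 1e12, 20.0)   # illustrative parameter values
# low-frequency plateau -> moment ratio M0_1/M0_2;
# high-frequency plateau -> (M0_1/M0_2) * (fc_1/fc_2)**2
```

Azimuthal variation of the fitted corner frequency, as described in the abstract, is then diagnostic of rupture directivity.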
An Earthquake Rupture Forecast model for central Italy submitted to CSEP project
NASA Astrophysics Data System (ADS)
Pace, B.; Peruzza, L.
2009-04-01
We defined a seismogenic source model for central Italy and computed the corresponding forecast scenario, in order to submit the results to the CSEP (Collaboratory for the Study of Earthquake Predictability, www.cseptesting.org) project. The goal of the CSEP project is to develop a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three different layers of sources: the first collects the individual faults liable to generate major earthquakes (M >5.5); the second is given by instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ~<5.0). The third layer utilizes all the instrumental earthquakes and the historical events not correlated to known structures (4.5
Adjoint-tomography for a Local Surface Structure: Methodology and a Blind Test
NASA Astrophysics Data System (ADS)
Kubina, Filip; Michlik, Filip; Moczo, Peter; Kristek, Jozef; Stripajova, Svetlana
2017-04-01
We have developed a multiscale full-waveform adjoint-tomography method for local surface sedimentary structures with complicated interference wavefields. Local surface sedimentary basins and valleys are often responsible for anomalous earthquake ground motions and the corresponding damage in earthquakes. In many cases only a relatively small number of records of a few local earthquakes is available for a site of interest. Consequently, prediction of earthquake ground motion at the site has to include numerical modeling for a realistic model of the local structure. Though limited, the information about the local structure encoded in the records is important and irreplaceable. It is therefore reasonable to have a method capable of using this limited information to improve a model of the local structure. A local surface structure and its interference wavefield require a specific multiscale approach. In order to verify our inversion method, we performed a blind test. We obtained synthetic seismograms at 8 receivers for 2 local sources, a complete description of the sources, the positions of the receivers, and the material parameters of the bedrock. We considered the simplest possible starting model: a homogeneous halfspace made of the bedrock material. Using our inversion method we obtained an inverted model. Given the starting model, synthetic seismograms simulated for the inverted model are surprisingly close to those simulated for the true structure in the target frequency range up to 4.5 Hz. We quantify the level of agreement between the true and inverted seismograms using the L2 and time-frequency misfits and, more importantly for earthquake-engineering applications, also using goodness-of-fit criteria based on the earthquake-engineering characteristics of earthquake ground motion. We also verified the inverted model for other source-receiver configurations not used in the inversion.
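The L2 waveform misfit mentioned above can be written as a normalized residual energy; the sketch below is a plain implementation of that basic measure, not the authors' code:

```python
import numpy as np

def l2_misfit(obs, syn):
    # normalized L2 misfit: 0 for identical traces, 1 when syn carries no signal
    obs, syn = np.asarray(obs, float), np.asarray(syn, float)
    return float(np.sqrt(np.sum((obs - syn) ** 2) / np.sum(obs ** 2)))

obs = np.sin(np.linspace(0.0, 10.0, 500))          # toy "true" seismogram
syn = np.sin(np.linspace(0.0, 10.0, 500) + 0.05)   # slightly phase-shifted synthetic
misfit = l2_misfit(obs, syn)                       # small but nonzero
```

Time-frequency misfits and engineering goodness-of-fit criteria refine this by weighting the residual over frequency bands and intensity measures.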
Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast
NASA Astrophysics Data System (ADS)
Toda, Shinji; Enescu, Bogdan
2011-03-01
Numerous studies have retrospectively found that seismicity rates jump (drop) in response to coseismic Coulomb stress increases (decreases). The Collaboratory for the Study of Earthquake Predictability (CSEP) now provides an opportunity for prospective testing of the Coulomb hypothesis. Here we adapt our stress transfer model, which incorporates the rate- and state-dependent friction law, to the CSEP Japan seismicity forecast. We demonstrate how to compute the forecast rates of large shocks in 2009 using the large earthquakes of the past 120 years. The time-dependent impact of the coseismic stress perturbations explains qualitatively well the occurrence of recent moderate-size shocks. This ability is partly similar to that of statistical earthquake clustering models. However, our model differs from them as follows: the off-fault aftershock zones can be simulated using finite fault sources; the regional areal patterns of triggered seismicity are modified by the dominant mechanisms of the potential sources; and the imparted stresses due to large earthquakes produce stress shadows that lead to a reduction in the forecast number of earthquakes. Although the model relies on several unknown parameters, it is the first physics-based model submitted to the CSEP Japan test center and has the potential to be tuned for short-term earthquake forecasts.
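The rate- and state-dependent response to a Coulomb stress step can be sketched with Dieterich's (1994) expression for seismicity rate, which also produces the stress shadows mentioned above; the parameter values below are illustrative:

```python
import numpy as np

def dieterich_rate(t, r_bg, dcff, a_sigma, t_a):
    """Seismicity rate R(t) after a Coulomb stress step dcff (Dieterich, 1994).

    r_bg: background rate; a_sigma: constitutive parameter A*sigma;
    t_a: aftershock relaxation time. The rate jumps to r_bg*exp(dcff/a_sigma)
    at t=0 and decays back to r_bg over ~t_a.
    """
    gamma = (np.exp(-dcff / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return r_bg / gamma

t = np.linspace(0.0, 50.0, 200)   # years after the stress step
rate = dieterich_rate(t, r_bg=1.0, dcff=0.1, a_sigma=0.04, t_a=10.0)
```

A negative dcff gives a rate below background that slowly recovers, which is the stress-shadow effect that reduces the forecast number of earthquakes.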
NASA Astrophysics Data System (ADS)
Mourhatch, Ramses
This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an artificial correction-free approach to broadband ground motion simulation. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis. As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California. Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from kinematic source models using the spectral element method. Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions, are conducted. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse.
The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
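Combining limit-state fragilities with 30-year scenario occurrence probabilities reduces, under an independence assumption across scenarios, to a simple product; a hypothetical sketch (the probabilities below are invented, not thesis results):

```python
def limit_state_probability(scenario_probs, exceed_given_event):
    """30-yr probability that a limit state is exceeded, assuming the
    scenario earthquakes occur independently.

    scenario_probs: 30-yr occurrence probability of each scenario
    exceed_given_event: P(limit state exceeded | scenario occurs)
    """
    p_none = 1.0
    for p, q in zip(scenario_probs, exceed_given_event):
        p_none *= 1.0 - p * q
    return 1.0 - p_none

# two hypothetical scenarios
p30 = limit_state_probability([0.2, 0.1], [0.5, 0.8])
```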
NASA Astrophysics Data System (ADS)
Kubota, T.; Hino, R.; Inazu, D.; Saito, T.; Iinuma, T.; Suzuki, S.; Ito, Y.; Ohta, Y.; Suzuki, K.
2012-12-01
We estimated source models of the small-amplitude tsunamis associated with M 7-class earthquakes in the rupture area of the 2011 Tohoku-Oki earthquake using near-field tsunami records from ocean-bottom pressure gauges (OBPs). The largest (Mw 7.3) foreshock of the Tohoku-Oki earthquake occurred on 9 March, two days before the mainshock. The tsunami associated with the foreshock was clearly recorded by seven OBPs, as was the coseismic vertical deformation of the seafloor. Assuming a planar fault along the plate boundary as the source, the OBP records were inverted for slip distribution. As a result, most of the coseismic slip was found to be concentrated in an area about 40 x 40 km in size located to the northwest of the epicenter, suggesting downdip rupture propagation. The seismic moment from our tsunami waveform inversion is 1.4 x 10^20 Nm, equivalent to Mw 7.3. On 10 July 2011, an earthquake of Mw 7.0 occurred near the hypocenter of the mainshock. Its relatively deep focus and strike-slip focal mechanism indicate that it was an intraslab earthquake, and it was also associated with a small-amplitude tsunami. Using the OBP records, we estimated a model of the initial sea-surface height distribution. Our tsunami inversion showed that a pair of uplift and subsidence lobes is required to explain the observed tsunami waveforms. The spatial pattern of the seafloor deformation is consistent with the oblique strike-slip solution obtained from seismic data analyses. The location and strike of the hinge line separating the uplift and subsidence zones correspond well to the linear distribution of aftershocks determined using local OBS data (Obana et al., 2012).
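The quoted Mw 7.3 follows from the standard moment-magnitude relation, and the slip inversion is, at heart, a linear least-squares problem. A schematic sketch; the Green's function matrix here is a toy stand-in, not actual tsunami Green's functions:

```python
import numpy as np

def moment_magnitude(m0_nm):
    # Hanks & Kanamori (1979): Mw = (log10 M0 [N m] - 9.1) / 1.5
    return (np.log10(m0_nm) - 9.1) / 1.5

def invert_slip(G, d):
    # OBP waveforms d ~ G @ slip; least-squares slip on fault patches
    slip, *_ = np.linalg.lstsq(G, d, rcond=None)
    return slip

mw = moment_magnitude(1.4e20)   # the foreshock's inverted moment gives ~Mw 7.3-7.4
```

In practice the inversion is regularized and slip is constrained non-negative, which the plain `lstsq` call above omits.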
Low-frequency source parameters of twelve large earthquakes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Harabaglia, Paolo
1993-01-01
A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with short-term precursors. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of interest. Two other intermediate-depth events have slower rupture processes, characterized by continuous energy release lasting about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake, first recognized as a precursive event by Jordan. We model it with a smooth rupture process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because these rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing, and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to robustly constrain the first-order rupture complexity of large earthquakes. Additionally, the relatively small number of parameters in the inverse problem permits improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that it can capture the major features of large-earthquake rupture processes and provide information for more detailed rupture history analysis.
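A unilateral Haskell sub-event leaves a directivity signature: its apparent duration shrinks for stations ahead of the rupture and stretches for stations behind it. A sketch of that relation, with the rise time neglected and velocity values assumed for illustration:

```python
import numpy as np

def apparent_duration(length_km, v_rupt, v_phase, theta):
    """Apparent duration of a unilateral Haskell rupture observed at angle
    theta from the rupture direction (rise time neglected).

    length_km: rupture length; v_rupt: rupture velocity (km/s);
    v_phase: phase velocity near the source (km/s, assumed value).
    """
    return length_km / v_rupt - (length_km / v_phase) * np.cos(theta)

# stations ahead of the rupture see a compressed pulse, stations behind a stretched one
ahead = apparent_duration(10.0, 2.5, 3.5, 0.0)
behind = apparent_duration(10.0, 2.5, 3.5, np.pi)
```

Inverting such azimuth-dependent durations for each sub-event is what lets the MHS parameterization recover rupture directivity.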
NASA Astrophysics Data System (ADS)
Amertha Sanjiwani, I. D. M.; En, C. K.; Anjasmara, I. M.
2017-12-01
A seismic gap on the plate interface along the Sunda subduction zone has been proposed among the 2000, 2004, 2005, and 2007 great earthquakes. This seismic gap therefore plays an important role in the earthquake risk on the Sunda trench. The Mw 7.6 Padang earthquake, an intraslab event, occurred on September 30, 2009, about 250 km east of the Sunda trench, close to the seismic gap on the interface. To understand the interaction between the seismic gap and the Padang earthquake, data from twelve continuous GPS stations of SuGAr are used in this study to estimate the source model of this event. Daily GPS coordinates for one month before and after the earthquake were calculated with the GAMIT software, and the coseismic displacements were evaluated from the analysis of coordinate time series in the Padang region. This geodetic network provides rather good spatial coverage for examining the seismic source along the Padang region in detail. The general pattern of coseismic horizontal displacement is motion toward the epicenter and the trench, and the coseismic vertical displacements show uplift. The largest coseismic displacements, derived from the MSAI station, are 35.0 mm for the horizontal component, toward S32.1°W, and 21.7 mm for the vertical component. The second largest, derived from the LNNG station, are 26.6 mm for the horizontal component, toward N68.6°W, and 3.4 mm for the vertical component. Next, we will use a uniform stress drop inversion to invert the coseismic displacement field for the source model, and the relationship between the seismic gap on the interface and the intraslab Padang earthquake will then be discussed. Keywords: seismic gap, Padang earthquake, coseismic displacement.
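Coseismic offsets from daily coordinate time series amount to differencing mean positions before and after the event day; a minimal sketch (the windows and numbers below are illustrative, not the SuGAr data):

```python
import numpy as np

def coseismic_offset(before, after):
    # offset = mean daily position after the event minus mean before
    return float(np.mean(after) - np.mean(before))

def horizontal_vector(d_east_mm, d_north_mm):
    # magnitude (mm) and azimuth (deg clockwise from north) of the horizontal offset
    mag = float(np.hypot(d_east_mm, d_north_mm))
    az = float(np.degrees(np.arctan2(d_east_mm, d_north_mm)) % 360.0)
    return mag, az

# toy one-month windows of a single coordinate component (mm)
before = np.array([0.1, -0.2, 0.0, 0.1])
after = np.array([-25.1, -24.8, -25.0, -25.1])
step = coseismic_offset(before, after)
```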
PAGER-CAT: A composite earthquake catalog for calibrating global fatality models
Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.
2009-01-01
We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy-to-use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of ShakeMaps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge that the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions.
As with all catalogs, the values of some parameters listed in PAGER-CAT are highly uncertain, particularly the casualty numbers, which must be regarded as estimates rather than firm numbers for many earthquakes. Consequently, we encourage contributions from the seismology and earthquake engineering communities to further improve this resource via the Wikipedia page and personal communications, for the benefit of the whole community.
Characteristics of broadband slow earthquakes explained by a Brownian model
NASA Astrophysics Data System (ADS)
Ide, S.; Takeo, A.
2017-12-01
The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered broadband phenomena including tectonic tremors, low-frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of broadband slow earthquakes may not yet be widely accepted, most recent observations are consistent with it. Here we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For shorter durations, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment-rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem.
Recent observations suggest that tremors and LFEs are spatially characteristic, rather than random (Rubin and Armbruster, 2013; Bostock et al., 2015). Since even a spatially characteristic source must be activated randomly in time, moment release from these sources is compatible with the fluctuation in the BSE model. Therefore, the BSE model contains, as a special case, the model of Gomberg et al. (2016), which suggests that clusters of LFEs produce VLF signals.
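The random-walk core of the BSE model can be sketched in a few lines. Here the cap s_max stands in for the spatial limit of the tremor/SSE zone (the origin of the model's time constant), and taking the moment rate proportional to the square of the source size is an illustrative assumption, not the model's exact formulation:

```python
import numpy as np

def simulate_bse(n_steps, s_max, dt=1.0, sigma=1.0, seed=0):
    """Reflected, capped random walk for the characteristic source size s(t).

    Gaussian increments at every step (the fluctuation may be non-Gaussian
    as long as its variance is finite); reflection at 0 keeps s positive,
    and the cap s_max mimics the finite width of the tremor zone.
    """
    rng = np.random.default_rng(seed)
    s = np.empty(n_steps)
    x = 0.0
    for i in range(n_steps):
        x = abs(x + rng.normal(0.0, sigma * np.sqrt(dt)))
        x = min(x, s_max)
        s[i] = x
    return s

s = simulate_bse(10_000, s_max=10.0)
moment_rate = s ** 2   # assumed size-to-moment-rate scaling, for illustration
```

With the cap active, the long-run mean moment rate is bounded, which is the regime that yields the linear moment-duration scaling.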
NASA Astrophysics Data System (ADS)
Major, J. R.; Liu, Z.; Harris, R. A.; Fisher, T. L.
2011-12-01
Using Dutch records of geophysical events in Indonesia over the past 400 years, together with tsunami modeling, we identify tsunami sources that have caused severe devastation in the past and are likely to recur in the near future. The earthquake history of western Indonesia has received much attention since the 2004 Sumatra earthquake and subsequent events. However, strain rates along a variety of plate boundary segments are just as high in eastern Indonesia, where the earthquake history has not been investigated. Due to the rapid population growth in this region, it is essential and urgent to evaluate its earthquake and tsunami hazards. Arthur Wichmann's 'Earthquakes of the Indian Archipelago' shows that there were 30 significant earthquakes and 29 tsunamis between 1629 and 1877. One of the largest and best documented is the great earthquake and tsunami affecting the Banda Islands on 1 August 1629. It caused severe damage from a 15 m tsunami that arrived at the Banda Islands about half an hour after the earthquake. The earthquake was also recorded 230 km away in Ambon, but no tsunami is mentioned there. The event was followed by at least 9 years of aftershocks. Together, these observations indicate that the earthquake was most likely a mega-thrust event. We use numerical simulation of the tsunami to locate the potential sources of the 1629 mega-thrust event and evaluate the tsunami hazard in eastern Indonesia. The simulation was first used to establish the tsunami run-up amplification factor for this region through simulations of the 1992 Flores Island (Hidayat et al., 1995) and 2006 Java (Kato et al., 2007) earthquake events. The results yield tsunami run-up amplification factors of 1.5 and 3, respectively. However, the Java earthquake is a unique case of slow rupture that was hardly felt. The fault parameters of recent earthquakes in the Banda region are used for the models.
The modeling narrows the possible sources of mega-thrust events the size of the 1629 event to the Seram and Timor Troughs. For the Seram Trough source, a Mw 8.8 event produces run-up heights in the Banda Islands of 15.5 m with an arrival time of 17 minutes. For a Timor Trough earthquake near the Tanimbar Islands, a Mw 9.2 event is needed to produce a 15 m run-up height, with an arrival time of 25 minutes. The main problem with the Timor Trough source is that it predicts run-up heights of 10 m in Ambon, which would likely have been recorded. Therefore, we conclude that the most likely source of the 1629 mega-thrust earthquake is the Seram Trough. No large earthquakes have been reported along the Seram Trough for over 200 years, although high rates of strain are measured across it. This study suggests that earthquakes from this fault zone could be extremely devastating to eastern Indonesia. We strive to raise awareness among local governments not to underestimate the natural hazards of this region, based on lessons learned from the 2004 Sumatra and 2011 Tohoku tsunamigenic mega-thrust earthquakes.
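Arrival times of this order follow from the long-wave approximation, in which tsunami speed depends only on water depth. A back-of-the-envelope sketch; the depth and distance values are illustrative, not the Banda Sea bathymetry used in the simulations:

```python
import math

def tsunami_speed_ms(depth_m, g=9.81):
    # shallow-water (long-wave) phase speed: c = sqrt(g * h)
    return math.sqrt(g * depth_m)

def travel_time_min(distance_km, depth_m):
    # straight-path travel time over uniform depth, in minutes
    return distance_km * 1000.0 / tsunami_speed_ms(depth_m) / 60.0

t = travel_time_min(200.0, 4000.0)   # roughly a quarter hour over a 4-km-deep basin
```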
Building a risk-targeted regional seismic hazard model for South-East Asia
NASA Astrophysics Data System (ADS)
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way, with a focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and the impact on the insurance business. We present the source model and ground motion model components of a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines, and the Indochina countries. The source model builds upon refined modelling approaches to characterize 1) seismic activity on crustal faults from geologic and geodetic data, 2) activity along the interfaces of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g., the Sumatra fault zone and Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities, due to existing uncertainties, in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution of each source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return-period losses, average annual loss) and reviewing their relative impact on various lines of business.
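The risk metrics named above follow directly from an event-loss table; the average annual loss, for instance, is the rate-weighted sum of event losses. A minimal sketch (the rates and losses below are invented):

```python
def average_annual_loss(annual_rates, losses):
    # AAL = sum over events of (annual occurrence rate) * (loss given the event)
    return sum(r * l for r, l in zip(annual_rates, losses))

# two hypothetical events: a frequent moderate loss and a rare large one
aal = average_annual_loss([0.01, 0.001], [1.0e6, 1.0e8])
```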
Modeling fast and slow earthquakes at various scales
IDE, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138
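The scaling problem mentioned above contrasts the linear moment-duration relation proposed for slow earthquakes with the cubic relation for ordinary earthquakes (Ide et al., 2007). A sketch of the two regimes; the prefactors are assumed round numbers within the observed ranges, not fitted constants:

```python
def slow_duration_s(m0_nm, c_slow=1.0e12):
    # slow earthquakes: M0 ~ C_slow * T (C_slow of order 1e12-1e13 N m/s, assumed)
    return m0_nm / c_slow

def fast_duration_s(m0_nm, c_fast=1.0e15):
    # ordinary (fast) earthquakes: M0 ~ C_fast * T**3 (prefactor assumed)
    return (m0_nm / c_fast) ** (1.0 / 3.0)

# at M0 = 1e18 N m the two populations differ by orders of magnitude in duration
t_slow = slow_duration_s(1.0e18)
t_fast = fast_duration_s(1.0e18)
```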
NASA Astrophysics Data System (ADS)
Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said
2010-05-01
The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the east-west-trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources, and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We modelled wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.
NASA Astrophysics Data System (ADS)
Grevemeyer, I.; Arroyo, I. G.
2015-12-01
Earthquake source locations are generally constrained routinely using a global 1-D Earth model. However, the source location may be associated with large uncertainties. This is definitely the case for earthquakes occurring at active continental margins, where thin oceanic crust subducts below thick continental crust and hence large lateral changes in crustal thickness occur as a function of distance to the deep-sea trench. Here we conducted a case study of the 2002 Mw 6.4 Osa thrust earthquake in Costa Rica, which was followed by an aftershock sequence. Initial relocations indicated that the main shock occurred considerably trenchward of most large earthquakes along the Middle America Trench off central Costa Rica. The earthquake sequence occurred while a temporary network of ocean-bottom hydrophones and land stations was deployed 80 km to the northwest. By adding readings from permanent Costa Rican stations, we obtained uncommonly good P-wave coverage of a large subduction zone earthquake. We relocated this catalog using a nonlinear probabilistic approach with a 1-D and two 3-D P-wave velocity models. The 3-D models were derived either from 3-D tomography based on onshore stations or from an a priori model based on seismic refraction data. All epicentres occurred close to the trench axis, but depth estimates vary by several tens of kilometres. Based on the epicentres and constraints from seismic reflection data, the main shock occurred 25 km from the trench and probably along the plate interface at 5-10 km depth. The source location that agreed best with the geology was based on the 3-D velocity model derived from a priori data. Aftershocks propagated downdip to the area of a 1999 Mw 6.9 sequence and partially overlapped it. The results indicate that underthrusting of the young and buoyant Cocos Ridge has created conditions for interplate seismogenesis shallower and closer to the trench axis than elsewhere along the central Costa Rica margin.
NASA Astrophysics Data System (ADS)
Gok, R.; Kalafat, D.; Hutchings, L.
2003-12-01
We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey, earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D; Larsen, 1998), and records available for use as empirical Green's functions. Ultimately, our goal is to model the 1999 earthquakes from DC to 25 Hz and to study fault rupture mechanics and kinematic rupture models. We performed a simultaneous inversion for hypocenter locations and the three-dimensional P- and S-wave velocity structure of the Marmara region using SIMULPS14, with 2,500 events having more than eight P readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara than in the western Marmara region due to denser ray coverage. We used the obtained velocity structure as input to the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long-period waveforms (f < 0.5 Hz). We also obtained Mo, fc, and individual station kappa values for over 500 events by performing a simultaneous inversion fitting these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small to moderate earthquakes (M < 4.0) to obtain empirical Green's functions (EGFs) for the higher-frequency range of ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source (energy-moment) scaling relation of these aftershocks. We have generated several scenarios, constrained by a priori knowledge of the Izmit and Duzce rupture parameters, to validate our prediction capability.
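The simultaneous fit for Mo, fc, and station kappa can be illustrated with a small grid search over the omega-squared model attenuated by an exp(-pi f kappa) site term; a simplified, noiseless sketch, not the inversion code used in the study:

```python
import numpy as np

def brune_kappa(f, m0, fc, kappa):
    # omega-squared source spectrum attenuated by site kappa
    return m0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * kappa)

def fit_brune(f, spec, fc_grid, kappa_grid):
    # grid search over (fc, kappa); m0 is the closed-form least-squares amplitude
    best = None
    for fc in fc_grid:
        for kappa in kappa_grid:
            shape = brune_kappa(f, 1.0, fc, kappa)
            m0 = float(spec @ shape / (shape @ shape))
            resid = float(np.sum((spec - m0 * shape) ** 2))
            if best is None or resid < best[0]:
                best = (resid, m0, fc, kappa)
    return best[1], best[2], best[3]

f = np.logspace(-1, 1.3, 60)               # ~0.1-20 Hz
spec = brune_kappa(f, 2.0e15, 1.0, 0.03)   # synthetic target spectrum
m0, fc, kappa = fit_brune(f, spec, [0.5, 1.0, 2.0], [0.0, 0.03, 0.06])
```

In the noiseless case the search recovers the grid point used to build the target spectrum; real data require finer grids and noise-aware weighting.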
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Nakamura, T.; Miyoshi, T.
2015-12-01
The May 30, 2015 Bonin Islands, Japan, earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as in our previous work (Nakamura et al., 2010). We use 60 broadband seismograms from IRIS GSN seismic stations at epicentral distances between 30 and 90 degrees. The broadband original data are integrated to ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume a square fault with a side length of 50 km. We obtain source rupture models for both nodal planes, with high dip angle (74 degrees) and low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source rupture models using the spectral-element method (Komatitsch & Tromp, 2001) on the new Earth Simulator system at JAMSTEC. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation, accurate at periods of 3.8 seconds and longer, requires about 5 hours of CPU time. Comparison of the synthetic waveforms with the observations at teleseismic stations shows that the arrival time of the pP wave calculated for a depth of 679 km matches the observations well, which demonstrates that the earthquake really occurred below the 660 km discontinuity. In our present forward simulations, the source rupture model on the low-dip-angle fault plane is likely to better explain the observations.
Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr
Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared to more traditional ones.
Heterogeneity of direct aftershock productivity of the main shock rupture
NASA Astrophysics Data System (ADS)
Guo, Yicun; Zhuang, Jiancang; Hirata, Naoshi; Zhou, Shiyong
2017-07-01
The epidemic-type aftershock sequence (ETAS) model is widely used to describe and analyze the clustering behavior of seismicity. Instead of regarding large earthquakes as point sources, the finite-source ETAS model treats them as ruptures that extend in space. Each earthquake rupture consists of many patches, and each patch triggers its own aftershocks isotropically. We design an iterative algorithm to invert the unobserved fault geometry based on the stochastic reconstruction method. This model is applied to analyze the Japan Meteorological Agency (JMA) catalog during 1964-2014. We take six great earthquakes with magnitudes > 7.5 after 1980 as finite sources and reconstruct the aftershock productivity patterns on each rupture surface. Comparing with results from the point-source ETAS model, we find the following: (1) the finite-source model improves the data fitting; (2) direct aftershock productivity is heterogeneous on the rupture plane; (3) the triggering abilities of M5.4+ events are enhanced; (4) the background rate is higher in the off-fault region and lower in the on-fault region for the Tohoku earthquake, while high probabilities of direct aftershocks are distributed all over the source region in the modified model; (5) the triggering abilities of five main shocks become 2-6 times higher after taking the rupture geometries into consideration; and (6) the trends of the cumulative background rate are similar in both models, indicating the same levels of detection ability for seismicity anomalies. Moreover, correlations between aftershock productivity and slip distributions imply that aftershocks within rupture faults are adjustments to coseismic stress changes due to slip heterogeneity.
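For reference, the temporal conditional intensity of the standard point-source ETAS model that the finite-source variant generalizes can be sketched as follows. This is a generic, simplified illustration with hypothetical parameter values, not the authors' implementation; the finite-source model replaces each point trigger with patches distributed over the rupture:

```python
import numpy as np

def etas_intensity(t, history, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity
        lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p
    where history is a list of (t_i, m_i) pairs, mu is the background
    rate, and the sum is the Omori-law contribution of each past event,
    scaled exponentially by its magnitude above the reference m0."""
    rate = mu
    for ti, mi in history:
        if ti < t:  # only past events contribute
            rate += K * np.exp(alpha * (mi - m0)) / (t - ti + c) ** p
    return rate
```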
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.
2016-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe, with 442 models under evaluation. The California testing center, started by SCEC on September 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year, and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods.
We describe the open-source CSEP software that is available to researchers as they develop their forecast models. We also discuss how CSEP procedures are being adapted to intensity and ground motion prediction experiments as well as hazard model testing.
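A core ingredient of CSEP-style evaluation is the likelihood of observed event counts under a gridded rate forecast. A minimal sketch, assuming independent Poisson bins as in the classical likelihood (L) test; this is a generic illustration, not the CSEP software itself:

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed bin counts under a gridded
    rate forecast, assuming independent Poisson bins:
        log L = sum_k [ -lam_k + n_k * log(lam_k) - log(n_k!) ]."""
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll
```

In a prospective test, this joint log-likelihood of the real catalog is compared against its distribution over catalogs simulated from the forecast itself.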
NASA Astrophysics Data System (ADS)
Gallovič, F.
2017-09-01
Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model, introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay over the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The positions of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions and does not require any stochastic Green's functions. The source model has been previously validated against the data observed for the very shallow unilateral 2014 Mw 6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability.
I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing insight into possible refinements of GMPEs' functional forms.
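The fractal number-size distribution of subsources mentioned above (a cumulative distribution N(>r) proportional to r**-2 is what yields the omega-squared composite spectrum) can be sketched by inverse-transform sampling of a truncated power law. The bounds, count, and seed below are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fractal_subsource_radii(n, r_min, r_max, d=2.0):
    """Draw n subsource radii with fractal number-size statistics
    N(>r) ~ r**-d on [r_min, r_max] (d = 2 gives the omega-squared
    composite source), via inverse-transform sampling: solve
    u = (r_min**-d - r**-d) / (r_min**-d - r_max**-d) for r."""
    u = rng.random(n)
    a, b = r_min ** -d, r_max ** -d
    return (a - u * (a - b)) ** (-1.0 / d)

# many small subsources, few large ones
radii = fractal_subsource_radii(1000, r_min=0.5, r_max=10.0)
```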
A New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.
2017-12-01
We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically based strain rate model. Small- and medium-sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
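The tapered Gutenberg-Richter (TGR) distribution used for each source zone can be sketched as follows: the Gutenberg-Richter power law expressed in seismic moment, tapered by an exponential roll-off at a corner moment. The a-value, b-value, and corner magnitude below are illustrative, not values from the China model:

```python
import math

def tgr_rate(m, a_rate, b, m_corner, m_min=5.0):
    """Cumulative rate of events with magnitude >= m under a tapered
    Gutenberg-Richter distribution. a_rate is the rate at m_min; the
    power-law exponent in moment is beta = (2/3) * b, and the taper is
    exp((M_min - M) / M_corner)."""
    def moment(mag):  # scalar seismic moment in N*m
        return 10.0 ** (1.5 * mag + 9.05)
    beta = 2.0 / 3.0 * b
    m_t, m_x, m_c = moment(m_min), moment(m), moment(m_corner)
    return a_rate * (m_t / m_x) ** beta * math.exp((m_t - m_x) / m_c)
```

Because the corner moment multiplies an exponential taper rather than imposing a hard maximum magnitude, it can be constrained independently by the geodetic moment rate, as described above.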
Spatial and Temporal Stress Drop Variations of the 2011 Tohoku Earthquake Sequence
NASA Astrophysics Data System (ADS)
Miyake, H.
2013-12-01
The 2011 Tohoku earthquake sequence consists of foreshocks, the mainshock, aftershocks, and repeating earthquakes. Quantifying spatial and temporal stress drop variations is important for understanding M9-class megathrust earthquakes. The variability and the spatial and temporal pattern of stress drop are basic information for rupture dynamics as well as useful for source modeling. As pointed out in the ground motion prediction equations of Campbell and Bozorgnia [2008, Earthquake Spectra], mainshock-aftershock pairs often show a significant decrease in stress drop. We here focus on strong motion records before and after the Tohoku earthquake and analyze source spectral ratios considering azimuth and distance dependency [Miyake et al., 2001, GRL]. Due to the limitation of station locations on land, spatial and temporal stress drop variations are estimated by adjusting shifts from the omega-squared source spectral model. The adjustment is based on stochastic Green's function simulations of source spectra considering azimuth and distance dependency. Since we assume the same Green's functions for the event pairs at each station, both the propagation path and site amplification effects cancel out. While precise studies of spatial and temporal stress drop variations have been performed [e.g., Allmann and Shearer, 2007, JGR], this study targets the relations between stress drop and the progression of slow slip prior to the Tohoku earthquake reported by Kato et al. [2012, Science], as well as plate structures. Acknowledgement: This study is partly supported by ERI Joint Research (2013-B-05). We used the JMA unified earthquake catalogue and K-NET, KiK-net, and F-net data provided by NIED.
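The core idea of the spectral ratio method is that path and site terms cancel for co-located event pairs recorded at a common station, leaving the ratio of two omega-squared source spectra. A minimal sketch with illustrative moments and corner frequencies (the azimuth and distance corrections of the actual study are omitted):

```python
def omega_sq_source_spectrum(f, m0, fc):
    """Omega-squared moment-rate amplitude spectrum: flat level m0
    below the corner frequency fc, f**-2 decay above it."""
    return m0 / (1.0 + (f / fc) ** 2)

def source_spectral_ratio(f, m0_large, fc_large, m0_small, fc_small):
    """Ratio of two omega-squared spectra. When both events share the
    same Green's function at a station, the observed record ratio
    reduces to this pure source-spectral ratio."""
    return (omega_sq_source_spectrum(f, m0_large, fc_large)
            / omega_sq_source_spectrum(f, m0_small, fc_small))
```

The low-frequency plateau of the ratio gives the moment ratio, and the two bends locate the corner frequencies, from which stress drops follow.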
Source model for the Mw 6.7, 23 October 2002, Nenana Mountain Earthquake (Alaska) from InSAR
Wright, Tim J.; Lu, Z.; Wicks, Charles
2003-01-01
The 23 October 2002 Nenana Mountain Earthquake (Mw ∼ 6.7) occurred on the Denali Fault (Alaska), to the west of the Mw ∼ 7.9 Denali Earthquake that ruptured the same fault 11 days later. We used 6 interferograms, constructed using radar images from the Canadian Radarsat-1 and European ERS-2 satellites, to determine the coseismic surface deformation and a source model. Data were acquired on ascending and descending satellite passes, with incidence angles between 23 and 45 degrees, and time intervals of 72 days or less. Modeling the event as dislocations in an elastic half space suggests that there was nearly 0.9 m of right-lateral strike-slip motion at depth, on a near-vertical fault, and that the maximum slip in the top 4 km of crust was less than 0.2 m. The Nenana Mountain Earthquake increased the Coulomb stress at the future hypocenter of the 3 November 2002, Denali Earthquake by 30–60 kPa.
NASA Astrophysics Data System (ADS)
WANG, X.; Wei, S.; Bradley, K. E.
2017-12-01
Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al. 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-sized earthquakes (M > 4.5) to refine their source parameters, which include locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties of the horizontal location, depth, and dip angle estimates are as small as 5 km, 2 km, and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or reactivation of an old normal fault buried beneath the forearc basin.
We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along-strike variation. The refined megathrust geometry will improve our understanding of the tectonic setting in this region and place further constraints on rupture processes of the hazardous megathrust.
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
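The iterative partitioning of log spectral amplitudes into source and receiver terms can be sketched on a synthetic example. This toy version omits the path term and the formal uncertainty estimates of the actual method; it simply alternates conditional means under a zero-mean station constraint, and all input values are synthetic:

```python
import numpy as np

def decompose_spectra(log_amp, n_iter=50):
    """Iteratively partition log spectral amplitudes log_amp[i, j]
    (event i recorded at station j) into event terms s[i] and station
    terms r[j]. The station terms are constrained to zero mean, since
    the absolute baseline trades off between the two sets of terms."""
    n_ev, n_st = log_amp.shape
    s, r = np.zeros(n_ev), np.zeros(n_st)
    for _ in range(n_iter):
        s = (log_amp - r[None, :]).mean(axis=1)  # update event terms
        r = (log_amp - s[:, None]).mean(axis=0)  # update station terms
        r -= r.mean()                            # fix the trade-off
    return s, r

# synthetic check: build data from known terms and recover them
true_s = np.array([1.0, 2.0, 3.0])
true_r = np.array([-0.5, 0.0, 0.5, 0.2])
true_r -= true_r.mean()
data = true_s[:, None] + true_r[None, :]
est_s, est_r = decompose_spectra(data)
```

Doing this frequency by frequency yields relative source spectra for every event at once, which is what makes the approach tractable for tens of thousands of earthquakes.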
Failure of self-similarity for large (Mw > 8¼) earthquakes.
Hartzell, S.H.; Heaton, T.H.
1988-01-01
Compares teleseismic P-wave records for earthquakes in the magnitude range 6.0-9.5 with synthetics for a self-similar, omega-squared source model and concludes that the energy radiated by very large earthquakes (Mw > 8¼) is not self-similar to that radiated by smaller earthquakes (Mw < 8¼). Furthermore, in the period band from 2 s to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of omega^-1.5. This spectral decay rate is consistent with a previously noted tendency of the omega-squared model to overestimate Ms for large earthquakes. -Authors
NASA Astrophysics Data System (ADS)
Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.
2013-05-01
We introduce a novel approach for imaging earthquake dynamics from ground motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanism, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as well as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data.
Some parameters found for the Zumpango earthquake are Δτ = 30.2 ± 6.2 MPa, Er = 0.68 ± 0.36 × 10^15 J, G = 1.74 ± 0.44 × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09, and Mw = 6.64 ± 0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity, and moment magnitude, respectively. (Figure: Mw 6.5 intraslab Zumpango earthquake location, station locations, and tectonic setting in central Mexico.)
Open Source Tools for Seismicity Analysis
NASA Astrophysics Data System (ADS)
Powers, P.
2010-12-01
The spatio-temporal analysis of seismicity plays an important role in earthquake forecasting and is integral to research on earthquake interactions and triggering. For instance, the third version of the Uniform California Earthquake Rupture Forecast (UCERF), currently under development, will use Epidemic Type Aftershock Sequences (ETAS) as a model for earthquake triggering. UCERF will be a "living" model and therefore requires robust, tested, and well-documented ETAS algorithms to ensure transparency and reproducibility. Likewise, as earthquake aftershock sequences unfold, real-time access to high quality hypocenter data makes it possible to monitor the temporal variability of statistical properties such as the parameters of the Omori Law and the Gutenberg Richter b-value. Such statistical properties are valuable as they provide a measure of how much a particular sequence deviates from expected behavior and can be used when assigning probabilities of aftershock occurrence. To address these demands and provide public access to standard methods employed in statistical seismology, we present well-documented, open-source JavaScript and Java software libraries for the on- and off-line analysis of seismicity. The Javascript classes facilitate web-based asynchronous access to earthquake catalog data and provide a framework for in-browser display, analysis, and manipulation of catalog statistics; implementations of this framework will be made available on the USGS Earthquake Hazards website. The Java classes, in addition to providing tools for seismicity analysis, provide tools for modeling seismicity and generating synthetic catalogs. These tools are extensible and will be released as part of the open-source OpenSHA Commons library.
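One of the statistical properties mentioned above, the Gutenberg-Richter b-value, has a standard closed-form maximum-likelihood estimate (Aki 1965, with the usual half-bin correction for binned magnitudes). The libraries described here are in JavaScript and Java, so the Python sketch below is only an illustration of the formula, not their API:

```python
import math

def aki_b_value(mags, m_c, dm=0.1):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki 1965):
        b = log10(e) / (mean(m) - (m_c - dm/2)),
    computed from magnitudes at or above the completeness threshold
    m_c, with the half-bin correction dm/2 for binning width dm."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

Tracking this estimate in a sliding window over an unfolding aftershock sequence is one way to quantify deviation from expected behavior.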
Hartzell, S.; Iida, M.
1990-01-01
Strong motion records for the Whittier Narrows earthquake are inverted to obtain the history of slip. Both constant rupture velocity models and variable rupture velocity models are considered. The results show a complex rupture process within a relatively small source volume, with at least four separate concentrations of slip. Two sources are associated with the hypocenter, the larger having a slip of 55-90 cm, depending on the rupture model. These sources have a radius of approximately 2-3 km and are ringed by a region of reduced slip. The aftershocks fall within this low-slip annulus. Other sources with slips from 40 to 70 cm each ring the central source region and the aftershock pattern. All the sources are predominantly thrust, although some minor right-lateral strike-slip motion is seen. The overall dimensions of the Whittier earthquake from the strong motion inversions are 10 km long (along strike) and 6 km wide (down dip). The preferred dip is 30° and the preferred average rupture velocity is 2.5 km/s. Moment estimates range from 7.4 to 10.0 × 10^24 dyn cm, depending on the rupture model. -Authors
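The moment estimates quoted above are consistent with the inverted fault dimensions through M0 = mu * A * D. A quick check, assuming a typical crustal rigidity of 3 × 10^11 dyn/cm² and an average slip of 45 cm; both numbers are assumptions for illustration, not values stated in the paper:

```python
def seismic_moment_dyn_cm(length_km, width_km, slip_cm, mu=3.0e11):
    """Seismic moment M0 = mu * A * D in dyn*cm, with rigidity mu in
    dyn/cm^2 (assumed typical crustal value), fault area A from the
    along-strike length and down-dip width, and average slip D in cm."""
    area_cm2 = (length_km * 1e5) * (width_km * 1e5)
    return mu * area_cm2 * slip_cm

# 10 km x 6 km fault with 45 cm average slip
m0 = seismic_moment_dyn_cm(10.0, 6.0, 45.0)
```

The result, about 8 × 10^24 dyn cm, falls inside the 7.4-10.0 × 10^24 dyn cm range from the inversions.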
A seismoacoustic study of the 2011 January 3 Circleville earthquake
NASA Astrophysics Data System (ADS)
Arrowsmith, Stephen J.; Burlacu, Relu; Pankow, Kristine; Stump, Brian; Stead, Richard; Whitaker, Rod; Hayward, Chris
2012-05-01
We report on a unique set of infrasound observations from a single earthquake, the 2011 January 3 Circleville earthquake (Mw 4.7, depth of 8 km), which was recorded by nine infrasound arrays in Utah. Based on an analysis of the signal arrival times and backazimuths at each array, we find that the infrasound arrivals at six arrays can be associated with the same source and that the source location is consistent with the earthquake epicentre. Results of propagation modelling indicate that the lack of associated arrivals at the remaining three arrays is due to path effects. Based on these findings, we form the working hypothesis that the infrasound is generated by body waves causing the epicentral region to pump the atmosphere, akin to a baffled piston. To test this hypothesis, we have developed a numerical seismoacoustic model to simulate the generation of epicentral infrasound from earthquakes. We model the generation of seismic waves using a 3-D finite difference algorithm that accounts for the earthquake moment tensor, source time function, depth and local geology. The resultant acceleration-time histories on a 2-D grid at the surface then provide the initial conditions for modelling the near-field infrasonic pressure wave using the Rayleigh integral. Finally, we propagate the near-field source pressure through the Ground-to-Space atmospheric model using a time-domain Parabolic Equation technique. By comparing the resultant predictions with the six epicentral infrasound observations from the 2011 January 3 Circleville earthquake, we show that the observations agree well with our predictions. The predicted and observed amplitudes are within a factor of 2 (on average, the synthetic amplitudes are a factor of 1.6 larger than the observed amplitudes). In addition, arrivals are predicted at all six arrays where signals are observed, and importantly not predicted at the remaining three arrays.
Durations are typically predicted to within a factor of 2, and in some cases much better. These results suggest that measured infrasound from the Circleville earthquake is consistent with the generation of infrasound from body waves in the epicentral region.
NASA Astrophysics Data System (ADS)
Somei, K.; Asano, K.; Iwata, T.; Miyakoshi, K.
2012-12-01
After the 1995 Kobe earthquake, many M7-class inland earthquakes occurred in Japan. Some of those events (e.g., the 2004 Chuetsu earthquake) occurred in a tectonic zone that is characterized as a high strain rate zone by GPS observation (Sagiya et al., 2000) or by a dense distribution of active faults. That belt-like zone along the Japan Sea coast of the Tohoku and Chubu districts and the north of the Kinki district is called the Niigata-Kobe tectonic zone (NKTZ, Sagiya et al., 2000). We investigate the seismic scaling relationship for recent inland crustal earthquake sequences in Japan and compare source characteristics between events occurring inside and outside of NKTZ. We used the S-wave coda for estimating source spectra. The source spectral ratio is obtained from the S-wave coda spectral ratio between the records of large and small events occurring close to each other, using the nation-wide strong motion networks (K-NET and KiK-net) and the broadband seismic network (F-net), to remove propagation-path and site effects. We carefully examined the commonality of the decay of coda envelopes between event-pair records and modeled the observed spectral ratio by a source spectral ratio function, assuming omega-squared source models for the large and small events. We estimated the corner frequencies and seismic moment ratio from the modeled spectral ratio functions. We determined Brune stress drops of 356 events (Mw 3.1-6.9) in ten earthquake sequences occurring in NKTZ and six sequences occurring outside of NKTZ. Most source spectra obey the omega-squared model. There is no obvious systematic difference between the stress drops of events inside NKTZ and those of events outside it. We may conclude that no systematic difference exists in the seismic source scaling of events occurring inside and outside of NKTZ and that the average source scaling relationship is applicable to inland crustal earthquakes.
Acknowledgements: Waveform data were provided from K-NET, KiK-net and F-net operated by National Research Institute for Earth Science and Disaster Prevention Japan. This study is supported by Multidisciplinary research project for Niigata-Kobe tectonic zone promoted by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
The energy release in earthquakes, and subduction zone seismicity and stress in slabs. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Vassiliou, M. S.
1983-01-01
Energy release in earthquakes is discussed. Dynamic energy from source time function, a simplified procedure for modeling deep focus events, static energy estimates, near source energy studies, and energy and magnitude are addressed. Subduction zone seismicity and stress in slabs are also discussed.
NASA Astrophysics Data System (ADS)
Salaree, Amir; Okal, Emile A.
2018-04-01
We present a seismological and hydrodynamic investigation of the earthquake of 13 April 1923 at Ust'-Kamchatsk, Northern Kamchatka, which generated a more powerful and damaging tsunami than the larger event of 03 February 1923, thus qualifying as a so-called "tsunami earthquake". On the basis of modern relocations, we suggest that it took place outside the fault area of the mainshock, across the oblique Pacific-North America plate boundary, a model confirmed by a limited dataset of mantle waves, which also confirms the slow nature of the source, characteristic of tsunami earthquakes. However, numerical simulations for a number of legitimate seismic models fail to reproduce the sharply peaked distribution of tsunami wave amplitudes reported in the literature. By contrast, we can reproduce the distribution of reported wave amplitudes using an underwater landslide as a source of the tsunami, itself triggered by the earthquake inside the Kamchatskiy Bight.
Earthquake source properties from pseudotachylite
Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan
2016-01-01
The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast, paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and the much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006], as observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate-sized earthquakes of the Gole Larghe fault zone in the Italian Alps, where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa.
More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large. The alternatives are that fracture energy is routinely greater than existing models allow, that pseudotachylite is not representative of the shear strength during the earthquake that generated it, or that the strength excess is larger than we have allowed.
Garcia, D.; Mah, R.T.; Johnson, K.L.; Hearne, M.G.; Marano, K.D.; Lin, K.-W.; Wald, D.J.
2012-01-01
We introduce the second version of the U.S. Geological Survey ShakeMap Atlas, an openly available compilation of nearly 8,000 ShakeMaps of the most significant global earthquakes between 1973 and 2011. This revision of the Atlas includes: (1) a new version of the ShakeMap software that improves data usage and uncertainty estimation; (2) an updated earthquake source catalogue that includes regional locations and finite-fault models; (3) a refined strategy for selecting prediction and conversion equations based on a new seismotectonic regionalization scheme; and (4) vastly more macroseismic intensity and ground-motion data from regional agencies. All these changes make the new Atlas a self-consistent, calibrated ShakeMap catalogue that constitutes an invaluable resource for investigating near-source strong ground motion, as well as for seismic hazard, scenario, risk, and loss-model development. To this end, the Atlas will provide a hazard base layer for PAGER loss calibration and for the Earthquake Consequences Database within the Global Earthquake Model initiative.
On the scale dependence of earthquake stress drop
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Tinti, Elisa; Cirella, Antonella
2016-10-01
We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension and analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
Moment Tensor Inversion of the 1998 Aiquile Earthquake Using Long-period surface waves
NASA Astrophysics Data System (ADS)
Wang, H.
2016-12-01
On 22 May 1998 at 04:49 (GMT), an earthquake of magnitude Mw = 6.6 struck the Aiquile region of Bolivia, causing 105 deaths and significant damage to the nearby towns of Hoyadas and Pampa Grande. This was the largest shallow earthquake (15 km depth) in Bolivia in over 50 years, and was felt as far away as Sucre, approximately 100 km from the epicenter. In this report, a centroid moment tensor (CMT) inversion is carried out using body waves and surface waves from the 1998 Aiquile earthquake with 1-D and 3-D Earth models to obtain the source parameters and moment tensor, which are subsequently compared against the Global Centroid Moment Tensor Catalog (GCMT). Excitation kernels are also computed, and synthetic data are generated with the different Earth models. The two methods used to calculate synthetic seismograms are SPECFEM3D Globe, based on the shear-wave mantle model S40RTS and the crustal model CRUST 2.0, and AxiSEM, based on the 1-D Earth model PREM. The report explains the theory behind CMT inversion; the source parameters obtained from the inversion can be used to reveal the tectonics of the source of this earthquake, and this information is helpful in assessing the seismic hazard and overall tectonic regime of the region. Furthermore, the synthetic seismograms and the inversion solution are used to assess the two models.
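For a fixed centroid location and origin time, a CMT inversion of this kind reduces to a linear least-squares problem d ≈ Gm over the six independent moment-tensor components. A minimal sketch with synthetic excitation kernels standing in for the real ones (all numbers here are illustrative, not from this study):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_mt = 600, 6             # waveform samples, moment-tensor components
G = rng.standard_normal((n_samples, n_mt))             # stand-in excitation kernels
m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.2, -0.1])   # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + 0.01 * rng.standard_normal(n_samples) # "observed" data plus noise

# Least-squares solution m = (G^T G)^{-1} G^T d
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)

# Scalar moment from the recovered tensor: M0 = sqrt(sum(Mij^2) / 2)
M = np.array([[m_est[0], m_est[3], m_est[4]],
              [m_est[3], m_est[1], m_est[5]],
              [m_est[4], m_est[5], m_est[2]]])
M0 = np.sqrt(np.sum(M * M) / 2.0)
```

In practice the kernels G are computed from an Earth model (here, SPECFEM3D Globe or AxiSEM synthetics), and the centroid location and depth are searched over as well.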
NASA Astrophysics Data System (ADS)
Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo
2018-03-01
On 2016 November 13, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damage and had a great impact on the local environment and society. Considering the tectonic environment and mapped active faults, the field investigations and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, making the focal mechanism one of the most complicated among historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests the rupture initiated in the epicentral area near the Humps fault and then propagated northeastward along several faults, as far as the offshore Needles fault. The Mw 7.8 event is a mixture of right-lateral strike-slip and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offset distribution of the ruptured region from the slip on the uppermost subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.
NASA Astrophysics Data System (ADS)
Koketsu, K.; Ikegami, Y.; Kimura, T.; Miyake, H.
2006-12-01
Large earthquakes at shallow depths can excite long-period ground motions affecting large-scale structures in distant sedimentary basins. For example, the 1985 Michoacan, Mexico, earthquake caused 20,000 fatalities in Mexico City at an epicentral distance of 400 km, and the 2003 Tokachi-oki, Japan, earthquake damaged oil tanks in the Yufutsu basin 250 km away (Koketsu et al., 2005). Similar long-range effects were also observed during the 2004 off Kii-peninsula earthquake (Miyake and Koketsu, 2005). In order to examine whether the 1906 San Francisco earthquake and the Los Angeles (LA) basin form such a case, we simulate the long-period ground motions across almost the whole of California caused by the earthquake using the finite element method (FEM) with a voxel mesh (Koketsu et al., 2004). The LA basin is located at a distance of about 600 km from the source region of the 1906 San Francisco earthquake. The 3-D heterogeneous velocity structure model for the ground motion simulation is constructed based on the SCEC Unified Velocity Model for southern California and the USGS Bay Area Velocity Model for northern California. The source model of the earthquake is constructed according to Wald et al. (1993). Since we use a mesh with intervals of 500 m, the voxel FEM can compute seismic waves with frequencies lower than 0.2 Hz. Although ground motions to the south of the source region are smaller than those to the north because of the rupture directivity effect, we can see fairly well developed long-period ground motions in the LA basin in the preliminary result of Kimura et al. (2006). However, we obtained only 8 cm/s and 25 cm/s for PGV and peak velocity response spectrum in the LA basin: we modeled the velocity structure only down to a depth of 20 km, neglecting the Moho reflections, and we did not include layers with Vs smaller than 1.0 km/s. In this study, we include deeper parts and use a more accurate velocity structure model with low-velocity sediments of Vs smaller than 1.0 km/s.
Localizing Submarine Earthquakes by Listening to the Water Reverberations
NASA Astrophysics Data System (ADS)
Castillo, J.; Zhan, Z.; Wu, W.
2017-12-01
Mid-Ocean Ridge (MOR) earthquakes generally occur far from any land-based station and are of moderate magnitude, making them difficult to detect and, in most cases, to locate accurately. This limits our understanding of how MOR normal and transform faults move and the manner in which they slip. Unlike for continental events, seismic records from earthquakes occurring beneath the ocean floor show complex reverberations caused by P-wave energy trapped in the water column, which are highly dependent on the source location and the efficiency with which energy propagates to the near-source surface. These later arrivals are commonly considered a nuisance, as they can interfere with the primary arrivals. In this study, however, we take advantage of the wavefield's high sensitivity to small changes in seafloor topography, together with the present-day availability of worldwide multi-beam bathymetry, to relocate submarine earthquakes by modeling these water-column reverberations in teleseismic signals. Using a three-dimensional hybrid method for modeling body-wave arrivals, we demonstrate that an accurate hypocentral location of a submarine earthquake (<5 km) can be achieved if the structural complexities near the source region are appropriately accounted for. This presents a novel way of studying earthquake source properties and will serve as a means to explore the influence of physical fault structure on the seismic behavior of transform faults.
Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models
NASA Astrophysics Data System (ADS)
Van Houtte, Chris; Denolle, Marine
2018-04-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 MW7.1 Kumamoto, Japan earthquake and a MW5.3 aftershock of the 2016 MW7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc was calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
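The spectral model underlying such corner-frequency estimates is typically the generalized Brune form S(f) = Ω0 / (1 + (f/fc)^n), with n = 2 giving the ω-square model. A minimal single-spectrum fit, shown here with scipy's ordinary nonlinear least squares rather than the hierarchical Bayesian machinery of the paper, on purely synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def source_spectrum(f, omega0, fc, n):
    """Generalized Brune model: flat level omega0, corner fc, falloff rate n."""
    return omega0 / (1.0 + (f / fc) ** n)

# Synthetic "observed" displacement spectrum (all values illustrative)
f = np.logspace(-1, 1.5, 200)                 # 0.1 to ~31.6 Hz
rng = np.random.default_rng(1)
obs = source_spectrum(f, 2.0e-3, 1.5, 2.0) \
      * np.exp(0.05 * rng.standard_normal(f.size))   # 5% multiplicative noise

# Joint fit of the flat level, corner frequency, and falloff rate
popt, _ = curve_fit(source_spectrum, f, obs,
                    p0=[1e-3, 1.0, 2.0], bounds=(0, np.inf))
omega0_est, fc_est, n_est = popt
```

A hierarchical model generalizes this by letting fc and n vary across the focal sphere with station-level terms pooled toward group means, which is what prevents the overfitting the abstract describes.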
Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan
NASA Astrophysics Data System (ADS)
Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.
2017-12-01
An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which we hereafter call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves, with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that it can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached peak amplitudes of 1.5-2.0 cm in the leading upward signals, followed by continuous oscillations [Fukao et al., 2016]. To model the tsunami source, i.e. the sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are solved with the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we find that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal such as was observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift and examine the sizes and amplitudes of the main uplift and the subsidence ring. As a result, a model with a main uplift of around 1.0 m and a radius of 4 km, surrounded by a ring of small subsidence, shows good agreement between synthetic and observed waveforms.
The results yield two implications for the deformation process that help us understand the physical mechanism of the 2015 Torishima earthquake. First, the estimated large uplift within Smith Caldera implies the earthquake may be related to volcanic activity of the caldera. Second, the modeled ring of subsidence surrounding the caldera suggests that the process may have included notable subsidence, at least on the northeastern side outside the caldera.
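The trial source geometry described above, a Gaussian main uplift plus a surrounding ring of subsidence, is simple to parameterize. A sketch of such an initial sea-surface field, with the ring's amplitude, radius, and width chosen for illustration rather than taken from the paper's preferred model:

```python
import numpy as np

def initial_sea_surface(x, y, uplift=1.0, r_uplift=4.0e3,
                        ring_amp=-0.1, r_ring=8.0e3, ring_w=2.0e3):
    """Axisymmetric initial sea-surface displacement (m): a Gaussian main
    uplift of peak `uplift` and e-folding radius `r_uplift`, plus a shallow
    Gaussian ring of subsidence centered at radius `r_ring`.
    The ring parameters here are illustrative assumptions."""
    r = np.hypot(x, y)
    main = uplift * np.exp(-(r / r_uplift) ** 2)
    ring = ring_amp * np.exp(-((r - r_ring) / ring_w) ** 2)
    return main + ring

# Evaluate on a 40 km x 40 km grid with 500 m spacing
x = np.arange(-20e3, 20e3 + 1, 500.0)
X, Y = np.meshgrid(x, x)
eta = initial_sea_surface(X, Y)
```

A field like `eta` would then be fed to the tsunami solver as the initial condition; the small negative annulus is what produces the initial downward signal at the gauges.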
NASA Astrophysics Data System (ADS)
Viens, L.; Miyake, H.; Koketsu, K.
2016-12-01
Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point-source method, which is appropriate for moderate events, to finite-source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with existing physics-based simulations to improve seismic hazard assessment.
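The final step, summing the interpolated Green's functions over subfaults with rupture-time delays, can be sketched as below. In practice each subfault trace is also scaled by its moment and convolved with a slip-rate function; both are omitted here for brevity, and all numbers are illustrative:

```python
import numpy as np

def sum_subfault_gfs(gfs, distances, vr=2500.0, dt=0.1):
    """Sum subfault Green's functions with rupture-time delays: each trace
    is shifted by (along-fault distance from the hypocenter) / (rupture
    velocity vr, m/s), rounded to the nearest sample of width dt (s).
    gfs: (n_subfaults, n_samples); distances: (n_subfaults,) in m."""
    n_sub, n_samp = gfs.shape
    delays = (distances / vr / dt).round().astype(int)   # delays in samples
    out = np.zeros(n_samp + delays.max())
    for trace, k in zip(gfs, delays):
        out[k:k + n_samp] += trace
    return out

# Toy example: three identical pulses at 0, 5 and 10 km from the hypocenter
t = np.arange(0.0, 20.0, 0.1)
pulse = np.exp(-((t - 2.0) / 0.5) ** 2)
gfs = np.tile(pulse, (3, 1))
u = sum_subfault_gfs(gfs, np.array([0.0, 5e3, 10e3]))
```

Trying several values of `vr`, as the study does, simply regenerates `u` with different delay patterns.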
Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes
NASA Astrophysics Data System (ADS)
Yamada, M.; Mori, J. J.
2009-12-01
Current earthquake early warning systems assume point-source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point-source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong-motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. Given ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The information on the fault rupture location is a group of points where the stations are located. However, for practical purposes, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the methodology of NS/FS classification to characterize 2-dimensional fault geometries and apply it to strong-motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1.
Furthermore, we illustrate our method with strong motion data of the 2007 Noto-hanto earthquake, 2008 Iwate-Miyagi earthquake, and 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
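The NS/FS classifier quoted above is straightforward to implement; the units of Za and Hv follow the original regression (assumed here to be cm/s² and cm/s, which the abstract does not state):

```python
import math

def near_source_probability(za, hv):
    """Probability that a station lies in the near-source region
    (fault rupture distance < 10 km), after Yamada et al. (2007):
        f = 6.046*log10(Za) + 7.885*log10(Hv) - 27.091
        P = 1 / (1 + exp(-f))
    za: peak vertical acceleration, hv: peak horizontal velocity."""
    f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
    return 1.0 / (1.0 + math.exp(-f))
```

Large peak values drive f positive and P toward 1; weak shaking drives P toward 0, which is what lets a dense network outline the rupture in map view.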
Bakun, W.H.
2005-01-01
Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42MJMA - 0.00887Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^(1/2), and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting-plate intensity attenuation model, where intensity is equal to -8.33 + 2.19MJMA - 0.00550Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting-plate model. Using the subducting-plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
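Both attenuation models are simple closed forms and easy to evaluate; a sketch (the `model` keyword names are ours, not Bakun's):

```python
import math

def jma_intensity(mjma, delta_km, depth_km, model="crustal"):
    """Predicted JMA intensity from Bakun (2005)-style attenuation models:
        crustal:    I = -1.89 + 1.42*M - 0.00887*Dh - 1.66*log10(Dh)
        subducting: I = -8.33 + 2.19*M - 0.00550*Dh - 1.14*log10(Dh)
    where Dh = sqrt(delta^2 + h^2) is hypocentral distance (km)."""
    dh = math.hypot(delta_km, depth_km)
    if model == "crustal":
        return -1.89 + 1.42 * mjma - 0.00887 * dh - 1.66 * math.log10(dh)
    return -8.33 + 2.19 * mjma - 0.00550 * dh - 1.14 * math.log10(dh)

# Example: MJMA 7.0 shallow crustal event, 50 km epicentral distance, 10 km depth
i_crustal = jma_intensity(7.0, 50.0, 10.0)
```

Inverting this forward model over a grid of trial epicenters and magnitudes, against a set of historical intensity assignments, is how the intensity center and Mjma are estimated.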
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources
NASA Astrophysics Data System (ADS)
Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.
2017-09-01
We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures have unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is greatest. The basis of a physics-based approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper, and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation-path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction.
We have developed a methodology for synthesizing physics-based broadband ground motion that incorporates the effects of realistic earthquake rupture along specific faults and the actual geology between the source and site.
NASA Astrophysics Data System (ADS)
Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui
2018-02-01
The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating incomplete historical earthquake records along with the instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records reaching back more than 1000 years and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude Mmax, are estimated using a maximum-likelihood algorithm that allows for the incompleteness of the catalog. To compute the hazard, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for the epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years, assuming bedrock conditions. The resulting PGA and SA maps show significant spatio-temporal variation in the hazard values. In general, the hazard is found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of the historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
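The Gutenberg-Richter b value can be estimated by maximum likelihood with Aki's (1965) estimator, including Utsu's correction for binned magnitudes. This is a standard approach; the sketch below is illustrative and not the paper's exact algorithm, which additionally handles catalog incompleteness:

```python
import numpy as np

def gutenberg_richter_b(mags, m_c, dm=0.1):
    """Maximum-likelihood b value (Aki, 1965) with Utsu's binning correction:
        b = log10(e) / (mean(M) - (m_c - dm/2))
    mags: catalog magnitudes; m_c: magnitude of completeness;
    dm: magnitude bin width (use 0 for continuous magnitudes)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]                      # keep only the complete part
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic catalog with a true b of 1.0 (G-R magnitudes above m_c
# are exponentially distributed with mean log10(e)/b)
rng = np.random.default_rng(42)
b_true, m_c = 1.0, 4.0
mags = m_c + rng.exponential(scale=np.log10(np.e) / b_true, size=20000)
b_est = gutenberg_richter_b(mags, m_c, dm=0.0)
```

The mean rate γ then follows from the count of events above m_c divided by the catalog duration, and Mmax is typically set from the largest historical event plus an uncertainty term.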
Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan
2016-01-01
A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two-stage approach to EEW: (1) detection and initial characterization using strong-motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large-magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G-FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G-FAST source characterization modules, peak ground displacement scaling and Centroid Moment Tensor-driven finite-fault-slip modeling, under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G-FAST and previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges of integration into the U.S. Geological Survey's ShakeAlert EEW system.
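The first G-FAST module relies on a peak-ground-displacement scaling law commonly written as log10(PGD) = A + B·Mw + C·Mw·log10(R), inverted for magnitude. The coefficients below are placeholders for illustration, not the operational G-FAST values:

```python
import math

# Illustrative coefficients of the published PGD-scaling form
# (PGD in cm, hypocentral distance R in km); assumed values, not G-FAST's.
A, B, C = -4.434, 1.047, -0.138

def magnitude_from_pgd(pgd_cm, r_km):
    """Invert log10(PGD) = A + B*Mw + C*Mw*log10(R) for moment magnitude,
    given peak ground displacement (cm) and hypocentral distance (km)."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))
```

In operation, each GNSS station yields one magnitude estimate from its running peak displacement, and the estimates are averaged over stations as the wavefield evolves.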
Lisbon 1755, a multiple-rupture earthquake
NASA Astrophysics Data System (ADS)
Fonseca, J. F. B. D.
2017-12-01
The Lisbon earthquake of 1755 poses a challenge to seismic hazard assessment. Reports pointing to MMI 8 or above at distances of the order of 500 km led to magnitude estimates near M9 in classic studies. A refined analysis of the coeval sources lowered the estimates to 8.7 (Johnston, 1998) and 8.5 (Martinez-Solares, 2004). I posit that even these lower magnitude values reflect the combined effect of multiple ruptures. Attempts to identify a single source capable of explaining the damage reports with published ground motion models did not gather consensus and, compounding the challenge, the analysis of tsunami traveltimes has led to disparate source models, sometimes separated by a few hundred kilometers. From this viewpoint, the most credible source would combine a subset of the multiple active structures identifiable in SW Iberia. No individual moment magnitude needs to be above M8.1, thus rendering the search for candidate structures less challenging. The possible combinations of active structures should be ranked as a function of their explanatory power, for macroseismic intensities and tsunami traveltimes taken together. I argue that the Lisbon 1755 earthquake is an example of a previously unrecognized, distinct class of intraplate earthquake, of which the Indian Ocean earthquake of 2012 is the first instrumentally recorded example, showing space and time correlation over scales of the order of a few hundred km and a few minutes. Other examples may exist in the historical record, such as the M8 1556 Shaanxi earthquake, with an unusually large damage footprint (MMI equal to or above 6 in 10 provinces; 830,000 fatalities).
The ability to trigger seismicity globally, observed after the 2012 Indian Ocean earthquake, may be a characteristic of this type of event: occurrences in Massachusetts (M5.9 Cape Ann earthquake on 18/11/1755), Morocco (M6.5 Fez earthquake on 27/11/1755) and Germany (M6.1 Duren earthquake on 18/02/1756) in all likelihood had a causal link to the Lisbon earthquake. This may reflect the very long period of the surface waves generated by the combined sources as a result of the delays between ruptures. Recognition of this new class of large intraplate earthquakes may pave the way to a better understanding of the mechanisms driving intraplate deformation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, R.B.; Nguyen, B.
Earthquake activity in the New Madrid Seismic Zone has been monitored by regional seismic networks since 1975. During this time period, over 3,700 earthquakes have been located within the region bounded by latitudes 35°-39°N and longitudes 87°-92°W. Most of these earthquakes occur within a 1.5° x 2° zone centered on the Missouri Bootheel. Source parameters of larger earthquakes in the zone and in eastern North America are determined using surface-wave spectral amplitudes and broadband waveforms for the purpose of determining the focal mechanism, source depth and seismic moment. Waveform modeling of broadband data is shown to be a powerful tool in defining these source parameters when used complementarily with regional seismic network data and, in addition, in verifying the correctness of previously published focal mechanism solutions.
Complex earthquake rupture and local tsunamis
Geist, E.L.
2002-01-01
In contrast to far-field tsunami amplitudes, which are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. In a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes, in the magnitude range 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns is generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized, and the vertical displacement fields from point-source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1).
Analysis of the results indicates that for earthquakes of fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field, derived from the inherent complexity of subduction zone earthquakes, than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
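A stochastic self-affine slip distribution of the kind described can be generated by spectral synthesis: random phases under an amplitude spectrum that falls off as k⁻² beyond a corner wavenumber, consistent with ω-square far-field spectra. All parameters below are illustrative, not the study's:

```python
import numpy as np

def stochastic_slip(nx=64, ny=32, corner=3.0, mean_slip=1.0, seed=0):
    """Random self-affine slip distribution by spectral synthesis:
    amplitude spectrum ~ 1 / (1 + (k/corner)^2), random phases.
    Returned slip is non-negative with the requested mean (which fixes
    the seismic moment for a given area and rigidity)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[None, :] * nx          # integer wavenumbers
    ky = np.fft.fftfreq(ny)[:, None] * ny
    k = np.hypot(kx, ky)
    amp = 1.0 / (1.0 + (k / corner) ** 2)          # k^-2 falloff
    phase = np.exp(2j * np.pi * rng.random((ny, nx)))
    slip = np.real(np.fft.ifft2(amp * phase))      # real part enforces a real field
    slip -= slip.min()                             # shift to non-negative slip
    return slip * (mean_slip / slip.mean())        # rescale to the target mean

slip = stochastic_slip()
```

Repeating this with different seeds yields the ensemble of N = 100 slip patterns with identical moment that the study propagates to nearshore tsunami amplitudes.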
NASA Astrophysics Data System (ADS)
Li, Linlin; Switzer, Adam D.; Wang, Yu; Chan, Chung-Han; Qiu, Qiang; Weiss, Robert
2017-04-01
Current tsunami inundation maps are commonly generated using deterministic scenarios, either for real-time forecasting or for hypothetical "worst-case" events. Such maps are mainly used for emergency response and evacuation planning and do not include return-period information. In practice, however, probabilistic tsunami inundation maps are required for a wide variety of applications, such as land-use planning, engineering design, and insurance purposes. In this study, we present a method to develop probabilistic tsunami inundation maps using a stochastic earthquake source model. To demonstrate the methodology, we take Macau, a coastal city on the South China Sea, as an example. This method offers two major advances: it incorporates the most up-to-date information on seismic tsunamigenic sources along the Manila megathrust, and it integrates a stochastic source model into a Monte Carlo-type simulation in which a broad range of slip distribution patterns is generated for large numbers of synthetic earthquake events. By aggregating the large number of inundation simulation results, we analyze the uncertainties associated with variability in earthquake rupture location and slip distribution. We also explore how the tsunami hazard in Macau evolves in the context of sea level rise. Our results suggest that Macau faces moderate tsunami risk due to its low-lying elevation, extensive land reclamation, high coastal population, and major infrastructure density. Macau consists of four districts: Macau Peninsula, Taipa Island, Coloane Island, and the Cotai Strip. Of these, Macau Peninsula is the most vulnerable to tsunami because of its low elevation and its exposure to direct and refracted waves from the offshore region and reflected waves from the mainland. Earthquakes with magnitude larger than Mw 8.0 in the northern Manila trench would likely cause hazardous inundation in Macau.
Using a stochastic source model, we are able to derive a spread of potential tsunami impacts for earthquakes of the same magnitude. This diversity is caused by both random rupture locations and heterogeneous slip distributions. Adding the sea level rise component, the inundation depth caused by a 1 m sea level rise is equivalent to that caused by the 90th percentile of an ensemble of Mw 8.4 earthquakes.
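The aggregation step, summarizing a Monte Carlo ensemble of inundation results with percentiles and exceedance probabilities, can be sketched as follows. The ensemble here is synthetic and purely illustrative, standing in for actual inundation simulation output at one coastal point:

```python
import numpy as np

# Hypothetical ensemble of maximum inundation depths (m) at one coastal
# point, one value per synthetic rupture scenario of fixed magnitude.
rng = np.random.default_rng(1)
depths = rng.lognormal(mean=0.0, sigma=0.5, size=300)

# Hazard summary: ensemble percentiles and the probability that a
# chosen depth threshold is exceeded.
p50, p90 = np.percentile(depths, [50, 90])
p_exceed_2m = np.mean(depths > 2.0)
```

Comparing such percentile maps with and without a sea-level-rise offset added to the still-water level is one simple way to express the equivalence noted in the abstract.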
NASA Astrophysics Data System (ADS)
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields, due to fluid injection/extraction and hydraulic fracturing, has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored an increasing number of local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes - Mw 4.5 on 21 March 2015 and Mw 4.1 on 18 August 2015 - were recorded by both the Incorporated Research Institutions for Seismology (IRIS) and the KNSN, and were widely felt by people in Kuwait. These earthquakes occur repeatedly at the same locations close to the oil/gas fields in Kuwait (see the uploaded image). The earthquakes are generally small (Mw < 5) and shallow, with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the locations and source mechanisms of these local earthquakes, with their uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that these local earthquakes most likely occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths of less than about 4 km.
As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to damage local structures built without seismic design criteria.
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. The ShakeAlert system therefore requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them with observed shaking information, both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false-alarm tolerance, and time required to act.
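As a hedged illustration of the Bayesian combination idea (not the actual CDM implementation), independent Gaussian estimates of a common source parameter such as magnitude can be fused by precision weighting under a flat prior. The numerical values below are invented:

```python
import numpy as np

def fuse_gaussian_estimates(means, sigmas):
    """Combine independent Gaussian estimates from several algorithms
    into one posterior (flat prior): precision-weighted mean and
    combined standard deviation."""
    w = 1.0 / np.asarray(sigmas, float) ** 2      # precisions
    mu = np.sum(w * np.asarray(means, float)) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return mu, sigma

# e.g. point-source (6.8 +/- 0.4), finite-fault (7.2 +/- 0.3),
# geodetic (7.1 +/- 0.2) magnitude estimates - all hypothetical
mu, sigma = fuse_gaussian_estimates([6.8, 7.2, 7.1], [0.4, 0.3, 0.2])
```

The combined uncertainty is always smaller than the best single estimate's, which is the benefit of merging the independent algorithms.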
NASA Astrophysics Data System (ADS)
Hébert, H.; Burg, P.-E.; Binet, R.; Lavigne, F.; Allgeyer, S.; Schindelé, F.
2012-12-01
The Mw 7.8 2006 July 17 earthquake off the southern coast of Java, Indonesia, has been responsible for a very large tsunami causing more than 700 casualties. The tsunami has been observed on at least 200 km of coastline in the region of Pangandaran (West Java), with run-up heights from 5 to more than 20 m. Such a large tsunami, with respect to the source magnitude, has been attributed to the slow character of the seismic rupture, defining the event as a so-called tsunami earthquake, but it has also been suggested that the largest run-up heights are actually the result of a second local landslide source. Here we test whether a single slow earthquake source can explain the tsunami run-up, using a combination of new detailed data in the region of the largest run-ups and comparison with modelled run-ups for a range of plausible earthquake source models. Using high-resolution satellite imagery (SPOT 5 and Quickbird), the coastal impact of the tsunami is refined in the surroundings of the high-security Permisan prison on Nusa Kambangan island, where 20 m run-up had been recorded directly after the event. These data confirm the extreme inundation lengths close to the prison, and extend the area of maximum impact further along the Nusa Kambangan island (about 20 km of shoreline), where inundation lengths reach several hundreds of metres, suggesting run-up as high as 10-15 m. Tsunami modelling has been conducted in detail for the high run-up Permisan area (Nusa Kambangan) and the PLTU power plant about 25 km eastwards, where run-up reached only 4-6 m and a video recording of the tsunami arrival is available. For the Permisan prison a high-resolution DEM was built from stereoscopic satellite imagery. The regular basin of the PLTU plant was designed using photographs and direct observations. For the earthquake's mechanism, both static (infinite) and finite (kinematic) ruptures are investigated using two published source models. 
The models account rather well for the sea level variation at PLTU, showing better agreement in arrival times for the finite rupture, and predict the Permisan area to be one of the regions where tsunami waves would have focussed. However, the earthquake models that match the data at PLTU do not predict that the wave heights at Permisan are an overall maximum, and predict there no more than 10 m of the 21 m observed. Hence, our results confirm that an additional localized tsunami source off Nusa Kambangan island, such as a submarine landslide, may have increased the tsunami impact at the Permisan site. This reinforces the importance for hazard assessment of further mapping and understanding of the local potential for submarine sliding, as a tsunami source in addition to the usual earthquake sources.
NASA Astrophysics Data System (ADS)
Kumar, A.; Mitra, S.; Suresh, G.
2014-12-01
The Eastern Himalayan System (east of 88°E) is distinct from the rest of the India-Eurasia continental collision, owing to a wider zone of distributed deformation, oblique convergence across two orthogonal plate boundaries, and the near absence of foreland basin sedimentary strata. To understand the seismotectonics of this region, we study the spatial distribution and source mechanisms of earthquakes originating within the Eastern Himalaya, northeast India, and the Indo-Burman Convergence Zone (IBCZ). We compute focal mechanisms of 32 moderate-to-large earthquakes (mb >= 5.4) by modeling teleseismic P- and SH-waveforms from GDSN stations using a least-squares inversion algorithm, and of 7 small-to-moderate earthquakes (3.5 <= mb < 5.4) by modeling local P- and S-waveforms from the NorthEast India Telemetered Network using a non-linear grid search algorithm. We also include source mechanisms from previous studies, either computed by waveform inversion or by first-motion polarity from analog data. The depth distribution of modeled earthquakes reveals that the seismogenic layer beneath northeast India is ~45 km thick. From the source mechanisms we observe that moderate earthquakes in northeast India are spatially clustered in five zones with distinct mechanisms: (a) thrust earthquakes within the Eastern Himalayan wedge, on north-dipping low-angle faults; (b) thrust earthquakes along the northern edge of the Shillong Plateau, on a high-angle south-dipping fault; (c) dextral strike-slip earthquakes along the Kopili fault zone, between the Shillong Plateau and the Mikir Hills, extending southeast beneath the Naga fold belts; (d) dextral strike-slip earthquakes within the Bengal Basin, immediately south of the Shillong Plateau; and (e) deep-focus (>50 km) thrust earthquakes within the IBCZ. Combined with GPS geodetic observations, it is evident that the N20E convergence between India and Tibet is accommodated as elastic strain both within the eastern Himalaya and in regions surrounding the Shillong Plateau.
We hypothesize that the strike-slip earthquakes south of the Plateau occur on re-activated continental rifts paralleling the Eocene hinge zone. The distribution of earthquake hypocenters across the IBCZ reveals active subduction of the Indian plate beneath the Burma micro-plate.
On near-source earthquake triggering
Parsons, T.; Velasco, A.A.
2009-01-01
When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≤ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar-magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to the depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering: direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of pre-surface-wave arrivals shows that M ≥ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Sekiguchi, H.
2011-12-01
We propose a prototype procedure to construct source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on the empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area, Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependence of S and Sa on Mo: S [km^2] = 6.57 × 10^(-11) × Mo^(2/3) (1) and Sa [km^2] = 1.04 × 10^(-11) × Mo^(2/3) (2), with Mo in N m. Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction: [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume a square rupture area and square asperities. [4] Assume the source mechanism to be the same as that of small events in the source region. [5] Prepare plural scenarios covering a variety of numbers of asperities and rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
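Relations (1) and (2) are straightforward to evaluate; a small sketch (the moment value is chosen arbitrarily, roughly an Mw 7.3 event):

```python
def rupture_and_asperity_area(mo_nm):
    """Empirical scaling of Iwata and Asano (2011) for intraslab events:
    total rupture area S and total asperity area Sa (both in km^2) from
    seismic moment Mo (N m), each scaling as Mo^(2/3)."""
    s = 6.57e-11 * mo_nm ** (2.0 / 3.0)    # relation (1)
    sa = 1.04e-11 * mo_nm ** (2.0 / 3.0)   # relation (2)
    return s, sa

s, sa = rupture_and_asperity_area(1.0e20)  # Mo = 1e20 N m (Mw ~ 7.3)
```

Because both areas share the same Mo^(2/3) dependence, the asperity-to-rupture area ratio Sa/S is a constant 1.04/6.57 ≈ 0.16 at all magnitudes.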
Change-point detection of induced and natural seismicity
NASA Astrophysics Data System (ADS)
Fiedler, B.; Holschneider, M.; Zoeller, G.; Hainzl, S.
2016-12-01
Earthquake rates are influenced by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic sources. While the first two sources can be well modeled because the source is known, transient aseismic processes are more difficult to detect. However, detecting the associated changes in earthquake activity is of great interest, because it might help to identify natural aseismic deformation patterns (such as slow slip events) and the occurrence of induced seismicity related to human activities. We develop a Bayesian approach to detect change-points in seismicity data that are modeled by Poisson processes. By means of a likelihood-ratio test, we establish the significance of the change in intensity. The model is also extended to spatiotemporal data to detect the area of the transient changes. The method is first tested on synthetic data and then applied to observational data from the central US and the Bardarbunga volcano in Iceland.
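A minimal sketch of a single-change-point likelihood-ratio scan for Poisson counts (this is a frequentist stand-in for illustration, not the authors' Bayesian implementation; the rates and change-point location are synthetic):

```python
import numpy as np

def _loglik(c):
    """Poisson log-likelihood (up to count-only constants) with the
    rate fixed at its maximum-likelihood estimate, the sample mean."""
    lam = c.mean()
    if lam <= 0.0:
        return 0.0
    return c.sum() * np.log(lam) - c.size * lam

def poisson_changepoint(counts):
    """Return (best split index, log-likelihood ratio) for a single
    change-point in Poisson-distributed per-interval event counts."""
    c = np.asarray(counts, float)
    l0 = _loglik(c)                       # null model: one constant rate
    best_k, best_lr = None, -np.inf
    for k in range(1, c.size):            # scan all candidate splits
        lr = _loglik(c[:k]) + _loglik(c[k:]) - l0
        if lr > best_lr:
            best_k, best_lr = k, lr
    return best_k, best_lr

# synthetic data: rate jumps from ~2 to ~8 events per interval at index 30
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(2, 30), rng.poisson(8, 30)])
k, lr = poisson_changepoint(counts)
```

The significance of the detected change would then be judged from the likelihood ratio, e.g. against its null distribution.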
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism, and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
NASA Astrophysics Data System (ADS)
O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.
2013-01-01
We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location that minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real time on a single processor without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and an approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.
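A toy version of the key linear step: at a fixed source location, static displacements are linear in the six moment-tensor components, so the mechanism follows from a least-squares solve (the iteration over location is the nonlinear part). The Green's function matrix below is a random stand-in, not the layered-medium kernels used in the study:

```python
import numpy as np

# d = G m: displacements are linear in the moment-tensor components m.
rng = np.random.default_rng(42)
n_obs, n_mt = 30, 6                  # e.g. 10 three-component GPS stations
G = rng.normal(size=(n_obs, n_mt))   # hypothetical excitation kernels
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.1, -0.2])  # arbitrary MT
d = G @ m_true + rng.normal(scale=0.01, size=n_obs)   # noisy synthetic data

# Best-fitting mechanism at this (fixed) trial location:
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

In the full scheme this solve would be repeated while iterating on centroid location and depth, keeping the solution that minimizes the overall misfit.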
Seismological constraints on the down-dip shape of normal faults
NASA Astrophysics Data System (ADS)
Reynolds, Kirsty; Copley, Alex
2018-04-01
We present a seismological technique for determining the down-dip shape of seismogenic normal faults. Synthetic models of non-planar source geometries reveal the important signals in teleseismic P and SH waveforms that are diagnostic of down-dip curvature. In particular, along-strike SH waveforms are the most sensitive to variations in source geometry, and have significantly more complex and larger-amplitude waveforms for curved source geometries than for planar ones. We present the results of our forward-modelling technique for 13 earthquakes. Most continental normal-faulting earthquakes that rupture through the full seismogenic layer are planar and have dips of 30°-60°. There is evidence for faults with a listric shape from some of the earthquakes occurring in two regions: Tibet and East Africa. These ruptures occurred on antithetic faults, or minor faults within the hanging walls of the rifts affected, which may suggest a reason for the down-dip curvature. For these earthquakes, the change in dip across the seismogenic part of the fault plane is ≤30°.
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible.
In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Ishibe, Takeo; Harada, Tomoya
2018-04-01
The September 2017 Chiapas (Mexico) normal-faulting intraplate earthquake (Mw 8.1) occurred within the Tehuantepec seismic gap offshore Mexico. We constrained the finite-fault slip model of this great earthquake using teleseismic and tsunami observations. First, teleseismic body-wave inversions were conducted for both the steep (NP-1) and low-angle (NP-2) nodal planes for rupture velocities (Vr) of 1.5-4.0 km/s. The teleseismic inversion pointed to NP-1 as the actual fault plane but was not conclusive about the best Vr. Tsunami simulations also confirmed that NP-1 is favored over NP-2 and identified Vr = 2.5 km/s as the best source model. Our model has maximum and average slips of 13.1 and 3.7 m, respectively, over a 130 km × 80 km fault plane. Coulomb stress transfer analysis revealed that the probability of occurrence of a future large thrust interplate earthquake offshore of the Tehuantepec seismic gap increased following the 2017 Chiapas normal-faulting intraplate earthquake.
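The Coulomb stress transfer analysis rests on the standard failure-stress change formula, sketched below. The friction value and stress changes are illustrative assumptions, and sign conventions for the normal-stress term vary across studies (here unclamping is taken as positive):

```python
def coulomb_stress_change(delta_shear, delta_normal, friction=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d(shear stress, resolved in the slip direction)
           + mu' * d(normal stress, unclamping positive),
    where mu' is an effective friction coefficient (0.4 is a
    commonly assumed value)."""
    return delta_shear + friction * delta_normal

# e.g. 0.2 MPa shear-stress increase plus 0.1 MPa unclamping resolved
# on the plate interface (hypothetical numbers)
dcfs = coulomb_stress_change(0.2, 0.1)
```

A positive dCFS on the interface is what indicates the increased probability of a future thrust event in such an analysis.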
Combining multiple earthquake models in real time for earthquake early warning
Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.
2017-01-01
The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP).
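The spatial component, a weighted sum of two kernel-smoothed densities, can be sketched as follows. The fixed Gaussian bandwidth and the coordinates are invented for illustration; the actual model optimizes bandwidths and the (magnitude-dependent) weighting through retrospective forecast experiments:

```python
import numpy as np

def gaussian_kde_2d(points, grid, bandwidth):
    """Fixed-bandwidth 2-D Gaussian kernel density evaluated on a set
    of grid points; `points` and `grid` are (n, 2) coordinate arrays."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    kde = np.exp(-0.5 * d2 / bandwidth ** 2).sum(1)
    return kde / kde.sum()      # normalize to a discrete probability map

def hybrid_density(eq_density, fault_density, w):
    """Weighted sum of the two spatial densities (w may depend on the
    magnitude range considered)."""
    return w * eq_density + (1.0 - w) * fault_density

grid = np.array([[x, y] for x in range(5) for y in range(5)], float)
eq = gaussian_kde_2d(np.array([[1.0, 1.0], [3.0, 2.0]]), grid, 1.0)   # epicenters
flt = gaussian_kde_2d(np.array([[4.0, 4.0]]), grid, 1.0)              # fault moment
hyb = hybrid_density(eq, flt, w=0.8)
```

Because both inputs are normalized probability maps, any convex weight w in [0, 1] again yields a normalized spatial density.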
We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate for long-term forecasting on timescales of years to decades for the European region.
Strong Ground Motion Prediction By Composite Source Model
NASA Astrophysics Data System (ADS)
Burjanek, J.; Irikura, K.; Zahradnik, J.
2003-12-01
A composite source model, incorporating subevents of different sizes, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^(-2). The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled either as a finite or a point source, and differences between these choices are shown. The final slip and duration of each subevent are related to its characteristic dimension using constant stress-drop scaling. The absolute value of the subevents' stress drop is a free parameter. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model. An estimate of the subevents' stress drop is based on fitting empirical attenuation relations for PGA and PGV, as they represent robust information on strong ground motion caused by earthquakes, including both path and source effects. We use the 2000 M6.6 Western Tottori, Japan, earthquake as a validation event, providing a comparison between predicted and observed waveforms.
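The R^(-2) cumulative size distribution can be sampled by inverse-transform sampling, as in this sketch (illustrative parameters; real composite-source implementations also place the subevents without overlap and may trim the final subevent so the areas sum exactly to the target):

```python
import numpy as np

def draw_subevent_radii(target_area, r_min, r_max, seed=0):
    """Draw subevent radii following N(>R) ~ R^-2, i.e. a pdf
    proportional to R^-3 on [r_min, r_max], until the summed circular
    subevent areas fill the target rupture area."""
    rng = np.random.default_rng(seed)
    radii, filled = [], 0.0
    while filled < target_area:
        u = rng.random()
        # inverse CDF of p(R) ~ R^-3 truncated to [r_min, r_max]
        r = (r_min ** -2 - u * (r_min ** -2 - r_max ** -2)) ** -0.5
        radii.append(r)
        filled += np.pi * r ** 2
    return np.array(radii)

radii = draw_subevent_radii(target_area=1000.0, r_min=0.5, r_max=10.0)
```

Each radius would then be assigned a slip and duration by constant stress-drop scaling, with the overall stress-drop level left as the free parameter described above.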
Possible Dual Earthquake-Landslide Source of the 13 November 2016 Kaikoura, New Zealand Tsunami
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Satake, Kenji
2017-10-01
A complicated earthquake (Mw 7.8), in terms of rupture mechanism, occurred on the NE coast of South Island, New Zealand, on 13 November 2016 (UTC), in a complex tectonic setting comprising a transitional strike-slip zone between two subduction zones. The earthquake generated a moderate tsunami with a zero-to-crest amplitude of 257 cm at the near-field tide gauge station of Kaikoura. Spectral analysis of the tsunami observations showed dual peaks at 3.6-5.7 and 5.7-56 min, which we attribute to the potential landslide and earthquake sources of the tsunami, respectively. Tsunami simulations showed that a source model with slip on an offshore plate-interface fault reproduces the near-field tsunami observation in terms of amplitude, but fails in terms of tsunami period. On the other hand, a source model without offshore slip fails to reproduce the first peak, but the later phases are reproduced well in terms of both amplitude and period. It can be inferred that an offshore source must be involved, but one smaller in size than the plate-interface slip, which most likely points to a confined submarine landslide source, consistent with the dual-peak tsunami spectrum. We estimated the dimension of the potential submarine landslide at 8-10 km.
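The idea of reading off dominant tsunami periods from a record's spectrum can be mimicked with a toy FFT example. The two-component record below is synthetic, with assumed periods of 5 and 40 min standing in for the landslide and earthquake bands; it is not the study's actual tide gauge analysis:

```python
import numpy as np

dt = 15.0                                   # sample interval, s
t = np.arange(0.0, 6 * 3600, dt)            # 6-hour synthetic record
eta = (0.4 * np.sin(2 * np.pi * t / 300.0)      # 5-min component
       + 1.0 * np.sin(2 * np.pi * t / 2400.0))  # 40-min component

freq = np.fft.rfftfreq(t.size, d=dt)
amp = np.abs(np.fft.rfft(eta))
# dominant period, skipping the zero-frequency bin
peak_period_s = 1.0 / freq[np.argmax(amp[1:]) + 1]
```

Here the 40-min component dominates the spectrum; a secondary peak near 5 min is what would flag a second, shorter-period source in a real record.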
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul
2017-12-01
This study develops tsunami evacuation plans for Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect the asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans for Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis, with maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, an assessment of temporary shelters considering the impact of ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from stochastic tsunami simulation for future tsunamigenic events.
NASA Astrophysics Data System (ADS)
Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Ohsumi, T.; Morikawa, N.; Kawai, S.; Maeda, T.; Matsuyama, H.; Toyama, N.; Kito, T.; Murata, Y.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.; Hakamata, T.
2017-12-01
For the forthcoming large earthquakes along the Sagami Trough, where the Philippine Sea Plate is subducting beneath the northeast Japan arc, the Earthquake Research Committee (ERC) of the Headquarters for Earthquake Research Promotion, Japanese government (2014a), assessed that M7- and M8-class earthquakes will occur there and defined the possible extent of the earthquake source areas. They assessed occurrence probabilities within the next 30 years (from Jan. 1, 2014) of 70% for the M7-class and 0%-5% for the M8-class earthquakes. First, we set 10 possible earthquake source areas (ESAs) for M8-class and 920 ESAs for M7-class earthquakes. Next, we constructed 125 characterized earthquake fault models (CEFMs) for M8-class and 938 CEFMs for M7-class earthquakes, based on the "tsunami recipe" of ERC (2017) (Kitoh et al., 2016, JpGU). All the CEFMs are allowed to have a large slip area to express fault slip heterogeneity. For all the CEFMs, we calculate tsunamis by solving the nonlinear long-wave equations with a finite-difference method, including runup calculation, over a nested grid system with a minimum grid size of 50 m. Finally, we redistributed the occurrence probability over all CEFMs (Abe et al., 2014, JpGU) and gathered the exceedance probabilities for variable tsunami heights, calculated from all the CEFMs, at every observation point along the Pacific coast to obtain the probabilistic tsunami hazard assessment (PTHA). We incorporated aleatory uncertainties inherent in the tsunami calculation and in the earthquake fault slip heterogeneity. We considered two kinds of probabilistic hazard models: a "present-time hazard model," which assumes that earthquake occurrence follows a renewal process based on the BPT distribution when the latest faulting time is known, and a "long-time averaged hazard model," which assumes that earthquake occurrence follows a stationary Poisson process.
Focusing, for example, on the probability that the tsunami height will exceed 3 m at coastal points in the next 30 years (from Jan. 1, 2014), the present-time hazard model showed relatively high probabilities, over 0.1%, along the Boso Peninsula. The long-time averaged hazard model showed its highest probabilities, over 3%, along the Boso Peninsula, and relatively high probabilities, over 0.1%, along wide coastal areas on the Pacific side from the Kii Peninsula to Fukushima Prefecture.
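The contrast between the two hazard models can be sketched numerically: the conditional 30-year probability under a BPT (inverse-Gaussian) renewal process versus the stationary Poisson probability for the same mean rate. The mean recurrence interval, aperiodicity, and elapsed time below are hypothetical values, not those used in the assessment:

```python
import math
from scipy.stats import invgauss

def bpt_conditional_prob(mean_ri, alpha, elapsed, window=30.0):
    """P(event in next `window` yr | quiet for `elapsed` yr) under a
    BPT renewal model. scipy's invgauss(mu, scale=s) has mean mu*s and
    coefficient of variation sqrt(mu), so mu = alpha**2 and
    scale = mean_ri / alpha**2 give mean `mean_ri`, aperiodicity `alpha`."""
    dist = invgauss(alpha ** 2, scale=mean_ri / alpha ** 2)
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

def poisson_prob(mean_ri, window=30.0):
    """Time-independent probability for the same mean recurrence rate."""
    return 1.0 - math.exp(-window / mean_ri)
```

Late in an earthquake cycle the renewal probability exceeds the Poisson value, which is why the two models give different coastal hazard maps.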
Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering
Geist, Eric L.
2012-01-01
Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R2 ~ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
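The power-law exponent of a Pareto-type size distribution is typically fit by maximum likelihood over amplitudes above a completeness threshold. A minimal sketch of the estimator, using the pure-Pareto form rather than the modified (truncated) Pareto distribution described in the chapter:

```python
import math

def pareto_mle_exponent(amplitudes, threshold):
    """Maximum-likelihood power-law exponent for tsunami amplitudes
    exceeding `threshold` (the Hill estimator for a pure Pareto tail)."""
    tail = [a for a in amplitudes if a >= threshold]
    return len(tail) / sum(math.log(a / threshold) for a in tail)
```

Applied station by station, estimators of this kind yield the tide-gauge exponents whose variation the chapter contrasts with the near-constant exponent of inter-plate thrust earthquakes.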
NASA Astrophysics Data System (ADS)
Chao, K.; Gonzalez-Huizar, H.; Tang, V.; Klaeser, R. D.; Mattia, M.; Van der Lee, S.
2017-12-01
Triggered tremor is one type of slow earthquake that is activated by the teleseismic surface waves of large-magnitude earthquakes. Observations of triggered tremor help to evaluate the background ambient tremor rate and slow slip events in the surrounding region. The Mw 8.1 Tehuantepec earthquake in Mexico is an ideal candidate for a global search for triggered tremor. Here, we examine triggered tremor globally following the Mw 8.1 event and model the tremor-triggering potential. We examined 7,000 seismic traces and found widespread triggered tremor along the western coast of North America occurring during the surface waves of the Mw 8.1 event. Triggered tremor appeared on the San Jacinto Fault, the San Andreas Fault around Parkfield, and the Calaveras Fault in California; on Vancouver Island in the Cascadia subduction zone; on the Queen Charlotte Margin and the Eastern Denali Fault in Canada; and in Alaska and the Aleutian Arc. In addition, we observe a newly found triggered tremor source at Mt. Etna in Sicily, Italy. However, we do not find clear evidence of triggered tremor in the tremor-active regions of Japan, Taiwan, or New Zealand. We model the tremor-triggering potential at the triggering earthquake source and the triggered tremor sources. Our modeling results suggest that the source parameters of the Mw 8.1 triggering event and the stress state in the triggered fault zones are two critical factors controlling the tremor-triggering threshold.
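Triggering potential is commonly gauged by the peak dynamic stress a passing surface wave imposes, approximated from peak ground velocity as sigma ~ G*v/Vs. A small sketch with illustrative elastic constants that are common textbook values, not parameters from this study:

```python
def dynamic_stress_kpa(pgv_cm_s, vs_km_s=3.5, rigidity_gpa=30.0):
    """Peak dynamic stress (kPa) carried by a passing surface wave,
    using the plane-wave approximation sigma ~ G * v / Vs."""
    pgv = pgv_cm_s * 1e-2            # cm/s -> m/s
    vs = vs_km_s * 1e3               # km/s -> m/s
    g = rigidity_gpa * 1e9           # GPa -> Pa
    return g * pgv / vs * 1e-3       # Pa -> kPa
```

With these constants, a PGV of order 0.1-1 cm/s maps to dynamic stresses of order 10-100 kPa, the range usually discussed for tremor triggering thresholds.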
NASA Astrophysics Data System (ADS)
Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe
2017-01-01
A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip.
The synthetic ground motions obtained using the EGF method agree well with the observed motions in terms of acceleration, velocity, and displacement within the frequency range of 0.3-10 Hz. These findings indicate that the 2016 Kumamoto earthquake is a standard event that follows the scaling relationship of crustal earthquakes in Japan.
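The EGF method named above builds the large event's motion by superposing scaled records of a smaller event. A highly simplified sketch of the two core ingredients: the Irikura-style scaling ratio N and a one-row time superposition (the real method also sums across the fault width, uses rupture- and rise-time delays, and applies a slip-rate correction function):

```python
import numpy as np

def egf_scaling_n(m0_large, m0_small):
    """Scaling ratio N = (M0L/M0S)^(1/3): the large event is represented
    by N x N subfaults, each radiating N copies of the small event in time."""
    return round((m0_large / m0_small) ** (1.0 / 3.0))

def egf_superpose_1d(small_rec, n, dt, subfault_delay):
    """Crude sketch: sum n delayed copies of the small-event record,
    as for one rupture row propagating at a constant delay per subfault."""
    shift = int(round(subfault_delay / dt))
    out = np.zeros(len(small_rec) + shift * (n - 1))
    for i in range(n):
        out[i * shift:i * shift + len(small_rec)] += small_rec
    return out
```

For a moment ratio of 1000, N = 10, so the target event is synthesized from 10 x 10 subfaults with 10-fold time superposition each.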
Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas
Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles
2016-01-01
Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
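A smoothed-seismicity forecast and its Poisson likelihood test can be sketched in a few lines: isotropic Gaussian kernels on a flat grid (the study optimized this smoothing distance over roughly 10-20 km), scored by the joint Poisson log-likelihood of observed cell counts:

```python
import math
import numpy as np

def smoothed_rate_grid(quake_xy, grid_xy, sigma_km):
    """Forecast rate per cell: a sum of 2-D Gaussian kernels centred on
    past epicentres, normalized to preserve the total event count."""
    rates = np.zeros(len(grid_xy))
    for qx, qy in quake_xy:
        d2 = (grid_xy[:, 0] - qx) ** 2 + (grid_xy[:, 1] - qy) ** 2
        rates += np.exp(-d2 / (2.0 * sigma_km ** 2))
    return rates / rates.sum() * len(quake_xy)

def poisson_log_likelihood(observed, forecast):
    """Joint Poisson log-likelihood of observed cell counts given the
    forecast rates (the quantity compared in likelihood testing)."""
    f = np.maximum(forecast, 1e-12)
    return float(sum(o * math.log(fi) - fi - math.lgamma(o + 1)
                     for o, fi in zip(observed, f)))
```

Re-scoring forecasts built from progressively older catalogs against fixed target earthquakes reproduces the likelihood decay with elapsed time that the study quantifies.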
Spatial modeling for estimation of earthquakes economic loss in West Java
NASA Astrophysics Data System (ADS)
Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma
2017-07-01
Indonesia has a high vulnerability to earthquakes, and its low adaptive capacity can turn an earthquake into a disaster that must be taken seriously. Risk management should therefore be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and the Megathrust subduction zone. This research estimates the earthquake economic loss from several sources in West Java. The economic loss is calculated using the HAZUS method, whose required components are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to access by presenting distribution maps rather than tabular data alone. As a result, West Java could suffer economic losses of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703 IDR, estimated from six earthquake sources at their maximum possible magnitudes. However, this estimate represents a worst-case earthquake occurrence and is probably an overestimate.
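At its core, a HAZUS-style direct-loss estimate multiplies each building's replacement cost by a mean damage ratio for the local shaking intensity. A toy sketch in which the damage-ratio table and building records are entirely hypothetical (real HAZUS derives damage ratios from capacity and fragility curves per building type):

```python
def expected_loss(buildings, damage_ratio):
    """Direct economic loss: replacement cost times the mean damage
    ratio for the shaking intensity at each building's site."""
    return sum(b["cost"] * damage_ratio[b["intensity"]] for b in buildings)
```

Summing this quantity over a gridded building inventory, with intensity drawn from the hazard layer, yields the kind of spatially distributed loss map described above.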
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro
2017-04-01
After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improving tsunami forecasts has been an urgent issue in Japan. The National Research Institute for Earth Science and Disaster Prevention is installing a cable network system for earthquake and tsunami observation (S-net) at the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operates cable network systems of seismometers and pressure sensors (DONET and DONET2). These are the densest observation networks deployed on top of the source areas of great underthrust earthquakes anywhere in the world. Real-time tsunami forecasting has traditionally depended on estimates of earthquake parameters such as the epicenter, depth, and magnitude. Recently, forecast methods have been developed that estimate the tsunami source from tsunami waveforms observed at the ocean-bottom pressure sensors. However, when many pressure sensors separated by 30 km sit on top of the source area, we do not need to estimate the tsunami source or earthquake source at all to compute the tsunami; instead, we can initialize a tsunami simulation directly from the dense observations. Observed tsunami height differences over a time interval at the ocean-bottom pressure sensors separated by 30 km are used to estimate the tsunami height distribution at a particular time, and in our new method the tsunami numerical simulation is initiated from that estimated height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated by Gusman et al. (2012) from observed tsunami waveforms and from coseismic deformation observed by GPS and ocean-bottom sensors is used in this study.
The ocean surface deformation is computed from the source model and used as the initial condition of the tsunami simulation. Treating this computed tsunami as a real tsunami observed at the ocean-bottom sensors, a new tsunami simulation is carried out using the above method. The stations in the assumed distribution (each separated by 15 arcmin, about 30 km) "observe" tsunami waveforms that were actually computed from the source model. Tsunami height distributions are estimated with the above method at 40, 80, and 120 s after the origin time of the earthquake. The near-field tsunami inundation forecast method (Gusman et al., 2014) was then used to estimate the tsunami inundation along the Sanriku coast. The results show that the observed tsunami inundation is well explained by the estimated inundation, and that the inundation estimate is available about 10 minutes after the origin time of the earthquake. The new method developed in this paper is thus very effective for real-time tsunami forecasting.
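The forward model behind both the standard and the sensor-initialized simulations is a long-wave solver. A minimal 1-D linear staggered-grid step, assuming constant depth and reflective walls (the operational codes solve the nonlinear 2-D equations on nested grids):

```python
import numpy as np

def shallow_water_step(eta, u, depth, dx, dt, g=9.81):
    """One explicit step of the linear 1-D long-wave equations on a
    staggered grid: u[i] sits between eta[i] and eta[i+1]; walls are
    reflective (zero flux) at both ends, so mass is conserved."""
    u = u - g * dt / dx * (eta[1:] - eta[:-1])       # momentum update
    eta = eta.copy()
    eta[1:-1] -= depth * dt / dx * (u[1:] - u[:-1])  # continuity update
    eta[0] -= depth * dt / dx * u[0]                 # left wall
    eta[-1] += depth * dt / dx * u[-1]               # right wall
    return eta, u
```

Initializing `eta` from the estimated height distribution, rather than from a fault model, is exactly the substitution the new method makes; the wave then propagates at sqrt(g*depth) either way.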
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro
2017-08-01
Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and an appropriate fault model was then determined from those parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well reproduced by the computed ones. Our method should therefore serve tsunami early warning by estimating fault models that reproduce tsunami heights near the coasts of El Salvador and Nicaragua for large earthquakes in the subduction zone.
A combined geodetic and seismic model for the Mw 8.3 2015 Illapel (Chile) earthquake
NASA Astrophysics Data System (ADS)
Simons, M.; Duputel, Z.; Jiang, J.; Liang, C.; Fielding, E. J.; Agram, P. S.; Owen, S. E.; Moore, A. W.; Kanamori, H.; Rivera, L. A.; Riel, B. V.; Ortega, F.
2015-12-01
The 2015 September 16 Mw 8.3 Illapel earthquake occurred on the subduction megathrust offshore of the Chilean coastline between the towns of Valparaiso and Coquimbo. This earthquake is the third event with Mw > 8 to impact coastal Chile in the last 6 years. It occurred north of both the 2010 Mw 8.8 Maule earthquake and the 1985 Mw 8.0 Valparaiso earthquake. While the location of the 2015 earthquake is close to the inferred location of a large earthquake in 1943, comparison of seismograms from the two earthquakes suggests the recent event is not clearly a repeat of the 1943 event. To gain a better understanding of the distribution of coseismic fault slip, we develop a finite fault model that is constrained by a combination of static GPS offsets, Sentinel 1a ascending and descending radar interferograms, tsunami waveform measurements made at selected DART buoys, high rate (1 sample/sec) GPS waveforms and strong motion seismic data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing thereby allowing us to maximize spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the assumed forward models. At the inherent resolution of the model, the posterior ensemble of purely static models (without using high rate GPS and strong motion data) is characterized by a distribution of slip that reaches as much as 10 m in localized regions, with significant slip apparently reaching the trench or at least very close to the trench. Based on our W-phase point-source estimate, the event duration is approximately 1.7 minutes. We also present a joint kinematic model and describe the relationship of the coseismic model to the spatial distribution of aftershocks and post-seismic slip.
Kirby, Stephen; Scholl, David; von Huene, Roland E.; Wells, Ray
2013-01-01
Tsunami modeling has shown that tsunami sources located along the Alaska Peninsula segment of the Aleutian-Alaska subduction zone have the greatest impacts on southern California shorelines by raising the highest tsunami waves for a given source seismic moment. The most probable sector for a Mw ~ 9 source within this subduction segment is between Kodiak Island and the Shumagin Islands in what we call the Semidi subduction sector; these bounds represent the southwestern limit of the 1964 Mw 9.2 Alaska earthquake rupture and the northeastern edge of the Shumagin sector that recent Global Positioning System (GPS) observations indicate is currently creeping. Geological and geophysical features in the Semidi sector that are thought to be relevant to the potential for large magnitude, long-rupture-runout interplate thrust earthquakes are remarkably similar to those in northeastern Japan, where the destructive Mw 9.1 tsunamigenic earthquake of 11 March 2011 occurred. In this report we propose and justify the selection of a tsunami source seaward of the Alaska Peninsula for use in the Tsunami Scenario that is part of the U.S. Geological Survey (USGS) Science Application for Risk Reduction (SAFRR) Project. This tsunami source should have the potential to raise damaging tsunami waves on the California coast, especially at the ports of Los Angeles and Long Beach. Accordingly, we have summarized and abstracted slip distribution from the source literature on the 2011 event, the best characterized for any subduction earthquake, and applied this synoptic slip distribution to the similar megathrust geometry of the Semidi sector. The resulting slip model has an average slip of 18.6 m and a moment magnitude of Mw = 9.1. The 2011 Tohoku earthquake was not anticipated, despite Japan having the best seismic and geodetic networks in the world and the best historical record in the world over the past 1,500 years. 
What was lacking was adequate paleogeologic data on prehistoric earthquakes and tsunamis, a data gap that also presently applies to the Alaska Peninsula and the Aleutian Islands. Quantitative appraisal of potential tsunami sources in Alaska requires such investigations.
A GIS-based time-dependent seismic source modeling of Northern Iran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2017-01-01
The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and areal seismic sources for Northern Iran. The linear, or fault, sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating the area sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports were studied to compile an earthquake/fault catalog that is as complete as possible; all events were transformed to a uniform magnitude scale, and duplicate events and dependent shocks were removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
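The time-independent frequency-magnitude relationship for an area source is usually a Gutenberg-Richter law, whose b-value has a closed-form maximum-likelihood estimate. A minimal sketch:

```python
import math

def aki_b_value(magnitudes, m_c):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965) for a
    catalog complete above m_c: b = log10(e) / (mean(M) - m_c)."""
    above = [m for m in magnitudes if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)
```

Fitting this per area source, after the catalog has been declustered and trimmed to its completeness magnitude as described above, gives the Poisson rate model for each zone.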
NASA Astrophysics Data System (ADS)
Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.
2017-12-01
Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean-bottom pressure gauges such as the NOAA DART system and the DONET and S-net systems in Japan. However, for tsunami early warning of near-field tsunamis, it is essential to determine appropriate source models by seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes, which generate disproportionately large tsunamis. In this paper, we develop a technique to determine appropriate source models from which tsunami inundation along the coast can be numerically computed. The technique is tested on four large earthquakes that occurred off Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). In this study, fault parameters were estimated from the W-phase inversion, and the fault length and width were then determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 × 10^10 N/m^2. The tsunami numerical simulation was carried out and compared with the observations. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one; for the 2004 El Astillero earthquake, it was overestimated. To solve this problem, we constructed a depth-dependent rigidity curve similar to that suggested by Bilek and Lay (1999). The curve, evaluated at the centroid depth estimated by the W-phase inversion, was used to calculate the slip amount of the fault model. Using these new slip amounts, the tsunami numerical simulations were carried out again. The observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were then well explained by the computed ones.
The tsunamis from the other three earthquakes were also reasonably well explained by the computed ones. Therefore, our technique using a depth-dependent rigidity curve works to estimate an appropriate fault model that reproduces tsunami heights near the coast in Central America. The technique may also work in other subduction zones, once a depth-dependent rigidity curve is found for each particular subduction zone.
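The slip adjustment works through the standard moment relation D = M0 / (mu * L * W): lowering the rigidity at shallow depth raises the slip, and hence the tsunami, for the same seismic moment. A sketch with an illustrative rigidity curve; the functional form and constants below are assumptions for demonstration, not the Bilek-and-Lay-style curve actually used:

```python
def slip_from_moment(m0, length_m, width_m, rigidity):
    """Average slip D = M0 / (mu * L * W)."""
    return m0 / (rigidity * length_m * width_m)

def rigidity_depth(depth_km):
    """Hypothetical depth-dependent rigidity (Pa): low near the trench,
    increasing linearly to a constant value below ~30 km. Illustrative
    only; the study's actual curve is not reproduced here."""
    return 1.0e10 + 3.0e10 * min(depth_km, 30.0) / 30.0
```

For a shallow tsunami earthquake, the reduced rigidity yields substantially more slip than the constant 3.5 × 10^10 N/m^2 assumption, which is why the recomputed 1992 Nicaragua tsunami grew to match the observations.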
NASA Astrophysics Data System (ADS)
Zha, X.; Dai, Z.; Lu, Z.
2015-12-01
The 2011 Hawthorne earthquake swarm occurred in the central Walker Lane zone, near the border between California and Nevada. The swarm included an Mw 4.4 event on April 13, an Mw 4.6 on April 17, and an Mw 3.9 on April 27. Due to the lack of near-field seismic instruments, it is difficult to obtain accurate source information from the seismic data for these moderate-magnitude events. ENVISAT InSAR observations captured the deformation caused mainly by three events during the 2011 Hawthorne earthquake swarm. The surface traces of the three seismogenic sources could be identified from the local topography and interferogram phase discontinuities, and the epicenters could be determined using the interferograms and the relocated earthquake distribution. An apparent earthquake migration is revealed by the InSAR observations and the earthquake distribution. Analysis and modeling of the InSAR data show that the three moderate-magnitude earthquakes were produced by slip on three previously unrecognized faults in the central Walker Lane. Two of the seismogenic sources are northwest-striking, right-lateral strike-slip faults with some thrust-slip component, and the other is a northeast-striking, thrust-slip fault with some strike-slip component. The former two faults are roughly parallel to each other and almost perpendicular to the latter. This spatial correlation between the three seismogenic faults, together with their nature, suggests that the central Walker Lane has been undergoing southeast-northwest horizontal compressive deformation, consistent with the regional crustal movement revealed by GPS measurements. The Coulomb failure stresses on the fault planes were calculated using the preferred slip model and the Coulomb 3.4 software package. For the Mw 4.6 earthquake, the Coulomb stress change caused by the Mw 4.4 event increased by ~0.1 bar; for the Mw 3.9 event, the Coulomb stress change caused by the Mw 4.6 earthquake increased by ~1.0 bar.
This indicates that a preceding earthquake may trigger the subsequent one. Because no anomalous volcanic activity was observed during the 2011 Hawthorne earthquake swarm, we can rule out volcanic activity as the cause of these events. However, groundwater changes and mining in the epicentral zone may have contributed to the 2011 Hawthorne earthquake swarm.
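The ~0.1 and ~1.0 bar values quoted above are Coulomb failure stress changes resolved onto the receiver fault, dCFS = d_tau + mu' * d_sigma_n. A one-line sketch; the effective friction coefficient here is a commonly assumed value, not one taken from the paper:

```python
def coulomb_stress_change(d_shear_bar, d_normal_bar, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault (bar):
    dCFS = d_tau + mu' * d_sigma_n, with unclamping counted positive.
    Positive dCFS moves the receiver fault toward failure."""
    return d_shear_bar + mu_eff * d_normal_bar
```

Software such as Coulomb 3.4 computes the d_tau and d_sigma_n terms from the source slip model and the receiver fault geometry before applying this relation.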
Numerical simulation analysis on Wenchuan seismic strong motion in Hanyuan region
NASA Astrophysics Data System (ADS)
Chen, X.; Gao, M.; Guo, J.; Li, Z.; Li, T.
2015-12-01
The Wenchuan Ms 8.0 earthquake of May 12, 2008, in Sichuan Province caused 69,227 deaths, 374,643 injuries, 17,923 missing persons, direct economic losses of 845.1 billion RMB, and the collapse of a large number of houses. Reproducing the characteristics of its strong ground motion and predicting its intensity distribution therefore play an important role in mitigating the disasters of similar giant earthquakes in the future. Taking the Yunnan-Sichuan region, Wenchuan town, Chengdu city, the Chengdu basin, and their vicinity as the research area, and on the basis of the available three-dimensional velocity structure model and new topography data from the ChinaArray of the Institute of Geophysics, China Earthquake Administration, two complex source rupture process models with global and local source parameters were established. We simulated seismic wave propagation for the Wenchuan Ms 8.0 earthquake throughout the three-dimensional region using the GMS discrete-grid finite-difference technique with Cerjan absorbing boundary conditions, and obtained the seismic intensity distribution by analyzing data from a 50 × 50 grid of simulated ground motion output stations. The simulated results indicate that: (1) the simulated Wenchuan earthquake ground motion (PGA) and the main characteristics of its response spectrum are very similar to those of the real Wenchuan earthquake records; (2) the ground motion (PGA) and response spectra in the plain are much greater than in the adjacent mountain area because of the low velocity of the shallow surface media and the basin effect of the Chengdu basin structure.
Simultaneously, (3) when the source rupture process inverted from far-field P-waves, GPS data, and InSAR information, together with the Longmenshan Front Fault, is taken into consideration in the GMS numerical simulation, significantly different waveforms and frequency content of the ground motion are obtained; the strong-motion waveform is distinctly asymmetric, which should be closer to reality. This indicates that the Longmenshan Front Fault may also have been involved in seismic activity during the long (several minutes) Wenchuan earthquake rupture process. (4) Simulated earthquake records in the Hanyuan region are indeed very strong, which suggests that the source mechanism is one reason for the Hanyuan intensity anomaly.
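The Cerjan absorbing boundary used in the finite-difference simulation simply multiplies the wavefield within a sponge strip by a Gaussian taper at each time step. A sketch of the 1-D weights, with strip width and decay constant chosen for illustration (Cerjan et al., 1985, recommend values of this order):

```python
import numpy as np

def cerjan_taper(n_total, n_abs=20, alpha=0.015):
    """Cerjan-style sponge weights: inside the absorbing strips the
    wavefield is multiplied each step by exp(-(alpha * distance_to_
    interior)**2), strongest damping at the outermost grid points."""
    w = np.ones(n_total)
    ramp = np.exp(-(alpha * np.arange(n_abs, 0, -1)) ** 2)
    w[:n_abs] = ramp
    w[-n_abs:] = ramp[::-1]
    return w
```

In 2-D or 3-D the same taper is applied along each coordinate near every absorbing face, suppressing artificial reflections from the model edges.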
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie
2017-04-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman earthquake, the 1994 Northridge earthquake, and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that the fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure, and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
NASA Astrophysics Data System (ADS)
Asano, K.; Iwata, T.
2014-12-01
After the 2011 Tohoku earthquake in Japan (Mw 9.0), many papers on the source model of this mega-subduction earthquake have been published. From our study modeling strong motion waveforms in the period range 0.1-10 s, four isolated strong motion generation areas (SMGAs) were identified in the area deeper than 25 km (Asano and Iwata, 2012). The locations of these SMGAs correspond to the asperities of M7-class events in the 1930s. However, many studies of kinematic rupture modeling using seismic, geodetic, and tsunami data revealed the existence of a large slip area extending from the trench to the hypocenter (e.g., Fujii et al., 2011; Koketsu et al., 2011; Shao et al., 2011; Suzuki et al., 2011). That is, the excitation of seismic waves differs spatially between the long- and short-period ranges, as already discussed by Lay et al. (2012) and related studies. The Tohoku earthquake thus raised a new issue concerning the relationship between strong motion generation and the fault rupture process, an important issue for advancing source modeling for future strong motion prediction. Our previous source model consists of four SMGAs, and observed ground motions in the period range 0.1-10 s are explained well by it. We tried to extend this source model to explain the observed ground motions over a wider period range, with a simple assumption referring to our previous study and the concept of the characterized source model (Irikura and Miyake, 2001, 2011). We obtained a characterized source model that has four SMGAs in the deep part, one large slip area in the shallow part, and a background area with low slip. The seismic moment of this source model is equivalent to Mw 9.0. The strong ground motions are simulated by the empirical Green's function method (Irikura, 1986).
Although the longest resolvable period is limited by the signal-to-noise ratio of the records of the EGF event (Mw ~6.0), this new source model successfully reproduces the observed waveforms and Fourier amplitude spectra in the period range 0.1-50 s. The location of the large slip area appears to overlap the source regions of the historical 1793 and 1897 events off the Sanriku area. We therefore believe the source model for strong motion prediction of an Mw 9 event can be constructed as a combination of hierarchical multiple asperities, or source patches, related to historical events in this region.
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
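As an illustration of the spread among such regressions, the following minimal sketch compares two published surface-rupture-length relations; the coefficients are from Wells and Coppersmith (1994), one of the historical regressions of this type, and the rupture length is an arbitrary example value, not taken from the WGUEP model.

```python
import math

def mag_from_srl(srl_km, a, b):
    """Regression of moment magnitude on surface-rupture length: M = a + b*log10(SRL)."""
    return a + b * math.log10(srl_km)

# Wells and Coppersmith (1994) coefficients: all slip types vs. normal faults.
srl = 35.0  # km, an arbitrary example segment length
m_all = mag_from_srl(srl, 5.08, 1.16)
m_normal = mag_from_srl(srl, 4.86, 1.32)
print(round(m_all, 2), round(m_normal, 2))
```

Differences in the underlying earthquake databases and in fault type produce different coefficient pairs, which is one source of the epistemic uncertainty in characteristic magnitude discussed above.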
NASA Astrophysics Data System (ADS)
Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano; Loreto, Maria Filomena; Facchin, Lorenzo
2014-05-01
The earthquake that struck Calabria, southern Italy, on 8 September 1905 was the second largest Italian earthquake by magnitude in the last century. It destroyed many villages along the coast of the Gulf of Sant'Eufemia, caused more than 500 fatalities, and also generated a tsunami with non-destructive effects. Historical reports tell us that the tsunami caused major damage in the villages of Briatico, Bivona, Pizzo, and Vibo Marina, located in the southern part of the Sant'Eufemia gulf, and minor damage in Tropea and in Scalea, the latter a village located about 100 km from the epicenter. Other reports include accounts of fishermen at sea during the tsunami. Further, the tsunami is visible on tide gauge records in Messina, Sicily, in Naples, and in Civitavecchia, a harbour located to the north of Rome (Platania, 1907). In spite of the attention devoted by researchers to this case, as for other tsunamigenic Italian earthquakes, the genetic structure of the earthquake is still not identified and the debate remains open. In this context, tsunami simulations can help identify the source model most consistent with observational data. This approach was already followed by Piatanesi and Tinti (2002), who carried out numerical simulations of tsunamis from a number of local sources. In the last decade, studies on this seismogenic area intensified, resulting in new estimates of the 1905 earthquake magnitude (7.1 according to the CPTI11 catalogue) and in the suggestion of new source models. Using an improved tsunami simulation model and more accurate bathymetry data, this work tests the source models investigated by Piatanesi and Tinti (2002), together with the new fault models proposed by Cucci and Tertulliani (2010) and by Loreto et al. (2013).
The tsunami simulations are performed with the code UBO-TSUFD, which solves the Navier-Stokes equations in the shallow-water approximation with a finite-difference technique; the initial conditions are calculated via Okada's formulas. The key result used to test the models against the data is the maximum tsunami height computed close to the shore at a minimum depth of 5 m, corrected using the initial coseismic deformation field.
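The numerical core of such a solver can be illustrated in one dimension. The sketch below is a hypothetical, minimal version of the finite-difference shallow-water scheme (not UBO-TSUFD itself), with a Gaussian sea-surface uplift standing in for the Okada coseismic initial condition; all grid and physical values are illustrative.

```python
import numpy as np

g, depth = 9.81, 2000.0   # gravity (m/s^2), uniform water depth (m)
dx, dt = 1000.0, 2.0      # grid step (m), time step (s); CFL: sqrt(g*depth)*dt/dx ~ 0.28 < 1
n = 200
x = np.arange(n) * dx
eta = np.exp(-((x - 100e3) / 20e3) ** 2)  # initial sea-surface uplift (stand-in for the Okada field)
u = np.zeros(n)                           # depth-averaged horizontal velocity

# Explicit forward-backward update of the linear shallow-water equations:
#   du/dt = -g * d(eta)/dx,   d(eta)/dt = -depth * du/dx
for _ in range(500):
    u[1:] -= g * dt / dx * (eta[1:] - eta[:-1])
    eta[:-1] -= depth * dt / dx * (u[1:] - u[:-1])
```

A production code such as the one described above additionally handles nonlinearity, variable bathymetry, two horizontal dimensions, and wetting/drying at the coast.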
NASA Astrophysics Data System (ADS)
Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.
2015-12-01
On the Gofar transform fault on the East Pacific Rise (EPR), Mw ~6.0 earthquakes occur every ~5 years and repeatedly rupture the same asperity (rupture patch), while the intervening fault segments (rupture barriers to the largest events) produce only small earthquakes. In 2008, an ocean bottom seismometer (OBS) deployment successfully captured the end of a seismic cycle, including an extensive foreshock sequence localized within a 10 km rupture barrier, the Mw 6.0 mainshock and its aftershocks in a ~10 km rupture patch, and an earthquake swarm located in a second rupture barrier. Here we investigate whether the inferred variations in frictional behavior along strike affect the rupture processes of 3.0 < M < 4.5 earthquakes by determining source parameters for 100 earthquakes recorded during the OBS deployment. Using waveforms with a 50 Hz sample rate from OBS accelerometers, we calculate stress drop using an omega-squared source model, where the weighted average corner frequency is derived from an empirical Green's function (EGF) method. We obtain seismic moment by fitting the omega-squared source model to the low-frequency amplitude of individual spectra and account for attenuation using Q obtained from a velocity model through the foreshock zone. To ensure well-constrained corner frequencies, we require that the Brune [1970] model provide a statistically better fit to each spectral ratio than a linear model and that the variance between the data and model be low. To further ensure that the fit to the corner frequency is not influenced by resonance of the OBSs, we require low variance close to the modeled corner frequency. Error bars on corner frequency were obtained through a grid search in which variance is within 10% of the best-fit value. Without imposing restrictive selection criteria, slight variations in corner frequency between rupture patches and rupture barriers are not discernible.
Using well-constrained source parameters, we find an average stress drop of 5.7 MPa in the aftershock zone compared to values of 2.4 and 2.9 MPa in the foreshock and swarm zones respectively. The higher stress drops in the rupture patch compared to the rupture barriers reflect systematic differences in along strike fault zone properties on Gofar transform fault.
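The stress-drop estimate described above follows a standard closed form for circular sources: a source radius r = k·β/f_c from the corner frequency, then Δσ = (7/16)·M0/r³. The sketch below uses illustrative values, not measurements from this study.

```python
import math

def stress_drop_mpa(m0_nm, fc_hz, beta_ms, k=0.37):
    """Stress drop from corner frequency: r = k*beta/fc (k ~ 0.37 for the Brune model,
    ~ 0.21 for Madariaga), then delta_sigma = (7/16) * M0 / r**3, returned in MPa."""
    r = k * beta_ms / fc_hz
    return 7.0 / 16.0 * m0_nm / r ** 3 / 1e6

# Illustrative values only (not from the study):
m0 = 1.3e15    # N*m, roughly Mw 4.0
fc = 4.0       # Hz, corner frequency from an EGF spectral ratio
beta = 3500.0  # m/s, shear-wave speed near the source
ds = stress_drop_mpa(m0, fc, beta)
print(round(ds, 1))  # MPa
```

Because the radius enters cubed, the choice of k and the corner-frequency uncertainty dominate the stress-drop error budget, which is why the selection criteria above are so restrictive.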
Elastic parabolic equation solutions for underwater acoustic problems using seismic sources.
Frank, Scott D; Odom, Robert I; Collis, Jon M
2013-03-01
Several problems of current interest involve elastic bottom range-dependent ocean environments with buried or earthquake-type sources, specifically oceanic T-wave propagation studies and interface wave related analyses. Additionally, observed deep shadow-zone arrivals are not predicted by ray theoretic methods, and attempts to model them with fluid-bottom parabolic equation solutions suggest that it may be necessary to account for elastic bottom interactions. In order to study energy conversion between elastic and acoustic waves, current elastic parabolic equation solutions must be modified to allow for seismic starting fields for underwater acoustic propagation environments. Two types of elastic self-starter are presented. An explosive-type source is implemented using a compressional self-starter and the resulting acoustic field is consistent with benchmark solutions. A shear wave self-starter is implemented and shown to generate transmission loss levels consistent with the explosive source. Source fields can be combined to generate starting fields for source types such as explosions, earthquakes, or pile driving. Examples demonstrate the use of source fields for shallow sources or deep ocean-bottom earthquake sources, where down slope conversion, a known T-wave generation mechanism, is modeled. Self-starters are interpreted in the context of the seismic moment tensor.
Sources of shaking and flooding during the Tohoku-Oki earthquake: a mixture of rupture styles
Wei, Shengji; Graves, Robert; Helmberger, Don; Avouac, Jean-Philippe; Jiang, Junle
2012-01-01
Modeling strong ground motions from great subduction zone earthquakes is one of the great challenges of computational seismology. To separate the rupture characteristics from complexities caused by 3D sub-surface geology requires an extraordinary data set such as provided by the recent Mw9.0 Tohoku-Oki earthquake. Here we combine deterministic inversion and dynamically guided forward simulation methods to model over one thousand high-rate GPS and strong motion observations from 0 to 0.25 Hz across the entire Honshu Island. Our results display distinct styles of rupture with a deeper generic interplate event (~Mw8.5) transitioning to a shallow tsunamigenic earthquake (~Mw9.0) at about 25 km depth in a process driven by a strong dynamic weakening mechanism, possibly thermal pressurization. This source model predicts many important features of the broad set of seismic, geodetic and seafloor observations providing a major advance in our understanding of such great natural hazards.
Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake
Hayes, Gavin P.
2011-01-01
On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.
NASA Astrophysics Data System (ADS)
Ragon, T.; Sladen, A.; Bletery, Q.; Simons, M.; Magnoni, F.; Avallone, A.; Cavalié, O.; Vergnolle, M.
2016-12-01
Despite the diversity of available data for the Mw 6.1 2009 L'Aquila, Italy, earthquake, published finite-fault slip models are surprisingly different. For instance, the amplitude of the maximum coseismic slip patch varies from 80 cm to 225 cm, and its depth oscillates between 5 and 15 km. Discrepancies between proposed source parameters are believed to result from three sources: observational uncertainties, epistemic uncertainties, and the inherent non-uniqueness of inverse problems. We explore the whole solution space of fault-slip models compatible with the data within the range of both observational and epistemic uncertainties by performing a fully Bayesian analysis. In this initial stage, we restrict our analysis to the static problem. In terms of observational uncertainty, we must take into account the difference in time span associated with the different data types: InSAR images provide excellent spatial coverage but usually correspond to a period of a few days to weeks after the mainshock and can thus be biased by significant afterslip. Continuous GPS stations do not have the same shortcoming but, in contrast, lack the desired spatial coverage near the fault. In the case of the L'Aquila earthquake, InSAR images include a minimum of 6 days of afterslip. Here, we explicitly account for these different time windows by jointly inverting for coseismic and post-seismic fault slip. Regarding epistemic or modeling uncertainties, we focus on the impact of uncertain fault geometry and elastic structure. Modeling errors, which result from inaccurate model predictions and are generally neglected, are estimated for both the earth model and the fault geometry as non-diagonal covariance matrices. The L'Aquila earthquake is particularly well suited to investigating these effects given the availability of a detailed aftershock catalog and 3D velocity models.
This work aims at improving our knowledge of the L'Aquila earthquake as well as at providing a more general perspective on which uncertainties are the most critical in finite-fault source studies.
NASA Astrophysics Data System (ADS)
Zhao, Fengfan; Meng, Lingyuan
2016-04-01
The April 20, 2013 Ms 7.0 earthquake in Lushan, Sichuan province, China, occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The maximum intensity reached VIII to IX at Baoxing and Lushan, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process using the source mechanism and empirical relationships, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was developed to simulate the near-fault strong ground motion with associated fault rupture properties at Baoxing and Lushan, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Moreover, the broadband synthetic ground motion predictions for Baoxing and Lushan produced larger peak values, shorter durations, and higher frequency content, indicating that the near-fault strong ground motion was influenced by a higher effective stress drop and by the asperity slip distribution on the fault plane. This work is financially supported by the Natural Science Foundation of China (Grant No. 41404045) and by Science for Earthquake Resilience of CEA (XH14055Y).
Automatic 3D Moment tensor inversions for southern California earthquakes
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Friberg, P.; Tromp, J.
2008-12-01
We present a new source mechanism (moment tensor and depth) catalog for about 150 recent southern California earthquakes with Mw ≥ 3.5. We carefully select initial solutions from the available earthquake catalogs as well as our own preliminary 3D moment tensor inversion results. We pick useful data windows by assessing the quality of fits between data and synthetics using the automatic windowing package FLEXWIN (Maggi et al., 2008). We compute the source Fréchet derivatives of the moment-tensor elements and depth for a recent 3D southern California velocity model, obtained from finite-frequency event kernels calculated by adjoint methods and a nonlinear conjugate gradient technique with subspace preconditioning (Tape et al., 2008). We then invert for the source mechanisms and event depths using the techniques introduced by Liu et al. (2005). We assess the quality of this new catalog, as well as of the existing ones, by computing 3D synthetics for the updated 3D southern California model. We also plan to apply these moment-tensor inversion methods to automatically determine source mechanisms for Mw ≥ 3.5 earthquakes in southern California.
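Because waveform data depend linearly on the six independent moment-tensor components, the inversion step reduces to a least-squares solve once the Fréchet derivatives are in hand (the depth search wrapped around it is nonlinear). A schematic sketch with purely synthetic data; the Green's-function matrix here is random, standing in for the 3D-model derivatives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear forward problem d = G @ m for the six independent
# moment-tensor components (Mxx, Myy, Mzz, Mxy, Mxz, Myz).
n_samples = 120
G = rng.normal(size=(n_samples, 6))                  # stand-in for Frechet derivatives
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])  # arbitrary test source
d = G @ m_true + 0.01 * rng.normal(size=n_samples)   # "data" with noise

# Linear least-squares recovery of the moment-tensor components.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

In practice each row of G is a windowed synthetic seismogram sample computed for a unit moment-tensor element in the 3D velocity model, and the solve is repeated over a grid of trial depths.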
1979-02-01
Systems, Science and Software report SSS-R-79-3933: Automated Magnitude Measures, Earthquake Source Modeling, VFM Discriminant Testing and Summary of Current Research, by T. C. Bache, S. M. Day, J. M. …
NASA Astrophysics Data System (ADS)
Melgar, Diego; Geng, Jianghui; Crowell, Brendan W.; Haase, Jennifer S.; Bock, Yehuda; Hammond, William C.; Allen, Richard M.
2015-07-01
Real-time high-rate geodetic data have been shown to be useful for rapid earthquake response systems during medium to large events. The 2014 Mw6.1 Napa, California earthquake is important because it provides an opportunity to study an event at the lower threshold of what can be detected with GPS. We show the results of GPS-only earthquake source products such as peak ground displacement magnitude scaling, centroid moment tensor (CMT) solution, and static slip inversion. We also highlight the retrospective real-time combination of GPS and strong motion data to produce seismogeodetic waveforms that have higher precision and longer period information than GPS-only or seismic-only measurements of ground motion. We show their utility for rapid kinematic slip inversion and conclude that it would have been possible, with current real-time infrastructure, to determine the basic features of the earthquake source. We supplement the analysis with strong motion data collected close to the source to obtain an improved postevent image of the source process. The model reveals unilateral fast propagation of slip to the north of the hypocenter with a delayed onset of shallow slip. The source model suggests that the multiple strands of observed surface rupture are controlled by the shallow soft sediments of Napa Valley and do not necessarily represent the intersection of the main faulting surface and the free surface. We conclude that the main dislocation plane is westward dipping and should intersect the surface to the east, either where the easternmost strand of surface rupture is observed or at the location where the West Napa fault has been mapped in the past.
The August 2011 Virginia and Colorado Earthquake Sequences: Does Stress Drop Depend on Strain Rate?
NASA Astrophysics Data System (ADS)
Abercrombie, R. E.; Viegas, G.
2011-12-01
Our preliminary analysis of the August 2011 Virginia earthquake sequence finds the earthquakes to have high stress drops, similar to those of recent earthquakes in the northeastern USA, while those of the August 2011 Trinidad, Colorado, earthquakes are moderate, in between those typical of interplate (California) and east coast events. These earthquakes provide an unprecedented opportunity to study such source differences in detail, and hence improve our estimates of seismic hazard. Previously, the lack of well-recorded earthquakes in the eastern USA severely limited our resolution of the source processes and hence of the expected ground accelerations. Our preliminary findings are consistent with the idea that earthquake faults strengthen during longer recurrence intervals, and that intraplate faults fail at higher stress (and produce higher ground accelerations) than their interplate counterparts. We use the empirical Green's function (EGF) method to calculate source parameters for the Virginia mainshock and three larger aftershocks, and for the Trinidad mainshock and two larger foreshocks, using IRIS-available stations. We select time windows around the direct P and S waves at the closest stations and calculate spectral ratios and source time functions using the multi-taper spectral approach (e.g., Viegas et al., JGR 2010). Our preliminary results show that the Virginia sequence has high stress drops (~100-200 MPa, using the Madariaga (1976) model) and the Colorado sequence has moderate stress drops (~20 MPa). These values are consistent with previous work in the regions, for example on the 2002 Au Sable Forks earthquake and the 2010 Germantown (MD) earthquake. We also calculate the radiated seismic energy and find the energy/moment ratio to be high for the Virginia earthquakes and moderate for the Colorado sequence. We observe no evidence of a breakdown in constant stress drop scaling in this limited number of earthquakes. We are extending our analysis to a larger number of earthquakes and stations.
We calculate uncertainties in all our measurements, and also consider carefully the effects of variation in available bandwidth in order to improve our constraints on the source parameters.
Earthquake-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory
NASA Astrophysics Data System (ADS)
Xu, Changyi; Sun, Wenke
2014-12-01
In this paper, we propose an approach to computing the coseismic change in the Earth's volume based on a spherical-Earth elastic dislocation theory. We present a general expression of the Earth's volume change for three typical dislocations: shear, tensile, and explosion sources. We conduct case studies for the 2004 Sumatra earthquake (Mw 9.3), the 2010 Chile earthquake (Mw 8.8), the 2011 Tohoku-Oki earthquake (Mw 9.0), and the 2013 Okhotsk Sea earthquake (Mw 8.3). The results show that mega-thrust earthquakes make the Earth expand, whereas earthquakes on normal faults make it contract. We compare the volume changes computed from finite fault models and from a point source for the 2011 Tohoku-Oki earthquake (Mw 9.0). The large difference between the results indicates that coseismic changes in the Earth's volume (or mean radius) depend strongly on the earthquake's focal mechanism, especially the depth and dip angle. We then estimate the cumulative volume change from historical earthquakes (Mw ≥ 7.0) since 1960 and obtain an expansion rate of the Earth's mean radius of about 0.011 mm yr-1.
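The conversion between a cumulative volume change and a mean-radius change follows from the first-order relation ΔV ≈ 4πR²ΔR. A small sketch back-computing the volume-change rate implied by the quoted 0.011 mm yr-1; this is illustrative arithmetic only, not the paper's dislocation-theory calculation.

```python
import math

R = 6.371e6  # Earth's mean radius (m)

def radius_change_mm(dV_m3):
    """Mean-radius change implied by a volume change: dR = dV / (4*pi*R^2), in mm."""
    return dV_m3 / (4.0 * math.pi * R ** 2) * 1e3

# Volume-change rate implied by 0.011 mm/yr of mean-radius expansion:
dV = 0.011e-3 * 4.0 * math.pi * R ** 2  # m^3 per year
assert abs(radius_change_mm(dV) - 0.011) < 1e-12
print(round(dV / 1e9, 1))  # ~5.6 billion m^3/yr
```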
Italian Case Studies Modelling Complex Earthquake Sources In PSHA
NASA Astrophysics Data System (ADS)
Gee, Robin; Peruzza, Laura; Pagani, Marco
2017-04-01
This study presents two examples of modelling complex seismic sources in Italy, carried out in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study concerns an area centred on Collalto Stoccaggio, a natural gas storage facility in Northern Italy located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir within an actively growing anticline, likely driven by the underlying blind thrust, the Montello Fault. This fault is well delineated by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where earthquake rates are assumed to be time-independent. The source model consists of faults and distributed seismicity, the latter accounting for earthquakes that cannot be associated with specific structures. All potentially active faults within 50 km of the site are considered and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical, and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study in which we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of October 26th and 30th (M6.1 and M6.6 respectively, no deaths).
The aftershock hazard is modelled using a fault source with complex geometry, based on literature data and field evidence associated with the August mainshock. Earthquake activity rates during the first weeks after the deadly earthquake were used to calibrate an Omori-Utsu decay curve, and the magnitude distribution of aftershocks is assumed to follow a Gutenberg-Richter distribution. We apply both uniform and non-uniform spatial distributions of seismicity across the fault source, modulating the rates as a decreasing function of distance from the mainshock. The hazard results are computed for short exposure periods (1 month, before the occurrence of the October earthquakes) and compared to the background hazard given by law (MPS04) and to observations at some reference sites. We also show disaggregation results computed for the city of Amatrice. Finally, we attempt to update the results in light of the new "main" events that occurred afterwards in the region. All source modeling and hazard calculations are performed using the OpenQuake engine. We discuss the novelties of these works and the benefits and limitations of both analyses, particularly in such different contexts of seismic hazard.
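The aftershock rate model described above combines an Omori-Utsu temporal decay with a Gutenberg-Richter magnitude distribution. A minimal sketch with hypothetical parameters, not the values calibrated for the Amatrice sequence:

```python
import math

def omori_rate(t_days, K=200.0, c=0.05, p=1.1):
    """Omori-Utsu aftershock rate: n(t) = K / (c + t)**p (events per day)."""
    return K / (c + t_days) ** p

def gr_count(m, a=4.0, b=1.0):
    """Gutenberg-Richter: expected number of events with magnitude >= m."""
    return 10.0 ** (a - b * m)

# Expected number of M >= 3 aftershocks in the first 30 days: combine the
# temporal decay with the fraction of events above magnitude 3.
total_events = sum(omori_rate(t + 0.5) for t in range(30))  # crude daily integration
frac_m3 = gr_count(3.0) / gr_count(0.0)
n_m3 = total_events * frac_m3
```

In an APSHA calculation, rates of this form (further modulated in space) replace the time-independent rates of the background model for the chosen short exposure period.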
NASA Astrophysics Data System (ADS)
Meng, L.; Shi, B.
2011-12-01
The February 21, 2011 Mw 6.1 New Zealand earthquake occurred in the South Island, with its epicenter at longitude 172.70°E and latitude 43.58°S and a depth of 5 km. The earthquake occurred on a previously unknown blind fault involving oblique-thrust faulting, striking roughly east-west about 9 km south of Christchurch, the third largest city of New Zealand (United States Geological Survey, USGS, 2011). The earthquake killed at least 163 people and caused extensive structural damage in Christchurch. The peak ground acceleration (PGA) observed at station Heathcote Valley Primary School (HVSC), 1 km from the epicenter, reached almost 2.0 g. This observation suggests that the buried earthquake source generated much higher near-fault ground motion than expected. In this study, we analyzed the earthquake source spectral parameters from the strong motion observations and estimated the near-fault ground motion based on Brune's circular fault model. The results indicate that the large ground motion may be caused by a high dynamic stress drop, Δσd (the effective stress drop in Brune's terminology), in the major source rupture region. In addition, a dynamical composite source model (DCSM) was developed to simulate the near-fault strong ground motion with associated fault rupture properties from the kinematic point of view. For comparison, we also conducted broadband ground motion predictions for station HVSC; the synthetic time histories for this station agree well with the observations in waveform, peak values, and frequency content, which clearly indicates that the high dynamic stress drop during fault rupture may play an important role in the anomalous ground-motion amplification.
Preliminary simulations for station HVSC produce synthetic seismograms with realistic waveforms and durations relative to the observations, especially for the vertical component, and synthetic Fourier spectra reasonably similar to the recordings. The simulated PGA values of the vertical and S26W components are consistent with the recordings, while for the S64E component the simulated PGA is smaller than observed. The Fourier spectra of the synthetic and observed acceleration time histories are similar for all three components, except that above 10 Hz the spectrum derived from the synthetic vertical component is smaller than that from the observation. Both the theoretical study and the numerical simulation indicate that, for the 2011 Mw 6.1 New Zealand earthquake, the high dynamic stress drop during the source rupture process could play an important role in the anomalous ground-motion amplification, in addition to other site-related seismic effects. Composite source modeling based on the simple Brune pulse model can provide good insight into the source-related rupture processes of a moderate-sized earthquake.
Regional Wave Propagation in Southeastern United States
NASA Astrophysics Data System (ADS)
Jemberie, A. L.; Langston, C. A.
2003-12-01
Broadband seismograms from the April 29, 2003 M4.6 Fort Payne, Alabama earthquake are analyzed to infer mechanisms of crustal wave propagation, crust and upper mantle velocity structure in the southeastern United States, and source parameters of the event. In particular, we are interested in producing deterministic models of the distance attenuation of earthquake ground motions through computation of synthetic seismograms. The method first requires constraining the source parameters of the earthquake and then modeling the amplitudes and arrival times of broadband phases within the waveforms to infer appropriate layered earth models. A first look at seismograms recorded by stations outside the Mississippi Embayment (ME) shows clear body phases such as P, sP, Pnl, Sn, and Lg. The ME signals are qualitatively different because they have longer durations and large surface waves. A straightforward interpretation of P-wave arrival times yields a typical upper mantle velocity of 8.18 km/s. However, there is evidence of significantly higher P phase velocities at epicentral distances between 400 and 600 km, which may be caused by a high-velocity upper mantle anomaly; triplication of P waves is seen in these seismograms. The arrival time differences between regional P and the depth phase sP at different stations are used to constrain the depth of the earthquake. The source depth lies between 9.5 and 13 km, somewhat shallower than the network location, which was constrained to 15 km depth. The Fort Payne earthquake is the largest earthquake to have occurred within the Eastern Tennessee Seismic Zone.
NASA Astrophysics Data System (ADS)
Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.
2015-12-01
Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of the earthquake cycle and of dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined to a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and by decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along dip, including a fault segment of intermediate rheology between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of concentrated stress loading rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed with deeper, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude at such close distance to the rupture, as attested by strong motion recordings and macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated with aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of the HF radiation can be attributed to rupture over the heterogeneous initial stresses left by background seismic activity.
Earthquake cycle and dynamic rupture models containing deep asperities reproduce the slower spectral decay found in teleseismic spectra of the Gorkha earthquake and in subduction events in the deeper edge of the seismogenic zone.
NASA Astrophysics Data System (ADS)
Orpin, Alan R.; Rickard, Graham J.; Gerring, Peter K.; Lamarche, Geoffroy
2016-05-01
Devastating tsunamis over the last decade have significantly heightened awareness of the potential consequences and vulnerability of low-lying Pacific islands and coastal regions. Our appraisal of the potential tsunami hazard for the atolls of the Tokelau Islands is based on a tsunami source-propagation-inundation model using the Gerris Flow Solver, adapted from the companion study by Lamarche et al. (2015) for the islands of Wallis and Futuna. We assess whether there is potential for tsunami flooding on any of the village islets from a selection of 14 earthquake-source experiments. These earthquake sources are primarily based on the largest Pacific earthquakes of Mw ≥ 8.1 since 1950 and on other large credible sources of tsunami that may impact Tokelau. Earthquake-source location and moment magnitude are related to the tsunami-wave amplitudes and tsunami flood depths simulated for each of the three atolls of Tokelau. This approach yields instructive results for a community advisory but is not intended to be fully deterministic. Rather, the underlying aim is to identify credible sources that present the greatest potential to trigger an emergency response. Results from our modelling show that wave fields are channelled by the bathymetry of the Pacific basin in such a way that the swathes of the highest waves sweep immediately northeast of the Tokelau Islands. Our limited simulations suggest that trans-Pacific tsunamis from distant earthquake sources to the north of Tokelau pose the most significant inundation threat. In particular, our assumed worst-case scenario for the Kuril Trench generated maximum modelled wave amplitudes in excess of 1 m, which may last a few hours and include several wave trains. Other sources can impact specific sectors of the atolls, particularly distant earthquakes from Chile and Peru, and regional earthquake sources to the south. Flooding is dependent on the wave orientation and direct alignment to the incoming tsunami.
Our "worst-case" tsunami simulations of the Tokelau Islands suggest that dry areas remain around the villages, which are typically built on a high islet. Consistent with the oral history of little or no perceived tsunami threat, simulations from the recent Tohoku and Chile earthquake sources suggest only limited flooding around low-lying islets of the atoll. Where potential tsunami flooding is inferred from the modelling, recommended minimum evacuation heights above local sea level are compiled, with particular attention paid to variations in tsunami flood depth around the atolls, subdivided into directional quadrants around each atoll. However, complex wave behaviours around the atolls, islets, tidal channels and within the lagoons are also observed in our simulations. Wave amplitudes within the lagoons may exceed 50 cm, increasing any inundation and potential hazards on the inner shoreline of the atolls, which in turn may influence evacuation strategies. Our study shows that indicative simulation studies can be achieved even with only basic field information. In part, this is due to the spatially and vertically limited topography of the atoll, short reef flat and steep seaward bathymetry, and the simple depth profile of the lagoon bathymetry.
NASA Astrophysics Data System (ADS)
Lin, T. C.; Hu, F.; Chen, X.; Lee, S. J.; Hung, S. H.
2017-12-01
Kinematic source models are widely used for earthquake simulation because of their simplicity and ease of application. Dynamic source models, on the other hand, are more complex but important tools that can help us understand the physics of earthquake initiation, propagation, and healing. In this study, we focus on the southernmost Ryukyu Trench, which lies extremely close to northern Taiwan. Interseismic GPS data in northeast Taiwan show a pattern of strain accumulation which suggests that the maximum magnitude of a potential future earthquake in this area is probably about 8.7. We develop dynamic rupture models for hazard estimation of the potential megathrust event, based on kinematic rupture scenarios inverted from the interseismic GPS data. In addition, several kinematic source rupture scenarios with different characterized slip patterns are considered to better constrain the dynamic rupture process. The initial stresses and friction properties are tested by trial and error, together with the plate coupling and tectonic features. An analysis of the dynamic stress field associated with the slip prescribed in the kinematic models can indicate possible inconsistencies with the physics of faulting. Furthermore, the dynamic and kinematic rupture models are used to simulate ground shaking based on a 3-D spectral-element method. We analyze ShakeMap and ShakeMovie from the simulation results to evaluate how the different source models influence shaking across the island. A dispersive tsunami-propagation simulation is also carried out to evaluate the maximum tsunami wave height along the coastal areas of Taiwan due to the coseismic seafloor deformation of the different source models. The results of this numerical simulation study can provide physics-based information on megathrust earthquake scenarios for emergency response agencies to take appropriate action before the really big one happens.
Viscoelastic-coupling model for the earthquake cycle driven from below
Savage, J.C.
2000-01-01
In a linear system the earthquake cycle can be represented as the sum of a solution which reproduces the earthquake cycle itself (viscoelastic-coupling model) and a solution that provides the driving force. We consider two cases, one in which the earthquake cycle is driven by stresses transmitted along the schizosphere and a second in which the cycle is driven from below by stresses transmitted along the upper mantle (i.e., the schizosphere and upper mantle, respectively, act as stress guides in the lithosphere). In both cases the driving stress is attributed to steady motion of the stress guide, and the upper crust is assumed to be elastic. The surface deformation that accumulates during the interseismic interval depends solely upon the earthquake-cycle solution (viscoelastic-coupling model), not upon the driving-source solution. Thus geodetic observations of interseismic deformation are insensitive to the source of the driving forces in a linear system. In particular, the suggestion of Bourne et al. [1998] that the deformation that accumulates across a transform fault system in the interseismic interval is a replica of the deformation that accumulates in the upper mantle during the same interval does not appear to be correct for linear systems.
NASA Astrophysics Data System (ADS)
Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.
1992-09-01
A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on peak velocity, a key ground-motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions recorded in Japan. In the derivation, statistical considerations guide the selection of the model itself and of the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
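A minimal sketch of such a semi-empirical regression, assuming a hypothetical functional form log10(PGV) = a·M + b·log10(r) + c·r + site-class dummies, with fully synthetic data (the paper's actual coefficients and records are not reproduced here):

```python
import numpy as np

# Hypothetical functional form, loosely following the abstract's description:
#   log10(PGV) = a*M + b*log10(r) + c*r + sum_k d_k * S_k
# where S_k are 0/1 dummy variables for local site classes.
# Coefficients and data below are synthetic, for illustration only.
rng = np.random.default_rng(0)
n = 200
M = rng.uniform(5.0, 8.0, n)            # magnitude
r = rng.uniform(10.0, 300.0, n)         # hypocentral distance, km
site = rng.integers(0, 3, n)            # three hypothetical site classes

a_true, b_true, c_true = 0.5, -1.2, -0.002
d_true = np.array([0.0, 0.15, 0.30])    # site amplification terms (log10 units)
log_pgv = (a_true * M + b_true * np.log10(r) + c_true * r
           + d_true[site] + 0.05 * rng.standard_normal(n))

# Design matrix: M, log10(r), r, and dummy columns for site classes 1 and 2
# (class 0 acts as the reference class with zero amplification).
X = np.column_stack([M, np.log10(r), r,
                     (site == 1).astype(float),
                     (site == 2).astype(float)])
coef, *_ = np.linalg.lstsq(X, log_pgv, rcond=None)
print(coef)  # ≈ [0.5, -1.2, -0.002, 0.15, 0.30]
```

With dense enough data, ordinary least squares recovers the source, path, and site-dummy coefficients simultaneously, which is the essence of the semi-empirical fitting described above.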
Added-value joint source modelling of seismic and geodetic data
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank
2013-04-01
In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basic model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The implementation allows for fast forward calculations of full seismograms and surface deformation, and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with very few a priori assumptions about the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full variance-covariance matrices for the geodetic data to account for correlated data noise, and also weight the seismic data based on their signal-to-noise ratio.
The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The resulting source model features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic and joint source models have already been reported, mostly without any model parameter uncertainty estimates. We show here that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g. even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
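The covariance-based weighting of geodetic data described above can be sketched as follows, assuming an illustrative exponential covariance model for spatially correlated, InSAR-like noise (not the empirical noise model estimated in the study):

```python
import numpy as np

# Minimal sketch of covariance-based data weighting: misfit = r^T C^-1 r,
# with C a full variance-covariance matrix encoding spatially correlated
# noise. The exponential covariance model and all numbers are assumptions.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 50.0, 40)                 # e.g. pixel positions, km
sigma, corr_len = 5.0, 10.0                    # noise level (mm), correlation length (km)
C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

residual = rng.multivariate_normal(np.zeros_like(x), C)  # synthetic data residual
L = np.linalg.cholesky(C)
white = np.linalg.solve(L, residual)           # decorrelated ("whitened") residual
chi2 = white @ white                           # equals r^T C^-1 r
print(chi2 / len(x))                           # ~1 when the error model is consistent
```

Whitening with the Cholesky factor of C is the standard way to turn correlated-noise weighting into an ordinary sum of squares, which is what makes such misfits usable inside Monte Carlo or Bayesian samplers.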
Infrasound as a Depth Discriminant
2011-09-01
INFRASOUND AS A DEPTH DISCRIMINANT. Stephen J. Arrowsmith, Rod W. Whitaker, and Richard J. Stead, Los Alamos National Laboratory. ... infrasound from earthquakes, in conjunction with modeling, to better constrain our understanding of the generation of infrasound from earthquakes, in particular the effect of source depth. Here, we first outline a systematic search for infrasound from earthquakes across a range of magnitudes. ...
Earthquake Hoax in Ghana: Exploration of the Cry Wolf Hypothesis
Aikins, Moses; Binka, Fred
2012-01-01
This paper investigated the belief of news of an impending earthquake from any source, in the context of the Cry Wolf hypothesis, as well as the belief of news of any other imminent disaster from any source. We were also interested in the correlation between preparedness, risk perception and antecedents. This explorative study consisted of interviews, literature and Internet reviews. Simple random sampling was used, stratified by sex and residence type. The sample (N=400) consisted of 195 males and 205 females. Further stratification was based on the residential classification used by the municipalities. The study revealed that a person would believe news of an impending earthquake from any source (64.4%, model significance P=0.000), and that a person would believe news of any other impending disaster from any source (73.1%, P=0.003). There is an association between background, risk perception and preparedness. Emergency preparedness is weak, and earthquake awareness needs to be reinforced. There is a critical need for public education on earthquake preparedness. The authors recommend developing an emergency response program for earthquakes and standard operating procedures for national risk communication through all media, including instant bulk messaging. PMID:28299086
Surface Rupture Effects on Earthquake Moment-Area Scaling Relations
NASA Astrophysics Data System (ADS)
Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro
2017-09-01
Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective along-dip rupture elongation due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
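The three regimes can be illustrated with a toy piecewise moment-area relation. The corner areas and prefactor below are illustrative assumptions; the transition exponent of 2 follows the abstract, and the outer exponents follow the usual self-similar (3/2) and W-model (1) arguments:

```python
import numpy as np

# Toy piecewise moment-area relation, continuous at both corner areas.
# a1, a2 (km^2) and c are illustrative, not the Japanese GMP relation.
def moment_from_area(a_km2, a1=1e3, a2=1e4, c=1.3e15):
    """Seismic moment (N*m) from rupture area (km^2)."""
    a = np.asarray(a_km2, dtype=float)
    return np.where(a < a1, c * a ** 1.5,               # small: M0 ~ A^(3/2)
           np.where(a < a2, c * a1 ** -0.5 * a ** 2.0,  # transition: M0 ~ A^2
                    c * a1 ** -0.5 * a2 * a))           # very large: M0 ~ A

areas = np.logspace(1, 5, 9)   # 10 to 100,000 km^2
print(moment_from_area(areas))
```

Matching the prefactors at the corner areas keeps the relation continuous, which is the basic constraint any such piecewise scaling law must satisfy.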
Tsunami Generation Modelling for Early Warning Systems
NASA Astrophysics Data System (ADS)
Annunziato, A.; Matias, L.; Ulutas, E.; Baptista, M. A.; Carrilho, F.
2009-04-01
In the frame of a collaboration between the European Commission Joint Research Centre and the Institute of Meteorology in Portugal, a complete analytical tool to support Early Warning Systems is being developed. The tool will be part of the Portuguese National Early Warning System and will also be used in the frame of the UNESCO North Atlantic section of the Tsunami Early Warning System. The system, called Tsunami Analysis Tool (TAT), includes a worldwide scenario database that has been pre-calculated using the SWAN-JRC code (Annunziato, 2007). This code uses a simplified fault generation mechanism, and its hydraulic model is based on the SWAN code (Mader, 1988). In addition to the pre-defined scenarios, a system of computers is always ready to start a new calculation whenever a new earthquake is detected by the seismic networks (such as USGS or EMSC) and is judged capable of generating a tsunami. The calculation is performed using minimal parameters (the epicentre and magnitude of the earthquake): the programme calculates the rupture length and rupture width using the empirical relationships proposed by Ward (2002). The database calculations, as well as the newly generated calculations with the current conditions, are therefore available to TAT, where the real online analysis is performed. The system also allows analysis of sea level measurements available worldwide, in order to compare them and decide whether a tsunami is really occurring. Although TAT, connected with the scenario database and the online calculation system, is at the moment the only software that can support tsunami analysis on a global scale, we are convinced that the fault generation mechanism is too simplified to give a correct tsunami prediction. Short tsunami arrival times in particular call for earthquake source parameters that reflect the tectonic features of the faults, such as strike, dip, rake and slip, in order to minimize the real-time uncertainty of rupture parameters.
Indeed, the earthquake parameters available right after an earthquake are preliminary and could be inaccurate. Determining which earthquake source parameters affect the initial height and time series of tsunamis will show the sensitivity of the tsunami time series to seismic source details. Therefore a new fault generation model will be adopted, according to the seismotectonic properties of the different regions, and finally included in the calculation scheme. To do so, within the collaboration framework with the Portuguese authorities, a new model is being defined, starting from the seismic sources in the North Atlantic, the Caribbean and the Gulf of Cadiz. As earthquakes occurring in North Atlantic and Caribbean sources may affect mainland Portugal and the Azores and Madeira archipelagos, these sources will also be included in the analysis. First, we have begun examining the geometries of those sources that spawn tsunamis, to understand the effect of fault geometry and earthquake depth. References: Annunziato, A., 2007. The Tsunami Assessment Modelling System by the Joint Research Centre, Science of Tsunami Hazards, Vol. 26, pp. 70-92. Mader, C.L., 1988. Numerical Modelling of Water Waves, University of California Press, Berkeley, California. Ward, S.N., 2002. Tsunamis, Encyclopedia of Physical Science and Technology, Vol. 17, pp. 175-191, ed. Meyers, R.A., Academic Press.
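As a rough illustration of deriving rupture dimensions from magnitude alone (a generic constant-stress-drop scaling, not Ward's (2002) actual relations; stress drop and aspect ratio are assumptions):

```python
# Illustrative sketch: estimate rupture length L and width W from moment
# magnitude Mw, assuming a constant stress drop and a fixed length-to-width
# aspect ratio. Not the relations used by the TAT system.
def rupture_dimensions(mw, stress_drop_pa=3e6, aspect=2.0):
    m0 = 10 ** (1.5 * mw + 9.1)          # seismic moment, N*m (standard Mw definition)
    # For constant stress drop, M0 ~ stress_drop * L * W^2 (shape factor ~1)
    w = (m0 / (stress_drop_pa * aspect)) ** (1.0 / 3.0)   # width, m
    return aspect * w / 1e3, w / 1e3     # (L, W) in km

L, W = rupture_dimensions(8.0)
print(round(L), round(W))   # roughly 119 by 59 km for Mw 8
```

Even this crude scaling conveys why epicentre and magnitude alone suffice to seed a first tsunami calculation, and why strike, dip, rake and slip are still needed to refine it.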
NASA Astrophysics Data System (ADS)
Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.
2004-12-01
This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake, in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture, associated with a short rupture duration, that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake, and (2) the apparent rupture velocity decreased on this segment.
Engdahl, E.R.; Billington, S.; Kisslinger, C.
1989-01-01
The Andreanof Islands earthquake (Mw 8.0) is the largest event to have occurred in that section of the Aleutian arc since the March 9, 1957, Aleutian Islands earthquake (Mw 8.6). Teleseismically well-recorded earthquakes in the region of the 1986 earthquake are relocated with a plate model and with careful attention to focal depths. The data set is nearly complete for mb ≥ 4.7 between longitudes 172°W and 179°W for the period 1964 through April 1987, and provides a detailed description of the space-time history of moderate-size earthquakes in the region for that period. Additional insight is provided by source parameters that have been systematically determined for Mw ≥ 5 earthquakes that occurred in the region since 1977, and by a modeling study of the spatial distribution of moment release on the mainshock fault plane. -from Authors
NASA Astrophysics Data System (ADS)
Yun, S.; Koketsu, K.; Aoki, Y.
2014-12-01
The September 4, 2010, Canterbury earthquake, with a moment magnitude (Mw) of 7.1, was a crustal earthquake in the South Island, New Zealand. The February 22, 2011, Christchurch earthquake (Mw 6.3) was the largest aftershock of the 2010 Canterbury earthquake, located about 50 km east of the mainshock. Both earthquakes occurred on previously unrecognized faults. Field observations indicate that the rupture of the 2010 Canterbury earthquake reached the surface; the surface rupture, with a length of about 30 km, is located about 4 km south of the epicenter. Various data, including the aftershock distribution and strong-motion seismograms, also suggest a very complex rupture process. For these reasons it is useful to investigate the complex rupture process using multiple datasets with different sensitivities to the rupture process. While previously published source models are based on one or two datasets, here we infer the rupture process from three: InSAR, strong-motion, and teleseismic data. We first performed point-source inversions to derive the focal mechanism of the 2010 Canterbury earthquake. Based on the focal mechanism, the aftershock distribution, the surface fault traces and the SAR interferograms, we assigned several source faults. We then performed a joint inversion to determine the rupture process of the 2010 Canterbury earthquake most suitable for reproducing all the datasets. The obtained slip distribution is in good agreement with the surface fault traces. We also performed similar inversions to reveal the rupture process of the 2011 Christchurch earthquake. Our result indicates a steep dip and large up-dip slip. This reveals that the observed large vertical ground motion around the source region is due to the rupture process, rather than the local subsurface structure.
To investigate the effects of the 3-D velocity structure on characteristic strong motion seismograms of the two earthquakes, we plan to perform the inversion taking 3-D velocity structure of this region into account.
The Ust'-Kamchatsk "Tsunami Earthquake" of 13 April 1923: A Slow Event and a Probable Landslide
NASA Astrophysics Data System (ADS)
Salaree, A.; Okal, E.
2016-12-01
Among the "tsunami earthquakes" having generated a larger tsunami than expected from their seismic magnitudes, the large aftershock of the great Kamchatka earthquake of 1923 remains an intriguing puzzle, since waves reaching 11 m were reported by Troshin & Diagilev (1926) in the vicinity of the mouth of the Kamchatka River near the coastal settlement of Ust'-Kamchatsk. Our relocation attempts based on ISS-listed travel times would put the earthquake epicenter in Ozernoye Bay, north of the Kamchatka Peninsula, suggesting that it was triggered by stress transfer beyond the plate junction at the Kamchatka corner. Mantle magnitudes obtained from Golitsyn records at De Bilt suggest a long-period moment of 2-3 × 10^27 dyn·cm, with a strong increase of moment with period, suggestive of a slow source. However, tsunami simulations based on the resulting models of the earthquake source, both north and south of the Kamchatka Peninsula, fail to account for the reported run-up values. On the other hand, the model of an underwater landslide triggered by the earthquake can explain the general amplitude and distribution of the reported run-up. This model is supported by the presence of steep bathymetry offshore of Ust'-Kamchatsk, near the area of discharge of the Kamchatka River, and by the abundance of subaerial landslides along the nearby coasts of the Kamchatka Peninsula. While the scarcity of scientific data for this old earthquake, and of historical reports in a sparsely populated area, keeps this interpretation tentative, this study contributes to improving our knowledge of the challenging family of "tsunami earthquakes".
Database of potential sources for earthquakes larger than magnitude 6 in Northern California
1996-01-01
The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rates, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflicts in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.
NASA Astrophysics Data System (ADS)
Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.
2016-12-01
For the forthcoming Nankai earthquake of M8 to M9 class, the Earthquake Research Committee (ERC)/Headquarters for Earthquake Research Promotion, Japanese government (2013) showed 15 examples of earthquake source areas (ESAs) as possible combinations of 18 sub-regions (6 segments along trough and 3 segments normal to trough) and assessed the occurrence probability within the next 30 years (from Jan. 1, 2013) at 60% to 70%. Hirata et al. (2015, AGU) presented a Probabilistic Tsunami Hazard Assessment (PTHA) along the Nankai Trough for the case in which the diversity of the next event's ESA is modeled by only those 15 ESAs. In this study, we newly set 70 ESAs in addition to the previous 15, so that 85 ESAs in total are considered. By producing tens of fault models, with various slip distribution patterns, for each of the 85 ESAs, we obtain 2,500 fault models in addition to the previous 1,400, so that 3,900 fault models in total are used to model the diversity of the next Nankai earthquake rupture (Toyama et al., 2015, JpGU). For the PTHA, the occurrence probability of the next Nankai earthquake is distributed over the possible 3,900 fault models according to their similarity to the 15 ESAs' extents (Abe et al., 2015, JpGU). The major concept of the occurrence probability distribution is: (i) earthquakes rupturing any of the 15 ESAs that ERC (2013) showed occur most likely; (ii) earthquakes rupturing an ESA whose along-trough extent is the same as any of the 15 ESAs but whose trough-normal extent differs occur second most likely; (iii) earthquakes rupturing an ESA whose along-trough and trough-normal extents both differ from any of the 15 ESAs occur rarely. Procedures for tsunami simulation and probabilistic tsunami hazard synthesis are the same as in Hirata et al. (2015).
A tsunami hazard map, synthesized under the assumption that Nankai earthquakes follow a renewal process based on the BPT distribution with a mean recurrence interval of 88.2 years (ERC, 2013) and an aperiodicity of 0.22 (the median of the range 0.20 to 0.24 recommended by ERC, 2013), suggests that several coastal segments along the southwest coast of Shikoku Island, the southeast coast of the Kii Peninsula, and the west coast of the Izu Peninsula have an exceedance probability of over 26% that the maximum water rise exceeds 10 meters at some coastal point within the next 30 years.
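The renewal-probability calculation underlying such a map can be sketched as follows. The BPT distribution is the inverse Gaussian, and the elapsed time of 68 years (since the mid-1940s Nankai/Tonankai events, as of 2013) is an assumption for illustration:

```python
import math

# Conditional 30-year probability under a BPT renewal model, using the
# ERC (2013) parameters quoted above: mean recurrence 88.2 yr,
# aperiodicity alpha = 0.22. The elapsed time is an illustrative assumption.
def phi(x):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution."""
    lam = mu / alpha ** 2
    a = math.sqrt(lam / t)
    return phi(a * (t / mu - 1.0)) + math.exp(2.0 * lam / mu) * phi(-a * (t / mu + 1.0))

def conditional_prob(elapsed, window, mu, alpha):
    """P(event in [elapsed, elapsed+window] | no event up to elapsed)."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

p = conditional_prob(elapsed=68.0, window=30.0, mu=88.2, alpha=0.22)
print(f"{p:.0%}")  # on the order of the 60-70% quoted by ERC (2013)
```

With these parameters the conditional probability lands in the 60-70% range stated above, which is the quantity that gets distributed over the fault-model population in the PTHA.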
NASA Astrophysics Data System (ADS)
Gabriel, A. A.; Madden, E. H.; Ulrich, T.; Wollherr, S.
2016-12-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions on fault stress and strength, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable in simulated ground motions. In this presentation we show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions in distinct tectonic settings or for distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emitted seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach
NASA Astrophysics Data System (ADS)
Denolle, M.; Van Houtte, C.
2017-12-01
Stress drops calculated from source spectral studies currently show larger variability than is implied by empirical ground motion models. One potential origin of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw 7.1 Kumamoto, Japan, earthquake, and to other global, moderate-magnitude, strike-slip earthquakes between Mw 5 and Mw 7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best-fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and of its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
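The corner-frequency bias described above can be illustrated with a toy spectral fit. The spectral form, the synthetic corner frequency of 0.5 Hz, and the grid-search fit are all illustrative assumptions, not the paper's Bayesian hierarchical method:

```python
import numpy as np

# Fit a Brune-type source spectrum  u(f) = 1 / (1 + (f/fc)^n)  (unit low-
# frequency level) to noise-free synthetic data generated with n = 1.6,
# once with n free and once with n (wrongly) fixed at 2.
f = np.logspace(-1, 1.3, 200)              # frequency, 0.1 to ~20 Hz
fc_true, n_true = 0.5, 1.6
data = np.log(1.0 / (1.0 + (f / fc_true) ** n_true))

def misfit(fc, n):
    model = np.log(1.0 / (1.0 + (f / fc) ** n))
    return np.sum((model - data) ** 2)

fcs = np.linspace(0.1, 3.0, 600)
fc_free, n_free = min(((fc, n) for fc in fcs for n in np.linspace(1.0, 3.0, 100)),
                      key=lambda p: misfit(*p))
fc_fixed = min(fcs, key=lambda fc: misfit(fc, 2.0))
print(fc_free, n_free)    # recovers ~0.5 Hz and ~1.6
print(fc_fixed)           # biased high when n is fixed at 2
```

Even in this idealized noise-free case, forcing the falloff rate to 2 drags the best-fit corner frequency well above its true value, the direction of bias reported in the abstract.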
NASA Astrophysics Data System (ADS)
Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.
2018-05-01
Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce transient stress perturbations. This method is applied to the locked Hikurangi megathrust (New Zealand), which was surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate that the probability of a M≥7.8 earthquake in the year after the Kaikoura earthquake increased by a factor of 1.3-18 relative to the pre-Kaikoura probability, with an absolute probability in the range of 0.6-7%. We find that the probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.
The 2016 central Italy earthquake sequence: surface effects, fault model and triggering scenarios
NASA Astrophysics Data System (ADS)
Chatzipetros, Alexandros; Pavlides, Spyros; Papathanassiou, George; Sboras, Sotiris; Valkaniotis, Sotiris; Georgiadis, George
2017-04-01
The results of fieldwork performed during the 2016 earthquake sequence around the karstic basins of Norcia and La Piana di Castelluccio, at an altitude of 1400 m, on Monte Vettore (altitude 2476 m) and Vettoretto, as well as the three mapped seismogenic faults, striking NNW-SSE, are presented in this paper. Surface co-seismic ruptures were observed at high altitudes along the Vettore and Vettoretto segment of the fault for several kilometres (~7 km) after the August earthquakes, and were re-activated and expanded northwards during the October earthquakes. Coseismic ruptures and the neotectonic Mt. Vettore fault zone were modelled in detail using images acquired from specifically planned UAV (drone) flights. Ruptures, typically with displacements of up to 20 cm, were observed after the August event both in the scree and weathered mantle (eluvium) and in the bedrock, which consists mainly of fragmented carbonate rocks with small tectonic surfaces. These fractures expanded and new ones formed during the October events, typically with displacements of up to 50 cm, although locally higher displacements of up to almost 2 m were observed. Hundreds of rock falls and landslides were mapped through satellite imagery, using pre- and post-earthquake Sentinel-2A images; several of them were also verified in the field. Based on field mapping results and seismological information, the causative faults were modelled. The model consists of five seismogenic sources, each associated with a strong event in the sequence. The visualisation of the seismogenic sources follows INGV's DISS standards for the Individual Seismogenic Sources (ISS) layer, while the strike, dip and rake of the seismic sources are obtained from selected focal mechanisms. Based on this model, the ground deformation pattern was inferred using Okada's dislocation solution formulae, which shows that the maximum calculated vertical displacement is 0.53 m.
This is in good agreement with the statistical analysis of the observed surface rupture displacement. Stress transfer analysis was also performed on the five modelled seismogenic sources, using seismologically defined parameters. The resulting stress transfer pattern, based on the sequence of events, shows that the causative fault of each event was influenced by loading from the previous ones.
NASA Astrophysics Data System (ADS)
Suleimani, E.; Ruppert, N.; Fisher, M.; West, D.; Hansen, R.
2008-12-01
The Alaska Earthquake Information Center conducts tsunami inundation mapping for coastal communities in Alaska. For many locations in the Gulf of Alaska, the 1964 tsunami generated by the Mw 9.2 Great Alaska earthquake may be the worst-case tsunami scenario. We use the 1964 tsunami observations to verify our numerical model of tsunami propagation and runup; it is therefore essential to use an adequate source function for the 1964 earthquake to reduce the level of uncertainty in the modeling results. Plafker (1969) showed that the 1964 co-seismic slip occurred both on the megathrust and on crustal splay faults. Plafker (2006) suggested that crustal faults were a major contributor to the vertical displacements that generated local tsunami waves. Using eyewitness arrival times of the highest observed waves, he argued that the initial tsunami wave was higher and originated closer to the shore than if it had been generated by slip on the megathrust. We conduct a numerical study of two different source functions of the 1964 tsunami to test whether the crustal splay faults had significant effects on local tsunami runup heights and arrival times. The first source function was developed by Johnson et al. (1996) through joint inversion of far-field tsunami waveforms and geodetic data; the authors did not include crustal faults in the inversion because the contribution of these faults to the far-field tsunami was negligible. The second is the new coseismic displacement model developed by Suito and Freymueller (2008, submitted). This model extends the Montague Island fault farther along the Kenai Peninsula coast and thus reduces slip on the megathrust in that region. We also use an improved geometry of the Patton Bay fault based on deep crustal seismic reflection and earthquake data. We propagate tsunami waves generated by both source models across the Pacific Ocean and record wave amplitudes at the locations of the tide gages that recorded the 1964 tsunami.
As expected, the two sources produce very similar waveforms in the far field that are also in good agreement with the tide gage records. In order to study the near-field tsunami effects, we will construct embedded telescoping bathymetry grids around the tsunami generation area to calculate tsunami arrival times and sea surface heights for both source models of the 1964 earthquake, and use available observation data to verify the model results.
Bayesian exploration of recent Chilean earthquakes
NASA Astrophysics Data System (ADS)
Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah
2016-04-01
The South American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It also serves as a testing ground for novel modeling techniques that combine different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain a better understanding of the distribution of co-seismic slip for these two earthquakes, we derive joint kinematic finite-fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing, thereby allowing us to maximize the spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench, with significant slip apparently reaching, or at least closely approaching, the trench. At the inherent resolution of our models, we also present the relationship of the co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.
Application of Seismic Array Processing to Tsunami Early Warning
NASA Astrophysics Data System (ADS)
An, C.; Meng, L.
2015-12-01
Tsunami wave predictions of current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas, since the tsunami waves arrive before sufficient data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provide faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of the rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling among rupture area, seismic moment and average slip. The slip model is then used as input to the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes of the start of rupture and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The predicted arrival times and wave amplitudes reasonably fit the observations.
The initial focus will be Japan, the Pacific Northwest and Alaska, where dense seismic networks with real-time data telemetry and open data accessibility, such as the Japanese Hi-net (>800 instruments) and the EarthScope USArray Transportable Array (~400 instruments), are established.
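As a minimal sketch of the step from a back-projected rupture ellipse to a simple slip model, the snippet below converts the imaged ellipse into seismic moment and average slip. The rigidity, stress drop and circular-crack moment-area relation are illustrative assumptions, not the empirical scaling actually used in the study:

```python
import numpy as np

MU = 30e9          # crustal rigidity (Pa), assumed
STRESS_DROP = 3e6  # assumed typical stress drop (Pa)

def slip_model_from_ellipse(a_km, b_km):
    """Uniform-slip source from a back-projected rupture ellipse.

    a_km, b_km: semi-axes of the imaged rupture ellipse (km).
    Uses a circular-crack moment-area relation M0 = (16/7) * dsigma * R^3,
    with R the radius of the equal-area circle (an assumption of this
    sketch, not the authors' exact empirical scaling).
    """
    area = np.pi * a_km * b_km * 1e6          # rupture area (m^2)
    radius = np.sqrt(area / np.pi)            # equal-area radius (m)
    m0 = (16.0 / 7.0) * STRESS_DROP * radius**3
    slip = m0 / (MU * area)                   # average slip (m)
    mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)   # moment magnitude
    return m0, slip, mw

# Tohoku-like rupture extent (~500 x 200 km -> semi-axes 250, 100 km)
m0, slip, mw = slip_model_from_ellipse(250.0, 100.0)
print(f"M0 = {m0:.2e} N m, average slip = {slip:.1f} m, Mw = {mw:.1f}")
```

With these assumed constants the Tohoku-like ellipse yields a magnitude near Mw 9 and roughly 10 m of average slip, the kind of first-order source that can be handed to a tsunami solver such as COMCOT.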
NASA Astrophysics Data System (ADS)
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors, including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with, despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed at an uncertain value. We propose a practical framework to address this limitation, following a previously implemented method that explores the impact of uncertainties in the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem through a misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, the inversion is overconfident in the data and source parameters are not reliably estimated. In contrast, including uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes.
Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
NASA Astrophysics Data System (ADS)
Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit
2017-10-01
Strong-motion records of the recent Gorkha, Nepal earthquake (Mw 7.8), its strong aftershocks and seismic events in the Hindu Kush region have been analysed for estimation of source parameters. The Mw 7.8 Gorkha earthquake of 25 April 2015 and six of its aftershocks in the magnitude range 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the Gorkha main shock. Acceleration data from eight earthquakes in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both the source and the recording site, as well as for site amplification. Strong-motion data from six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Qβ) at the recording site. A frequency-dependent Qβ(f) = 124f^0.98 is computed at the Ghuttu station using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid-search algorithm. The computed seismic moments, stress drops and source radii of the earthquakes used in this work range over 8.20 × 10^16 to 5.72 × 10^20 N m, 7.1 to 50.6 bars and 3.55 to 36.70 km, respectively. The results are consistent with the available values obtained by other agencies.
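The Brune-model source parameters quoted above follow from two standard closed-form relations, sketched below. The shear-wave velocity is an assumed value, and the example moment and corner frequency are illustrative rather than taken from the study's tables:

```python
import numpy as np

BETA = 3500.0   # shear-wave velocity near the source (m/s), assumed

def brune_source_params(m0, fc):
    """Source radius and stress drop from Brune's (1970) circular model.

    m0 : seismic moment (N m); fc : S-wave corner frequency (Hz).
    r = 2.34 * beta / (2 * pi * fc); dsigma = 7 * m0 / (16 * r^3).
    """
    r = 2.34 * BETA / (2.0 * np.pi * fc)
    dsigma = 7.0 * m0 / (16.0 * r**3)
    return r, dsigma

# illustrative values near the lower end of the reported ranges
r, dsigma = brune_source_params(m0=8.2e16, fc=0.35)
print(f"source radius = {r/1e3:.2f} km, stress drop = {dsigma/1e5:.1f} bar")
```

For this moment and corner frequency the relations return a radius of roughly 3.7 km and a stress drop of about 7 bar, consistent with the lower bounds of the ranges reported above.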
A global earthquake discrimination scheme to optimize ground-motion prediction equation selection
Garcia, Daniel; Wald, David J.; Hearne, Michael
2012-01-01
We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
Fault interaction and stress triggering of twentieth century earthquakes in Mongolia
Pollitz, F.; Vergnolle, M.; Calais, E.
2003-01-01
A cluster of exceptionally large earthquakes in the interior of Asia occurred from 1905 to 1967: the 1905 M7.9 Tsetserleg and M8.4 Bolnai earthquakes, the 1931 M8.0 Fu Yun earthquake, the 1957 M8.1 Gobi-Altai earthquake, and the 1967 M7.1 Mogod earthquake (sequence). Each of the larger (M ≥ 8) earthquakes involved strike-slip faulting averaging more than 5 m and rupture lengths of several hundred kilometers. Available geologic data indicate that recurrence intervals on the major source faults are several thousand years, and distances of about 400 km separate the respective rupture areas. We propose that the occurrences of these and many smaller earthquakes are related and controlled to a large extent by stress changes generated by the compounded static deformation of the preceding earthquakes and subsequent viscoelastic relaxation of the lower crust and upper mantle beneath Mongolia. We employ a spherically layered viscoelastic model constrained by the 1994-2002 GPS velocity field in western Mongolia [Vergnolle et al., 2003]. Using the succession of twentieth century earthquakes as sources of deformation, we then analyze the time-dependent change in Coulomb failure stress (Δσf). At remote interaction distances, static Δσf values are small. However, modeled postseismic stress changes typically accumulate to several tenths of a bar over time intervals of decades. Almost all significant twentieth century regional earthquakes (M ≥ 6) with well-constrained fault geometry lie in positive Δσf lobes of magnitude about +0.5 bar. Our results suggest that significant stress transfer is possible among continental faults separated by hundreds of kilometers and on timescales of decades. Copyright 2003 by the American Geophysical Union.
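The Coulomb failure stress change Δσf used in such analyses can be sketched for a single receiver fault as follows. The coordinate and sign conventions, the effective friction coefficient and the example stress tensor are assumptions of this illustration, not the paper's viscoelastic calculation:

```python
import numpy as np

def coulomb_stress_change(stress, strike, dip, rake, mu_eff=0.4):
    """dCFS = dtau + mu_eff * dsigma_n on a receiver fault.

    stress : 3x3 stress-change tensor (Pa) in north-east-down coordinates,
             tension positive (a convention assumed for this sketch).
    strike, dip, rake : receiver-fault geometry (degrees).
    """
    phi, delta, lam = np.radians([strike, dip, rake])
    # fault normal and slip direction (Aki & Richards conventions, NED)
    n = np.array([-np.sin(delta) * np.sin(phi),
                   np.sin(delta) * np.cos(phi),
                  -np.cos(delta)])
    u = np.array([np.cos(lam) * np.cos(phi) + np.sin(lam) * np.cos(delta) * np.sin(phi),
                  np.cos(lam) * np.sin(phi) - np.sin(lam) * np.cos(delta) * np.cos(phi),
                  -np.sin(lam) * np.sin(delta)])
    t = stress @ n                  # traction on the receiver plane
    dtau = t @ u                    # shear stress change in the slip direction
    dsn = t @ n                     # normal stress change (unclamping positive)
    return dtau + mu_eff * dsn

# 1 bar of horizontal shear resolved on a vertical N-S strike-slip receiver
shear = np.zeros((3, 3))
shear[0, 1] = shear[1, 0] = 1e5    # 1 bar = 1e5 Pa
print(f"dCFS = {coulomb_stress_change(shear, 0.0, 90.0, 0.0):.0f} Pa")
# prints dCFS = 100000 Pa (pure shear, no normal-stress change on this plane)
```

A receiver in a positive Δσf lobe of +0.5 bar, as in the Mongolian case, would correspond to a return value of +5e4 Pa for its geometry.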
NASA Astrophysics Data System (ADS)
Wu, B.; Oglesby, D. D.; Ghosh, A.; LI, B.
2017-12-01
Very low frequency earthquakes (VLFEs) and low frequency earthquakes (LFEs) are two main types of seismic signal observed during slow earthquakes. These phenomena differ from standard ("fast") earthquakes in many ways. In contrast to seismic signals generated by standard earthquakes, these two types of signal lack energy at higher frequencies and have very low stress drops of around 10 kPa. In addition, the moment-duration scaling relationship shown by VLFEs and LFEs is linear (M ∝ T) instead of the M ∝ T^3 relation for regular earthquakes. However, if investigated separately over a small range of magnitudes and durations, the scaling relationship for each is somewhat closer to M ∝ T^3 than to M ∝ T. The physical mechanism of VLFEs and LFEs is still not clear, although some models have explored this issue [e.g., Gomberg, 2016b]. Here we investigate the behavior of dynamic rupture models with a ductile-like viscous frictional property [Ando et al., 2010; Nakata et al., 2011; Ando et al., 2012] on a single patch. In the model's framework, VLFE source patches are characterized by a high viscous damping term η and a larger area (~25 km^2), while sources that approach LFE properties have a low viscous damping term η and a smaller patch area (<0.5 km^2). Using both analytical and numerical analyses, we show how and why this model may help to explain current observations. The model supports the idea that VLFEs and LFEs are distinct events, possibly rupturing distinct patches with their own stress dynamics [Hutchison and Ghosh, 2016]. The model also makes predictions that can be tested in future observational experiments.
Broadband Ground Motion Simulation Recipe for Scenario Hazard Assessment in Japan
NASA Astrophysics Data System (ADS)
Koketsu, K.; Fujiwara, H.; Irikura, K.
2014-12-01
The National Seismic Hazard Maps for Japan, which consist of probabilistic seismic hazard maps (PSHMs) and scenario earthquake shaking maps (SESMs), have been published every year since 2005 by the Earthquake Research Committee (ERC) of the Headquarters for Earthquake Research Promotion, which was established within the Japanese government after the 1995 Kobe earthquake. The publication was interrupted by problems in the PSHMs revealed by the 2011 Tohoku earthquake, and the Subcommittee for Evaluations of Strong Ground Motions ('Subcommittee') has been examining these problems for two and a half years (ERC, 2013; Fujiwara, 2014). However, the SESMs, and the broadband ground motion simulation recipe used in them, are still valid at least for crustal earthquakes. Here, we outline this recipe and show the results of validation tests for it. Irikura and Miyake (2001) and Irikura (2004) developed a recipe for simulating strong ground motions from future crustal earthquakes based on a characterization of their source models (the Irikura recipe). The result of the characterization is called a characterized source model, in which a rectangular fault includes a few rectangular asperities. Each asperity, and the background area surrounding the asperities, has its own uniform stress drop. The Irikura recipe defines the parameters of the fault and asperities, and how to simulate broadband ground motions from the characterized source model. The recipe for the SESMs was constructed following the Irikura recipe (ERC, 2005). The National Research Institute for Earth Science and Disaster Prevention (NIED) then implemented simulation codes following this recipe to generate SESMs (Fujiwara et al., 2006; Morikawa et al., 2011). The Subcommittee in 2002 validated a preliminary version of the SESM recipe by comparing simulated and observed ground motions for the 2000 Tottori earthquake.
In 2007 and 2008, the Subcommittee carried out detailed validations of the current version of the SESM recipe and the NIED codes using ground motions from the 2005 Fukuoka earthquake. Irikura and Miyake (2011) summarized the latter validations, concluding that the ground motions were successfully simulated as shown in the figure. This indicates that the recipe has enough potential to generate broadband ground motions for scenario hazard assessment in Japan.
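As a hedged sketch of a characterized source model in the spirit of the Irikura recipe, the snippet below derives outer and inner fault parameters from a moment and rupture area. It is deliberately simplified: a circular-crack average stress drop, a single combined asperity area of ~22% (after Somerville et al., 1999) and an assumed rigidity stand in for the full recipe's rules:

```python
import numpy as np

MU = 32e9  # crustal rigidity (Pa), assumed

def characterized_source(m0, area_km2, asperity_fraction=0.22):
    """Outer/inner parameters in the spirit of the Irikura recipe (sketch).

    m0 : seismic moment (N m); area_km2 : rupture area (km^2).
    Average stress drop from a circular crack, dsigma = (7/16) m0 / R^3;
    asperity stress drop from dsigma_a = dsigma * (S / Sa). These relations
    are simplifying assumptions, not the complete recipe.
    """
    S = area_km2 * 1e6                              # m^2
    R = np.sqrt(S / np.pi)                          # equal-area radius (m)
    dsigma_avg = (7.0 / 16.0) * m0 / R**3           # average stress drop (Pa)
    dsigma_asp = dsigma_avg / asperity_fraction     # asperity stress drop (Pa)
    slip_avg = m0 / (MU * S)                        # average slip (m)
    return dsigma_avg, dsigma_asp, slip_avg

# 2000 Tottori-like event: Mw ~6.7, rupture ~33 x 18 km (illustrative numbers)
ds, dsa, slip = characterized_source(m0=1.2e19, area_km2=33 * 18)
print(f"avg stress drop {ds/1e6:.1f} MPa, asperity {dsa/1e6:.1f} MPa, slip {slip:.2f} m")
```

These illustrative inputs return an average stress drop near 2 MPa and an asperity stress drop near 9 MPa, in the range typically quoted for characterized crustal source models.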
NASA Astrophysics Data System (ADS)
Necmioglu, O.; Meral Ozel, N.
2014-12-01
Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. The complex tectonic setting makes accurate a priori assumptions of earthquake source parameters difficult, and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and the lack of offshore sea-level measurements. In addition, the scientific community has had to abandon the paradigm of a ''maximum earthquake'' predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010), and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as for more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic tsunami hazard analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) in each 0.5° × 0.5° bin for 0-40 km depth (310 bins in total) and for 40-100 km depth (92 bins in total) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned through harmonization of the available databases and previous studies. These parameters have been used as input for the deterministic tsunami hazard modeling. Nested tsunami simulations of 6-h duration, with coarse (2 arc-min) and medium (1 arc-min) grid resolutions, have been run at the EC-JRC premises for the Black Sea and the Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each defined source, using the shallow-water finite-difference SWAN code (Mader, 2004), for magnitudes from 6.5 to the Mwmax defined for each bin, in Mw increments of 0.1.
The results show that not only earthquakes resembling well-known historical events, such as the AD 365 or AD 1303 Hellenic Arc earthquakes, but also earthquakes with lower magnitudes contribute to the tsunami hazard in the study area.
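A minimal sketch of how the characteristic sources for one bin can be expanded over the 6.5 to Mwmax range is shown below. The Wells and Coppersmith (1994) all-slip-type area scaling, the fixed aspect ratio and the rigidity are assumptions for illustration, not the parameters of the study:

```python
import numpy as np

MU = 33e9  # rigidity (Pa), assumed

def bin_sources(mw_max, mw_min=6.5, dmw=0.1, aspect=2.0):
    """Yield source parameters for one bin, Mw from mw_min to mw_max.

    Rupture area from Wells & Coppersmith (1994, all slip types):
    log10(A_km2) = -3.49 + 0.91 * Mw -- one common choice, assumed here.
    """
    for mw in np.arange(mw_min, mw_max + 1e-9, dmw):
        m0 = 10 ** (1.5 * mw + 9.1)                  # seismic moment (N m)
        area = 10 ** (-3.49 + 0.91 * mw)             # rupture area (km^2)
        length = np.sqrt(aspect * area)              # km, fixed aspect ratio
        width = length / aspect                      # km
        slip = m0 / (MU * area * 1e6)                # uniform slip (m)
        yield mw, length, width, slip

for mw, L, W, D in bin_sources(7.0):
    print(f"Mw {mw:.1f}: L = {L:.0f} km, W = {W:.0f} km, slip = {D:.2f} m")
```

Each (length, width, slip) triple, combined with the bin's strike, dip, rake and depth, defines one fault-plane input for the tsunami simulation of that magnitude step.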
Non-Poissonian Distribution of Tsunami Waiting Times
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2007-12-01
Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog (γ = 0.63-0.67) [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo than in the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution: the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source.
Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
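The over-abundance of short waiting times under Model (1) can be illustrated by comparing a gamma distribution with shape γ < 1 to an exponential distribution with the same mean. The shape value below is taken from the range quoted above; the "short waiting time" threshold of one tenth of the mean is an arbitrary choice for the comparison:

```python
from scipy import stats

gamma_shape = 0.65        # shape from the global tsunami catalog (gamma = 0.63-0.67)
mean_wait = 1.0           # waiting time normalized by its mean

# gamma and exponential distributions with identical means
gam = stats.gamma(a=gamma_shape, scale=mean_wait / gamma_shape)
expo = stats.expon(scale=mean_wait)

t = 0.1 * mean_wait       # a "short" waiting time, 10% of the mean
print(f"P(wait < 0.1 mean): gamma {gam.cdf(t):.3f} vs exponential {expo.cdf(t):.3f}")
```

With γ = 0.65 the gamma model assigns roughly twice the probability to waiting times shorter than a tenth of the mean that the Poisson (exponential) model does, which is the clustering signature described above.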
Yellowstone volcano-tectonic microseismic cycles constrain models of migrating volcanic fluids
NASA Astrophysics Data System (ADS)
Massin, F.; Farrell, J.; Smith, R. B.
2011-12-01
The objective of our research is to evaluate the source properties of extensive earthquake swarms in and around the 0.64-Myr Yellowstone caldera, Yellowstone National Park, which is also the locus of widespread hydrothermal activity and ground deformation. We use earthquake waveform data to investigate seismic wave multiplets that occur within discrete earthquake sequences. Waveform cross-correlation coefficients are computed from data acquired at six high-quality stations, and nearly identical earthquakes are merged into multiplets. Multiplets provide important indicators of the rupture process of distinct seismogenic structures. Our multiplet database allowed evaluation of the seismic-source chronology from 1992 to 2010. We assess the evolution of micro-earthquake triggering by evaluating the evolution of earthquake rates and magnitudes. Some striking differences appear between two kinds of seismic swarms: 1) swarms with a high rate of repeating earthquakes (more than 200 events per day), and 2) swarms with a low rate of repeating earthquakes (fewer than 20 events per day). The 2010 Madison Plateau (western caldera) and the 2008-2009 Yellowstone Lake (eastern caldera) earthquake swarms are two examples representing, respectively, cascading relaxation of a uniform stress field and a highly concentrated stress perturbation induced by migrating material. The repeating-earthquake methodology was then used to characterize the composition of the migrating material by modelling its time-space migration pattern with experimental thermo-physical simulations of the solidification of a fluid-filled propagating dike. Comparison of our results with independent GPS deformation data suggests a most-likely model of rhyolitic-granitic magma intrusion along a vertical dike outlined by the pattern of earthquakes. The magma-hydrothermal mix was modeled with a temperature of 800°C-900°C and an average volumetric injection flux between 1.5 and 5 m3/s.
Our interpretation is that the Yellowstone Lake swarm was caused by magma and hydrothermal fluids migrating laterally, at ~1000 m per day, from ~12 km to 2 km depth, with earthquake nucleation propagating from south to north. The causative magmatic fluid came within a few km of the Earth's surface but did not reach it because of its low density contrast with the host rock. We also used multiplets for precise earthquake relocation with the P- and S-wave three-dimensional velocity models established previously for Yellowstone. Most of the repeating earthquakes are located in the northwestern part of the caldera and in the Hebgen Lake fault system, west of the caldera, which appears to be the most active multiplet generator in Yellowstone. We are also evaluating multiplets for earthquake focal mechanism determinations and magmatic source property studies. The anomalous multiplet-triggering zone around the Hebgen Lake fault system, for example, is also a research focus for multiplet stress simulation, and we will present results on how multiplets can be used to investigate the volcano-tectonic stress interactions between the pre-existing ~15-Myr Basin and Range normal faults and the superimposed effects of the 2-Myr Yellowstone volcanism on those pre-existing structures.
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.
2014-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low- and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. The BBP then calculates a number of goodness-of-fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for that event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems.
Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
NASA Astrophysics Data System (ADS)
Taymaz, Tuncay; Yolsal-Çevikbilen, Seda; Ulutaş, Ergin
2016-04-01
The finite-fault source rupture models and numerical simulations of tsunami waves generated by the 28 October 2012 Queen Charlotte Islands (Mw 7.8) and 16 September 2015 Illapel, Chile (Mw 8.3) earthquakes are presented. These subduction-zone earthquakes have reverse faulting mechanisms with small strike-slip components, clearly reflecting the characteristics of convergence zones. The finite-fault slip models of the 2012 Queen Charlotte and 2015 Chile earthquakes are estimated from a back-projection method that uses teleseismic P waveforms to integrate the direct P phase with phases reflected from structural discontinuities near the source. Non-uniform rupture models of the fault plane, obtained from the finite-fault modeling, are used to describe the vertical displacement of the seabed. In general, the vertical displacement of the water surface is taken to be the same as the ocean-bottom displacement and is assumed to be responsible for the initial water-surface deformation that gives rise to the tsunami waves. In this study, the seabed displacement was calculated using an elastic dislocation algorithm. The results of the numerical tsunami simulations are compared with tide-gauge and Deep-ocean Assessment and Reporting of Tsunami (DART) buoy records. De-tiding, de-trending, low-pass and high-pass filters were applied to detect tsunami waves in the deep-ocean sensor and tide-gauge records. As an example, the observed records and simulation results show that the 2012 Queen Charlotte Islands earthquake generated tsunami waves of about 1 m at Maui and Hilo (Hawaii), 5 hours and 30 minutes after the earthquake. Furthermore, the calculated amplitudes and time series of the tsunami waves of the recent 2015 Illapel (Chile) earthquake exhibit good agreement with the tide-gauge and DART records, except at the Valparaiso and Pichidangui (Chile) stations.
This project is supported by The Scientific and Technological Research Council of Turkey (TUBITAK Project No: CAYDAG-114Y066).
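The de-tiding/filtering step mentioned above can be sketched with standard tools: detrend a synthetic 1-min tide-gauge record, then high-pass it so the semidiurnal tide drops out while tsunami-band periods survive. The corner period, filter order, and synthetic amplitudes are illustrative choices, not the study's processing parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

# Synthetic 1-min tide-gauge record: linear drift + semidiurnal tide +
# a 20-minute-period "tsunami" arriving halfway through the day.
fs = 1.0 / 60.0                        # one sample per minute [Hz]
t = np.arange(0, 24 * 3600, 60.0)      # one day [s]
tide = 1.0 * np.sin(2 * np.pi * t / (12.42 * 3600))
drift = 1e-5 * t
tsunami = 0.2 * np.sin(2 * np.pi * t / (20 * 60)) * (t > 12 * 3600)
record = drift + tide + tsunami

# De-trend, then high-pass with a 3 h corner to suppress the tide while
# keeping tsunami-band periods (minutes to a couple of hours).
x = detrend(record)
wn = (1.0 / (3 * 3600)) / (fs / 2.0)   # normalized corner frequency
b, a = butter(4, wn, btype="highpass")
filtered = filtfilt(b, a, x)
```

After filtering, the ~0.2 m tsunami oscillation stands out clearly against the residual tide leakage, which is what makes detection in DART and tide-gauge records practical.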
Development of optimization-based probabilistic earthquake scenarios for the city of Tehran
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Peyghaleh, E.
2016-01-01
This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could enhance the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment.
The reduced set is representative of the contributions of all possible earthquakes but requires far less computation power. The authors have used this approach for risk assessment aimed at identifying the effectiveness and profitability of risk mitigation measures, using an optimization model for resource allocation. Based on the error-computation trade-off, 62 earthquake scenarios are chosen for this purpose.
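The mixed-integer linear program at the heart of the scenario-reduction step can be sketched in a few lines. The toy below (illustrative sizes, a random "hazard contribution" matrix, and a big-M link between weights and binary selection flags; requires SciPy >= 1.9 for `scipy.optimize.milp`) selects at most K scenarios and weights so that the reduced-set hazard curve matches the full-set curve with minimal absolute error. It mirrors the structure described above, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
N, J, K = 8, 5, 3            # scenarios, hazard levels, max scenarios kept
h = rng.random((N, J))       # hazard contribution of scenario i at level j
w_true = np.zeros(N)
w_true[[1, 4, 6]] = [0.5, 1.2, 0.8]
H = h.T @ w_true             # "full-set" hazard curve to reproduce

# Variables z = [w (N weights), x (N binary flags), eps (J absolute errors)].
M = 10.0                     # big-M upper bound on any weight
nvar = 2 * N + J
c = np.concatenate([np.zeros(2 * N), np.ones(J)])      # minimize sum(eps)
integrality = np.concatenate([np.zeros(N), np.ones(N), np.zeros(J)])
bounds = Bounds(np.zeros(nvar),
                np.concatenate([np.full(N, np.inf), np.ones(N),
                                np.full(J, np.inf)]))

rows, lo, up = [], [], []
for j in range(J):           # |h_j . w - H_j| <= eps_j, linearized
    r = np.zeros(nvar); r[:N] = h[:, j]; r[2 * N + j] = -1.0
    rows.append(r); lo.append(-np.inf); up.append(H[j])
    r = np.zeros(nvar); r[:N] = -h[:, j]; r[2 * N + j] = -1.0
    rows.append(r); lo.append(-np.inf); up.append(-H[j])
for i in range(N):           # w_i can be nonzero only if scenario i is kept
    r = np.zeros(nvar); r[i] = 1.0; r[N + i] = -M
    rows.append(r); lo.append(-np.inf); up.append(0.0)
r = np.zeros(nvar); r[N:2 * N] = 1.0                   # at most K kept
rows.append(r); lo.append(-np.inf); up.append(K)

res = milp(c, constraints=LinearConstraint(np.array(rows), lo, up),
           integrality=integrality, bounds=bounds)
selected = np.where(res.x[N:2 * N] > 0.5)[0]           # indices kept
```

With these synthetic data the solver recovers the three scenarios that actually built the full-set curve; in a real application N, J, and the site/return-period error weights would come from the hazard computation itself.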
GPS detection of ionospheric perturbations following the January 17, 1994, Northridge earthquake
NASA Technical Reports Server (NTRS)
Calais, Eric; Minster, J. Bernard
1995-01-01
Sources such as atmospheric or buried explosions and shallow earthquakes producing strong vertical ground displacements generate pressure waves that propagate at infrasonic speeds in the atmosphere. At ionospheric altitudes, low-frequency acoustic waves are coupled to ionospheric gravity waves and induce variations in the ionospheric electron density. Global Positioning System (GPS) data recorded in Southern California were used to compute ionospheric electron content time series for several days preceding and following the January 17, 1994, Mw = 6.7 Northridge earthquake. An anomalous signal beginning several minutes after the earthquake, with time delays that increase with distance from the epicenter, was observed. The signal frequency and phase velocity are consistent with results from numerical models of atmospheric-ionospheric acoustic-gravity waves excited by seismic sources, as well as with previous electromagnetic sounding results. It is believed that these perturbations are caused by the ionospheric response to the strong ground displacement associated with the Northridge earthquake.
Finite-fault source inversion using teleseismic P waves: Simple parameterization and rapid analysis
Mendoza, C.; Hartzell, S.
2013-01-01
We examine the ability of teleseismic P waves to provide a timely image of the rupture history for large earthquakes using a simple, 2D finite‐fault source parameterization. We analyze the broadband displacement waveforms recorded for the 2010 Mw∼7 Darfield (New Zealand) and El Mayor‐Cucapah (Baja California) earthquakes using a single planar fault with a fixed rake. Both of these earthquakes were observed to have complicated fault geometries following detailed source studies conducted by other investigators using various data types. Our kinematic, finite‐fault analysis of the events yields rupture models that similarly identify the principal areas of large coseismic slip along the fault. The results also indicate that the amount of stabilization required to spatially smooth the slip across the fault and minimize the seismic moment is related to the amplitudes of the observed P waveforms and can be estimated from the absolute values of the elements of the coefficient matrix. This empirical relationship persists for earthquakes of different magnitudes and is consistent with the stabilization constraint obtained from the L‐curve in Tikhonov regularization. We use the relation to estimate the smoothing parameters for the 2011 Mw 7.1 East Turkey, 2012 Mw 8.6 Northern Sumatra, and 2011 Mw 9.0 Tohoku, Japan, earthquakes and invert the teleseismic P waves in a single step to recover timely, preliminary slip models that identify the principal source features observed in finite‐fault solutions obtained by the U.S. Geological Survey National Earthquake Information Center (USGS/NEIC) from the analysis of body‐ and surface‐wave data. These results indicate that smoothing constraints can be estimated a priori to derive a preliminary, first‐order image of the coseismic slip using teleseismic records.
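The smoothing trade-off described above is the classic Tikhonov picture. A generic sketch (a random ill-conditioned matrix standing in for the teleseismic kernel, not the authors' empirical relation to the coefficient-matrix elements) traces the L-curve by solving the damped least-squares problem over a range of regularization strengths:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Damped least squares: minimize ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Toy ill-conditioned "coefficient matrix" standing in for the kernel that
# maps slip on fault subpatches to teleseismic P amplitudes.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((12, 12)))
s = np.logspace(0, -6, 12)                # rapidly decaying singular values
A = U[:, :12] @ np.diag(s) @ V.T
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-4 * rng.standard_normal(40)

# Trace the L-curve: residual norm vs. solution norm over a range of lambda.
lams = np.logspace(-6, 0, 13)
residuals, norms = [], []
for lam in lams:
    x = tikhonov_solve(A, b, lam)
    residuals.append(np.linalg.norm(A @ x - b))
    norms.append(np.linalg.norm(x))
```

The corner of the resulting curve is the usual a-posteriori choice of smoothing; the point of the abstract is that a comparable value can be estimated a priori, before any inversion is run.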
NASA Astrophysics Data System (ADS)
Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi
2018-03-01
Small earthquakes occur due to natural tectonic motions and are also induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components in the source moment tensors of hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of that event.
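A minimal numerical sketch of the Bayesian sampling idea (not the paper's method, which also treats location and velocity-model uncertainty): for a linear forward model d = Gm, the posterior over six moment-tensor components can be explored with random-walk Metropolis. The matrix G, the noise level, and the prior width below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear forward model: 6 moment-tensor components -> waveform samples.
# G stands in for Green's-function derivatives; here it is random.
n_data, n_par = 60, 6
G = rng.standard_normal((n_data, n_par))
m_true = np.array([1.0, -0.5, 0.3, 0.2, -0.1, 0.4])
sigma = 0.05
d_obs = G @ m_true + sigma * rng.standard_normal(n_data)

def log_post(m):
    # Gaussian likelihood plus a broad Gaussian prior on each component.
    r = d_obs - G @ m
    return -0.5 * np.sum(r**2) / sigma**2 - 0.5 * np.sum(m**2) / 10.0**2

# Random-walk Metropolis sampling of the posterior.
n_steps, step = 30000, 0.01
m = np.zeros(n_par)
lp = log_post(m)
samples = np.empty((n_steps, n_par))
for k in range(n_steps):
    prop = m + step * rng.standard_normal(n_par)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        m, lp = prop, lp_prop
    samples[k] = m

posterior_mean = samples[n_steps // 2:].mean(axis=0)  # discard burn-in
```

The spread of the retained samples, not just their mean, is the payoff: it quantifies how well (or poorly) each moment-tensor component, including any non-double-couple part, is constrained by the data.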
Rapid estimate of earthquake source duration: application to tsunami warning.
NASA Astrophysics Data System (ADS)
Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier
2016-04-01
We present a method for estimating the source duration of the fault rupture based on the high-frequency envelope of teleseismic P waves, inspired by the original work of Ni et al. (2005). The main interest of this seismic parameter is to detect abnormally slow ruptures, which are characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated against two other, independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process, which determines the source time function (Vallée et al., 2011). The estimated source duration is also compared to the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the standpoint of operational tsunami warning, numerical simulations of tsunamis depend strongly on the source estimate: the better the source estimate, the better the tsunami forecast. The source duration is not injected directly into the numerical tsunami simulations, because the kinematics of the source are presently ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake occurring in the shallower part of a subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will decrease while the slip increases, like a 'compact' source (Okal and Hébert, 2007). Conversely, a rapid 'snappy' earthquake with poor tsunami excitation power will be characterized by a higher rigidity modulus and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J.
and Reymond, D. (2014). New tsunami forecast tools for the French Polynesia tsunami warning system. Pure Appl. Geophys., 171. Duputel, Z., Rivera, L., Kanamori, H. and Hayes, G. (2012). W phase source inversion for moderate to large earthquakes. Geophys. J. Int., 189, 1125-1147. Kanamori, H. (1972). Mechanism of tsunami earthquakes. Phys. Earth Planet. Inter., 6, 246-259. Kanamori, H. and Rivera, L. (2008). Source inversion of W phase: speeding up seismic tsunami warning. Geophys. J. Int., 175, 222-238. Newman, A. and Okal, E. (1998). Teleseismic estimates of radiated seismic energy: the E/M0 discriminant for tsunami earthquakes. J. Geophys. Res., 103, 26885-26898. Ni, S., Kanamori, H. and Helmberger, D. (2005). Energy radiation from the Sumatra earthquake. Nature, 434, 582. Okal, E. A. and Hébert, H. (2007). Far-field modeling of the 1946 Aleutian tsunami. Geophys. J. Int., 169, 1229-1238. Vallée, M., Charléty, J., Ferreira, A. M. G., Delouis, B. and Vergoz, J. (2011). SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body wave deconvolution. Geophys. J. Int., 184, 338-358.
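The envelope-duration idea can be illustrated with a synthetic record. The thresholds, window lengths, and noise model below are arbitrary choices for the sketch, not the operational settings of the method:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic high-frequency P coda: a burst of energy lasting ~30 s,
# standing in for the teleseismic P-wave radiation of the rupture.
fs = 20.0
t = np.arange(0, 120, 1 / fs)
true_duration = 30.0
burst = (t > 10) & (t < 10 + true_duration)
rng = np.random.default_rng(3)
x = np.where(burst, 1.0, 0.02) * rng.standard_normal(t.size)

# Envelope via the Hilbert transform, then smoothing, then thresholding:
# the source duration is read as the time the smoothed envelope stays
# above a fraction (here 25%) of its peak.
env = np.abs(hilbert(x))
win = int(2 * fs)                          # 2 s moving average
smooth = np.convolve(env, np.ones(win) / win, mode="same")
above = smooth > 0.25 * smooth.max()
duration = np.count_nonzero(above) / fs
```

An anomalously long duration relative to the seismic moment is exactly the signature of a slow 'tsunami earthquake' that the warning procedure is designed to flag.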
NASA Astrophysics Data System (ADS)
Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène
2016-04-01
The 11 March 2011 Tohoku-Oki event, both earthquake and tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea-level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as slip distribution and rupture history makes it possible to estimate the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the signals predicted using both static and kinematic ruptures to the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the questions of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterizations and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer leads to a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
A study of regional waveform calibration in the eastern Mediterranean
NASA Astrophysics Data System (ADS)
Di Luccio, F.; Pino, N. A.; Thio, H. K.
2003-06-01
We modeled Pnl phases from several moderate-magnitude earthquakes in the eastern Mediterranean to test methods and develop path calibrations for determining source parameters. The study region, which extends from the eastern part of the Hellenic arc to the eastern Anatolian fault, is dominated by moderate earthquakes that can produce significant damage. Our results are useful for analyzing regional seismicity as well as seismic hazard, because very few broadband seismic stations are available in the selected area. For the whole region we have obtained a single velocity model characterized by a 30 km thick crust, low upper-mantle velocities and a very thin lid overlying a distinct low-velocity layer. Our preferred model proved quite reliable for determining focal mechanism and seismic moment across the entire range of selected paths. The source depth is also well constrained, especially for moderate earthquakes.
New seismic source parameterization in El Salvador: implications for seismic hazard
NASA Astrophysics Data System (ADS)
Alonso-Henar, Jorge; Staller, Alejandra; Jesús Martínez-Díaz, José; Benito, Belén; Álvarez-Gómez, José Antonio; Canora, Carolina
2014-05-01
El Salvador is located on the Pacific active margin of Central America, where the subduction of the Cocos Plate under the Caribbean Plate at a rate of ~80 mm/yr is the main seismic source, although the seismic sources located in the Central American Volcanic Arc have been responsible for some of the most damaging earthquakes in El Salvador. The El Salvador Fault Zone (ESFZ) is the main geological structure in El Salvador and accommodates 14 mm/yr of horizontal displacement between the Caribbean Plate and the forearc sliver. The ESFZ is a right-lateral strike-slip fault zone c. 150 km long and 20 km wide. This shear band distributes the deformation among strike-slip faults trending N90º-100ºE and secondary normal faults trending N120º-N170º. The ESFZ is relieved westward by the Jalpatagua Fault and becomes less clear eastward, disappearing at the Golfo de Fonseca. Five sections have been proposed for the whole fault zone. These fault sections are (from west to east): the ESFZ Western Section, San Vicente Section, Lempa Section, Berlin Section and San Miguel Section. Paleoseismic studies carried out in the Berlin and San Vicente sections reveal an important amount of Quaternary deformation and paleoearthquakes up to Mw 7.6. In this study we present 45 capable seismic sources in El Salvador and their preliminary slip rates from geological and GPS data. Detailed GPS results are presented by Staller et al. (2014) in a companion communication. The calculated preliminary slip rates range from 0.5 to 8 mm/yr for individual faults within the ESFZ. We calculated maximum magnitudes from the mapped lengths and paleoseismic observations. We propose different earthquake scenarios, including the potential combined rupture of different fault sections of the ESFZ, resulting in maximum earthquake magnitudes of Mw 7.6. We used deterministic models to calculate the acceleration distribution related to the maximum earthquakes of the different proposed scenarios.
The spatial distributions of seismic accelerations are compared and calibrated using the February 13, 2001 earthquake as a control earthquake. To explore the sources of historical earthquakes, we compare synthetic acceleration maps with the historical earthquakes of March 6, 1719 and June 8, 1917.
Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination
NASA Astrophysics Data System (ADS)
Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.
2008-12-01
Empirically, explosions have been discriminated from natural earthquakes using regional amplitude-ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g., Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full-waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction; Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea/Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
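A band-limited P/S amplitude ratio of the kind used for discrimination can be computed in a few lines. The windows, frequency band, and synthetic wavelets below are illustrative; an earthquake-like record, with strong S relative to P, gives a ratio below 1:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_amplitude(x, fs, f1, f2):
    """Peak absolute amplitude of x band-pass filtered to [f1, f2] Hz."""
    b, a = butter(4, [f1 / (fs / 2), f2 / (fs / 2)], btype="bandpass")
    return np.max(np.abs(filtfilt(b, a, x)))

# Toy regional record: a P wave followed by a larger S wave (earthquake-like).
fs = 50.0
t = np.arange(0, 60, 1 / fs)
p_wave = 1.0 * np.exp(-0.5 * ((t - 10) / 1.0) ** 2) * np.sin(2 * np.pi * 6 * t)
s_wave = 3.0 * np.exp(-0.5 * ((t - 25) / 2.0) ** 2) * np.sin(2 * np.pi * 4 * t)
trace = p_wave + s_wave + 0.01 * np.random.default_rng(4).standard_normal(t.size)

# Window P and S separately and form the ratio in a fixed frequency band.
p_win = trace[(t > 8) & (t < 14)]
s_win = trace[(t > 21) & (t < 31)]
ratio = band_amplitude(p_win, fs, 2.0, 8.0) / band_amplitude(s_win, fs, 2.0, 8.0)
```

Explosions radiate S waves inefficiently, so the same measurement on an explosion-like record yields a markedly higher ratio; path corrections such as MDAC are what make these ratios comparable across stations and distances.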
Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment
NASA Astrophysics Data System (ADS)
Melgar Moctezuma, Diego
This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high-rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. This algorithm will be demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah and 2011 Mw 9.0 Tohoku-oki events. This dissertation will also show that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental in quickly determining basic parameters of the earthquake source. We will show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and, most importantly, kinematic slip inversions. Throughout the dissertation, special emphasis will be placed on how to compute these source models with minimal interaction from a network operator. Finally, we will show that the incorporation of offshore data such as ocean-bottom pressure and RTK-GPS buoys can better constrain the shallow slip of large subduction events. We will demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors are detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.
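One common way to combine the two data streams, in the spirit of (but not identical to) the seismogeodetic algorithm described here, is a Kalman filter that integrates the accelerometer as a control input and corrects with GPS displacement. All rates, noise levels, and the 1-D motion below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
dt = 0.01                                  # 100 Hz accelerometer
t = np.arange(0.0, 20.0, dt)

# Synthetic ground motion: a smooth step (permanent offset) plus a transient.
true_d = 0.3 / (1.0 + np.exp(-2.0 * (t - 5.0)))
true_d += 0.05 * np.sin(2 * np.pi * 0.5 * t) * np.exp(-((t - 5.0) / 2.0) ** 2)
true_v = np.gradient(true_d, dt)
true_a = np.gradient(true_v, dt)

acc_obs = true_a + 0.05 * rng.standard_normal(t.size)   # noisy accelerometer
gps_obs = true_d + 0.01 * rng.standard_normal(t.size)   # noisy GPS positions

# Kalman filter: state [displacement, velocity]; the accelerometer drives
# the prediction, 1 Hz GPS displacement drives the update.
q, r = 0.05, 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
Q = q**2 * np.outer(B, B)
Hm = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
est = np.empty(t.size)
for k in range(t.size):
    x = F @ x + B * acc_obs[k]             # predict with accelerometer input
    P = F @ P @ F.T + Q
    if k % 100 == 0:                       # 1 Hz GPS update
        y = gps_obs[k] - Hm @ x
        S = Hm @ P @ Hm.T + r**2
        K = (P @ Hm.T) / S
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ Hm) @ P
    est[k] = x[0]

naive = np.cumsum(np.cumsum(acc_obs) * dt) * dt  # accelerometer-only drifts
```

The filtered trace keeps the permanent offset (from GPS) and the high-frequency detail (from the accelerometer), which is the broadband property the dissertation exploits.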
Ground motion models used in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.
2015-01-01
The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.
Imanishi, K.; Takeo, M.; Ellsworth, W.L.; Ito, H.; Matsuzawa, T.; Kuwahara, Y.; Iio, Y.; Horiuchi, S.; Ohmi, S.
2004-01-01
We use an inversion method based on stopping phases (Imanishi and Takeo, 2002) to estimate the source dimension, ellipticity, and rupture velocity of microearthquakes and investigate the scaling relationships between source parameters. We studied 25 earthquakes, ranging in size from M 1.3 to M 2.7, that occurred between May and August 1999 in the western Nagano Prefecture, Japan, which is characterized by a high rate of shallow earthquakes. The data consist of seismograms recorded in an 800-m borehole and at 46 surface and 2 shallow borehole seismic stations whose spacing is a few kilometers. These data were recorded with a sampling frequency of 10 kHz. In particular, the 800-m-borehole data provide a wide frequency bandwidth with greatly reduced ground noise and coda wave amplitudes compared with surface recordings. High-frequency stopping phases appear in the body waves in Hilbert transform pairs and are readily detected on seismograms recorded in the 800-m borehole. After correcting both borehole and surface data for attenuation, we also measure the rise time, which is defined as the interval from the arrival time of the direct wave to the timing of the maximum amplitude in the displacement pulse. The differential times of the stopping phases and the rise times were used to obtain source parameters. We found that several microearthquakes propagated unilaterally, suggesting that not all microearthquakes can be modeled with a simple circular crack model. Static stress drops range from approximately 0.1 to 2 MPa and do not vary with seismic moment. It seems that the breakdown in stress drop scaling seen in previous studies using surface data is simply an artifact of attenuation in the crust. The average value of rupture velocity does not depend on earthquake size and is similar to those reported for moderate and large earthquakes.
It is likely that earthquakes are self-similar over a wide range of earthquake size and that the dynamics of small and large earthquakes are similar.
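The static stress drops quoted above follow, for a circular crack, from Eshelby's relation. A small sketch (the 80 m source radius is an illustrative value, not one of the paper's measurements):

```python
import numpy as np

def moment_from_magnitude(mw):
    """Seismic moment M0 [N·m] from moment magnitude (Hanks & Kanamori):
    M0 = 10 ** (1.5 * Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def static_stress_drop(m0, radius):
    """Eshelby circular-crack static stress drop [Pa]:
    delta_sigma = (7/16) * M0 / a**3 for source radius a [m]."""
    return (7.0 / 16.0) * m0 / radius**3

# Example in the range of the study: an M 2.0 microearthquake with an
# assumed 80 m source radius.
m0 = moment_from_magnitude(2.0)            # ~1.3e12 N·m
dsigma = static_stress_drop(m0, 80.0)      # ~1 MPa, inside the 0.1-2 MPa range
```

Because the radius enters cubed, a modest attenuation-induced bias in the inferred source dimension shifts the stress drop by nearly an order of magnitude, which is why the borehole data matter for the scaling question.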
NASA Astrophysics Data System (ADS)
Stein, R. S.; Sevilgen, V.; Sevilgen, S.; Kim, A.; Jacobson, D. S.; Lotto, G. C.; Ely, G.; Bhattacharjee, G.; O'Sullivan, J.
2017-12-01
Temblor quantifies and personalizes earthquake risk and offers solutions by connecting users with qualified retrofit and insurance providers. Temblor's daily blog on current earthquakes, seismic swarms, eruptions, floods, and landslides makes the science accessible to the public. Temblor is available on iPhone, Android, and mobile web app platforms (http://temblor.net). The app presents both scenario (worst case) and probabilistic (most likely) financial losses for homes and commercial buildings, and estimates the impact of seismic retrofit and insurance on the losses and safety. Temblor's map interface has clickable earthquakes (with source parameters and links) and active faults (name, type, and slip rate) around the world, and layers for liquefaction, landslides, tsunami inundation, and flood zones in the U.S. The app draws from the 2014 USGS National Seismic Hazard Model and the 2014 USGS Building Seismic Safety Council ShakeMap scenario database. The Global Earthquake Activity Rate (GEAR) model is used worldwide, with active faults displayed in 75 countries. The Temblor real-time global catalog is merged from global and national catalogs, with aftershocks discriminated from mainshocks. Earthquake notifications are issued to Temblor users within 30 seconds of their occurrence, with approximate locations and magnitudes that are rapidly refined in the ensuing minutes. Launched in 2015, Temblor has 650,000 unique users, including 250,000 in the U.S. and 110,000 in Chile, as well as 52,000 Facebook followers. All data shown in Temblor are gathered from authoritative or published sources and synthesized to be intuitive and actionable for the public.
Principal data sources include USGS, FEMA, EMSC, GEM Foundation, NOAA, GNS Science (New Zealand), INGV (Italy), PHIVOLCS (Philippines), GSJ (Japan), Taiwan Earthquake Model, EOS Singapore (Southeast Asia), MTA (Turkey), PB2003 (plate boundaries), CICESE (Baja California), California Geological Survey, and 20 other state geological surveys and county agencies.
NASA Astrophysics Data System (ADS)
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi
2018-06-01
Seismic wave propagation from shallow subduction-zone earthquakes can be strongly affected by 3D heterogeneous structures, such as oceanic water and sedimentary layers with irregular thicknesses. Synthetic waveforms must incorporate these effects so that they reproduce the characteristics of the observed waveforms properly. In this paper, we evaluate the accuracy of synthetic waveforms for small earthquakes in the source area of the 2011 Tohoku-Oki earthquake (MJMA 9.0) at the Japan Trench. We compute the synthetic waveforms on the basis of a land-ocean unified 3D structure model using our heterogeneity, oceanic layer, and topography finite-difference method. In estimating the source parameters, we apply the first-motion augmented moment tensor (FAMT) method that we have recently proposed to minimize biases due to inappropriate source parameters. We find that, among several estimates, only the FAMT solutions are located very near the plate interface, which demonstrates the importance of using a 3D model for ensuring the self-consistency of the structure model, source position, and source mechanisms. Using several different filter passbands, we find that the full waveforms with periods longer than about 10 s can be reproduced well, while the degree of waveform fitting becomes worse for periods shorter than about 10 s. At periods around 4 s, the initial body waveforms can be modeled, but the later large-amplitude surface waves are difficult to reproduce correctly. The degree of waveform fitting depends on the source location, with better fittings for deep sources near land. We further examine the 3D sensitivity kernels: for the period of 12.8 s, the kernel shows a symmetric pattern with respect to the straight path between the source and the station, while for the period of 6.1 s, a curved pattern is obtained. Also, the range of the sensitive area becomes shallower for the latter case.
Such a 3D spatial pattern cannot be predicted by 1D Earth models and indicates the strong effects of 3D heterogeneity on short-period (≲10 s) waveforms. Thus, it would be necessary to consider such 3D effects when improving the structure and source models.
Inducing in situ, nonlinear soil response applying an active source
Johnson, P.A.; Bodin, P.; Gomberg, J.; Pearce, F.; Lawrence, Z.; Menq, F.-Y.
2009-01-01
It is well known that soil sites have a profound effect on ground motion during large earthquakes. The complex structure of soil deposits and the highly nonlinear constitutive behavior of soils largely control nonlinear site response at soil sites. Measurements of nonlinear soil response under natural conditions are critical to advancing our understanding of soil behavior during earthquakes. Many factors limit the use of earthquake observations to estimate nonlinear site response, such that quantitative characterization of nonlinear behavior relies almost exclusively on laboratory experiments and modeling of wave propagation. Here we introduce a new method for in situ characterization of the nonlinear behavior of a natural soil formation using measurements obtained immediately adjacent to a large vibrator source. To our knowledge, we are the first group to propose and test such an approach. Employing a large, surface vibrator as a source, we measure the nonlinear behavior of the soil by incrementally increasing the source amplitude over a range of frequencies and monitoring changes in the output spectra. We apply a homodyne algorithm for measuring spectral amplitudes, which provides robust signal-to-noise ratios at the frequencies of interest. Spectral ratios are computed between the receivers and the source as well as receiver pairs located in an array adjacent to the source, providing the means to separate source and near-source nonlinearity from pervasive nonlinearity in the soil column. We find clear evidence of nonlinearity in significant decreases in the frequency of peak spectral ratios, corresponding to material softening with amplitude, observed across the array as the source amplitude is increased. The observed peak shifts are consistent with laboratory measurements of soil nonlinearity. Our results provide constraints for future numerical modeling studies of strong ground motion during earthquakes.
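The homodyne measurement of a spectral amplitude can be sketched directly: mix the record with quadrature reference sinusoids at the frequency of interest and average. This is a textbook homodyne detector, not the authors' exact algorithm; all signal parameters below are invented.

```python
import numpy as np

def homodyne_amplitude(x, fs, f):
    """Spectral amplitude of x at frequency f via homodyne detection:
    mix with quadrature reference sinusoids and average."""
    t = np.arange(x.size) / fs
    i = np.mean(x * np.cos(2 * np.pi * f * t))   # in-phase component
    q = np.mean(x * np.sin(2 * np.pi * f * t))   # quadrature component
    return 2.0 * np.hypot(i, q)

# A 30 Hz tone of amplitude 0.5 buried in unit-variance noise, standing in
# for a receiver recording near the vibrator.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
x = 0.5 * np.sin(2 * np.pi * 30.0 * t + 0.7) + rng.standard_normal(t.size)

amp = homodyne_amplitude(x, fs, 30.0)
```

Because the averaging rejects off-frequency noise in proportion to the record length, the estimate stays robust even when the tone is well below the noise floor, which is what makes small amplitude-dependent peak shifts measurable.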
NASA Astrophysics Data System (ADS)
Marc, O.; Hovius, N.; Meunier, P.; Rault, C.
2017-12-01
In tectonically active areas, earthquakes are an important trigger of landslides, with significant impact on the evolution of hillslopes and rivers. However, detailed prediction of landslide locations and properties for a given earthquake remains difficult. In contrast, we propose landscape-scale analytical predictions of bulk coseismic landsliding, that is, the total landslide area and volume (Marc et al., 2016a), as well as the regional area within which most landslides must be distributed (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and could therefore be implemented in landscape evolution models aiming to engage with erosion dynamics at the scale of the seismic cycle. To assess the model, we compiled and normalized estimates of the total landslide volume, the total landslide area, and the regional area affected by landslides for 40, 17 and 83 earthquakes, respectively. We found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model predicts the landslide areas and associated volumes within a factor of 2 for about 70% of the cases in our databases. The prediction of the regional area affected does not require calibration for landscape steepness and falls within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories, we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment. Some of the outliers seem related to exceptional rock-mass strength in the epicentral area, or to shaking duration and other seismic source complexities ignored by the model.
Applications include predicting the mass balance of earthquakes: the model predicts that only earthquakes generated on a narrow range of fault sizes may cause more erosion than uplift (Marc et al., 2016b), while very large earthquakes are expected to always build topography. The model could also be used to physically calibrate hillslope erosion or perturbations to the river network within landscape evolution models.
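The factor-of-2 agreement metric used to assess the model above can be computed in a few lines. The predicted and observed volumes below are made-up illustrative numbers, not values from the compiled databases.

```python
import numpy as np

def within_factor(pred, obs, factor=2.0):
    """Fraction of predictions lying within a multiplicative factor of the
    observations (the factor-of-2 criterion when factor=2)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ratio = pred / obs
    return np.mean((ratio >= 1.0 / factor) & (ratio <= factor))

pred = np.array([1.2e6, 4.0e5, 9.0e6, 2.0e5])  # hypothetical predicted volumes (m^3)
obs  = np.array([1.0e6, 1.0e6, 4.0e6, 1.0e5])  # hypothetical mapped volumes (m^3)
print(within_factor(pred, obs))  # ratios 1.2, 0.4, 2.25, 2.0 -> 0.5
```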
Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling
NASA Astrophysics Data System (ADS)
Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea
2017-12-01
The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years, four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, given the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties of the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this end, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as tightly as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.
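The spectral ratio technique applied to these clusters can be illustrated with a minimal sketch: for two co-located events recorded at the same station, dividing the amplitude spectra cancels the common path and site terms, leaving the ratio of the two source spectra. The synthetic records and the water-level parameter here are assumptions for the demonstration.

```python
import numpy as np

def spectral_ratio(rec_large, rec_small, fs, nfft=4096, water_level=1e-10):
    """Amplitude spectral ratio between two co-located events recorded at the
    same station; the low-frequency plateau approximates the moment ratio."""
    f = np.fft.rfftfreq(nfft, 1.0 / fs)
    s1 = np.abs(np.fft.rfft(rec_large, nfft))
    s2 = np.abs(np.fft.rfft(rec_small, nfft))
    return f, s1 / np.maximum(s2, water_level)  # water level avoids division by ~0

# Sanity check with synthetic records: identical waveforms scaled by 10
rng = np.random.default_rng(0)
small = rng.normal(size=2048)
f, ratio = spectral_ratio(10.0 * small, small, fs=100.0)  # ratio ≈ 10 at all f
```

In practice the smaller event acts as an empirical Green's function, and fitting the ratio's shape constrains the corner frequencies of both events.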
NASA Astrophysics Data System (ADS)
Monsalve-Jaramillo, Hugo; Valencia-Mina, William; Cano-Saldaña, Leonardo; Vargas, Carlos A.
2018-05-01
Source parameters of four earthquakes located within the Wadati-Benioff zone of the Nazca plate subducting beneath the South American plate in Colombia were determined. The seismic moments for these events were recalculated, and their approximate equivalent rupture areas, slip distributions and stress drops were estimated. The source parameters for these earthquakes were obtained by deconvolving multiple events through teleseismic analysis of body waves recorded at long-period stations and by simultaneous inversion of P and SH waves. The calculated source time functions for these events showed different stages, suggesting that these earthquakes can reasonably be thought of as being composed of two subevents. Even though two of the overall focal mechanisms obtained yielded results similar to those reported in the CMT catalogue, the two other mechanisms showed a clear difference from those officially reported. Despite this, it is appropriate to mention that the mechanisms inverted in this work agree well with the expected orientation of faulting at that depth, as well as with the waveforms they are expected to produce. In some of the solutions achieved, one of the two subevents exhibited a focal mechanism considerably different from the total earthquake mechanism; this could be interpreted as the result of a slight deviation from the overall motion due to the complex stress field, or of a combination of different sources of energy release analogous to those that may occur in deeper earthquakes. In those cases, the subevents with a focal mechanism very different from the total earthquake mechanism had little contribution to the final solution and thus to the total amount of energy released.
NASA Astrophysics Data System (ADS)
Kaneko, Y.; Francois-Holden, C.; Hamling, I. J.; D'Anastasio, E.; Fry, B.
2017-12-01
The 2016 M7.8 Kaikōura (New Zealand) earthquake generated ground motions over 1 g across a 200-km-long region and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution during the Kaikōura earthquake using multiple kinematic modelling methods based on local strong-motion and high-rate GPS data. Our kinematic models constrained by near-source data capture, in detail, a complex pattern of slowly (Vr < 2 km/s) propagating rupture from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, about 60 seconds after the origin time. Interestingly, both models indicate rupture re-activation on the Kekerengu fault with a time separation of 11 seconds. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.
NASA Astrophysics Data System (ADS)
Lal, Sohan; Joshi, A.; Sandeep; Tomer, Monu; Kumar, Parveen; Kuo, Chun-Hsiang; Lin, Che-Min; Wen, Kuo-Liang; Sharma, M. L.
2018-05-01
On 25 April 2015, a hazardous earthquake of moment magnitude 7.9 occurred in Nepal. The earthquake was recorded by accelerographs installed in the Kumaon region of the Himalayan state of Uttarakhand, at distances of about 420-515 km from the epicenter. A modified semi-empirical technique for modeling finite faults has been used in this paper to simulate the strong ground motion at these stations. Source parameters of a Nepal aftershock were also calculated using the Brune model in the present study and were then used in the modeling of the main shock. The values of the seismic moment and stress drop obtained for the aftershock from the Brune model are 8.26 × 10^25 dyn cm and 10.48 bar, respectively. The simulated earthquake time series were compared with the observed records, and the comparison of the full waveforms and their response spectra was used to finalize the rupture parameters and the rupture location. Based on this comparison, the rupture propagated in the NE-SW direction from the hypocenter, with a rupture velocity of 3.0 km/s, from a point 80 km from Kathmandu in the NW direction at a depth of 12 km.
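The Brune-model source parameters quoted above follow from standard relations between seismic moment, corner frequency, and source radius. A minimal sketch; the corner frequency and shear-wave velocity below are assumed for illustration, not taken from the paper.

```python
import numpy as np

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0, k=2.34):
    """Brune-model stress drop from seismic moment (N m) and corner frequency:
    source radius r = k * beta / (2 * pi * fc); stress drop = 7 * M0 / (16 * r**3)."""
    r = k * beta_ms / (2.0 * np.pi * fc_hz)  # source radius in m
    return 7.0 * m0_nm / (16.0 * r ** 3)     # stress drop in Pa

# The abstract's moment, 8.26e25 dyn cm, equals 8.26e18 N m; the corner
# frequency and shear velocity here are assumed values.
delta_sigma = brune_stress_drop(8.26e18, fc_hz=0.12)
print(delta_sigma / 1e5)  # stress drop expressed in bar
```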
NASA Astrophysics Data System (ADS)
Ichinose, G. A.; Saikia, C. K.
2007-12-01
We applied the moment tensor (MT) analysis scheme to identify seismic sources using regional seismograms, based on the representation theorem for the elastic wave displacement field. This method is applied to estimate the isotropic (ISO) and deviatoric MT components of earthquake, volcanic, and isotropic sources within the Basin and Range Province (BRP) and the western US. The ISO components from Hoya, Bexar, Montello and Junction were compared to recent, well-recorded earthquakes near Little Skull Mountain, Scotty's Junction, Eureka Valley, and Fish Lake Valley within southern Nevada. We also examined "dilatational" sources near Mammoth Lakes Caldera and two mine collapses, including the August 2007 event in Utah recorded by USArray. Using our formulation, we first implemented the full MT inversion method on long-period filtered regional data. We also applied a grid-search technique to solve for the percent deviatoric and percent ISO moments. By using the grid-search technique, high-frequency waveforms can be used with calibrated velocity models. We modeled the ISO and deviatoric components (spall and tectonic release) as separate events delayed in time or offset in space. Calibrated velocity models helped the resolution of the ISO components and decreased the variance relative to the average, initial or background velocity models. The centroid location and time shifts are velocity-model dependent. Models can be improved, as was done in previously published work in which we used an iterative waveform inversion method with regional seismograms from four well-recorded and well-constrained earthquakes. The resulting velocity models reduced the variance between observed and predicted synthetics by about 50 to 80% for frequencies up to 0.5 Hz. Tests indicate that the individual path-specific models perform better at recovering the earthquake MT solutions, even with a sparser distribution of stations, than the average or initial models.
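A simple trace-based split of a moment tensor into its ISO and deviatoric parts, of the kind underlying such inversions, can be sketched as follows. The percent-ISO measure used here is one common convention, not necessarily the authors' exact definition.

```python
import numpy as np

def iso_deviatoric(M):
    """Split a symmetric 3x3 moment tensor into isotropic and deviatoric parts
    and return a simple trace-based percent-ISO measure."""
    M = np.asarray(M, float)
    m_iso = np.trace(M) / 3.0                 # isotropic scalar moment
    M_dev = M - m_iso * np.eye(3)             # traceless deviatoric remainder
    m_dev = np.max(np.abs(np.linalg.eigvalsh(M_dev)))
    pct_iso = 100.0 * abs(m_iso) / (abs(m_iso) + m_dev)
    return m_iso * np.eye(3), M_dev, pct_iso

# A purely isotropic (explosion-like) source gives 100% ISO:
_, _, p = iso_deviatoric(np.eye(3) * 1e15)
print(p)  # 100.0
```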
NASA Astrophysics Data System (ADS)
Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.
2017-05-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiencies η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in fluid pressure diffusion.
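The ω^γ source model mentioned above generalizes the omega-square spectrum by letting the high-frequency fall-off exponent vary. A minimal fitting sketch in log-amplitude space; the synthetic spectrum, noise level, and parameter bounds are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_source_spectrum(f, log_omega0, fc, gamma):
    """log10 of a generalized omega-gamma source spectrum,
    S(f) = Omega0 / (1 + (f/fc)**gamma); gamma = 2 recovers omega-square."""
    return log_omega0 - np.log10(1.0 + (f / fc) ** gamma)

# Synthetic spectrum with gamma = 2.5 (falls off faster than omega-square)
f = np.logspace(-1, 1.5, 200)
rng = np.random.default_rng(1)
log_obs = log_source_spectrum(f, 17.0, 2.0, 2.5) + 0.01 * rng.normal(size=f.size)
popt, _ = curve_fit(log_source_spectrum, f, log_obs, p0=(16.0, 1.0, 2.0),
                    bounds=([10.0, 0.01, 0.5], [20.0, 10.0, 5.0]))
# popt[2] (gamma) is recovered close to 2.5
```

Fitting in log space keeps the high-frequency fall-off, which carries the γ information, from being swamped by the low-frequency plateau.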
Observed ground-motion variabilities and implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.
2016-12-01
One of the key challenges of seismology is to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms make it possible to decompose these residuals into between-event and within-event components. The between-event term quantifies all the residual effects of the source (e.g. stress drop) that are not accounted for by magnitude, the only source parameter of the model. Between-event residuals therefore provide a new and rather robust way to analyse the physical factors that control earthquake source properties and their variabilities. We will first show the correlation between classical stress drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West 2, RESORCE) and datasets from recent earthquake sequences (L'Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress drops, but also the variability of source properties that cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
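The between-event/within-event decomposition can be approximated, for illustration, by taking event-mean residuals. Production GMPE work uses mixed-effects regression, so this is a simplified sketch with made-up residual values.

```python
import pandas as pd

def decompose_residuals(df):
    """Approximate between-event / within-event split of GMPE residuals:
    the between-event term is the mean residual per earthquake, and the
    within-event term is what remains at each record."""
    between = df.groupby("event")["residual"].transform("mean")
    return df.assign(between=between, within=df["residual"] - between)

records = pd.DataFrame({
    "event":    ["A", "A", "A", "B", "B"],
    "residual": [0.30, 0.10, 0.20, -0.40, -0.20],
})
out = decompose_residuals(records)
# event A records share a between-event term of ~0.2; event B records ~ -0.3
```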
Michael, Andrew J.
2012-01-01
Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
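The Gutenberg-Richter clustering probability quoted above can be reproduced with a Reasenberg-Jones style rate integrated over the forecast window. The generic California parameters below (a = -1.67, b = 0.91, c = 0.05 day, p = 1.08) are commonly quoted values assumed here for illustration.

```python
import numpy as np
from scipy.integrate import quad

def rj_probability(mag_main, mag_target, t1, t2,
                   a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of M >= mag_target events in (t1, t2) days after a
    mainshock of magnitude mag_main (Reasenberg-Jones style rate), converted
    to the Poisson probability of at least one such event."""
    rate = lambda t: 10.0 ** (a + b * (mag_main - mag_target)) * (t + c) ** (-p)
    n, _ = quad(rate, t1, t2)
    return 1.0 - np.exp(-n)

# Probability of M >= 7 within 3 days of an ML 4.8 event, generic parameters
print(rj_probability(4.8, 7.0, 0.0, 3.0))  # ≈ 0.0009
```

With these generic parameters the result is of the same order as the Gutenberg-Richter figure discussed above; the characteristic-earthquake variant modifies the magnitude-frequency term rather than the Omori decay.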
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg with the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusion of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
NASA Astrophysics Data System (ADS)
Cocco, M.; Feuillet, N.; Nostro, C.; Musumeci, C.
2003-04-01
We investigate the mechanical interactions between tectonic faults and volcanic sources through elastic stress transfer and discuss the results of several applications to Italian active volcanoes. We first present stress modeling results that point to a two-way coupling between Vesuvius eruptions and historical earthquakes in the Southern Apennines, which allows us to provide a physical interpretation of their statistical correlation. We then explore the elastic stress interaction between historical eruptions at the Etna volcano and the largest earthquakes in Eastern Sicily and Calabria. We show that the large 1693 seismic event caused an increase of compressive stress along the rift zone, which can be associated with the lack of flank eruptions at Etna for about 70 years after the earthquake. Moreover, the largest Etna eruptions preceded the large 1693 seismic event by a few decades. Our modeling results clearly suggest that all these catastrophic events are tectonically coupled. We also investigate the effect of elastic stress perturbations on the instrumental seismicity caused by magma inflation at depth, both at Etna and at the Alban Hills volcano. In particular, we model the seismicity pattern at the Alban Hills volcano (central Italy) during a seismic swarm that occurred in 1989-90 and interpret it in terms of Coulomb stress changes caused by magmatic processes in an extensional tectonic stress field. We verify that the earthquakes occur in areas of Coulomb stress increase and that their faulting mechanisms are consistent with the stress perturbation induced by the volcanic source. Our results suggest a link between faults and volcanic sources, which we interpret as a tectonic coupling explaining the seismicity in a large area surrounding the volcanoes.
Comprehensive Areal Model of Earthquake-Induced Landslides: Technical Specification and User Guide
Miles, Scott B.; Keefer, David K.
2007-01-01
This report describes the complete design of a comprehensive areal model of earthquake-induced landslides (CAMEL). It presents the design process and technical specification of CAMEL, and provides a guide to using the CAMEL source code and the template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale model of earthquake-induced landslide hazard developed using fuzzy logic systems. CAMEL currently estimates the areal landslide concentration (number of landslides per square kilometer) of six aggregated types of earthquake-induced landslides, three types each for rock and soil.
NASA Astrophysics Data System (ADS)
Tonini, R.; Maesano, F. E.; Tiberti, M. M.; Romano, F.; Scala, A.; Lorito, S.; Volpe, M.; Basili, R.
2017-12-01
The geometry of seismogenic sources is one of the most important factors controlling the generation and propagation of earthquake-generated tsunamis and their effects on the coasts. Since the majority of potentially tsunamigenic earthquakes occur offshore, the corresponding faults are generally poorly constrained and, consequently, their geometry is often oversimplified as a planar fault. The rupture area of mega-thrust earthquakes in subduction zones, where most of the greatest tsunamis have occurred, extends for tens to hundreds of kilometers both down dip and along strike, and generally deviates from a planar geometry. Therefore, the larger the earthquake, the weaker the planar fault assumption becomes. In this work, we present a sensitivity analysis aimed at exploring the effects on modeled tsunamis of seismic sources with different degrees of geometric complexity. We focus on the Calabrian subduction zone, located in the Mediterranean Sea, which is characterized by the convergence between the African and European plates at rates of up to 5 mm/yr. This subduction zone is considered to have generated some past large earthquakes and tsunamis, although it shows significant in-slab seismic activity only below 40 km depth and no relevant seismicity in the shallower portion of the interface. Our analysis is performed by defining and modeling an exhaustive set of tsunami scenarios located in the Calabrian subduction zone, using models of the subduction interface with increasing geometrical complexity, from a planar surface to a highly detailed 3D surface. The latter was obtained from the interpretation of a dense network of seismic reflection profiles coupled with the analysis of the seismicity distribution. The most relevant effects of including 3D complexities in the seismic source geometry are finally highlighted in terms of the resulting tsunami impact.
Tectonic evolution of the Mexico flat slab and patterns of intraslab seismicity.
NASA Astrophysics Data System (ADS)
Moresi, L. N.; Sandiford, D.
2017-12-01
The Cocos plate slab is horizontal for about 250 km beneath the Guerrero region of southern Mexico. Analogous morphologies can spontaneously develop in subduction models through the presence of a low-viscosity mantle wedge. The Mw 7.1 Puebla earthquake appears to have ruptured the inboard corner of the Mexican flat slab, likely in close proximity to the mantle wedge corner. In addition to the historical seismic record, the Puebla earthquake provides a valuable constraint through which to assess geodynamic models for flat slab evolution. Slab deformation predicted by the "weak wedge" model is consistent with past seismicity in both the upper plate and the slab. Below the flat section, the slab is anomalously warm relative to its depth; the lack of seismicity in the deeper part of the slab fits the global pattern of temperature-controlled slab seismicity. This has implications for understanding the deeper structure of the slab, including the seismic hazard from source regions downdip of the Puebla rupture (epicenters closer to Mexico City). While historical seismicity provides a deformation pattern consistent with the weak wedge model, the Puebla earthquake is somewhat anomalous: its source mechanism is consistent with stress orientations in our models, but it maps to a region of relatively low deviatoric stress.
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region, and some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low-frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations.
This result indicates the necessity of improving the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
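The combination of an omega-square source spectrum with a path spectrum and a site factor, as used in the pseudo point-source model, can be sketched schematically. All numerical constants below (Q, shear velocity, density, moment, distances) are assumed illustrative values, and radiation-pattern and free-surface coefficients are omitted for simplicity.

```python
import numpy as np

def fourier_amplitude(f, m0, fc, r, q0=100.0, beta=3500.0, rho=2700.0, site=1.0):
    """Acceleration Fourier amplitude: omega-square source spectrum times a
    path term (1/r spreading and anelastic attenuation) times a site factor."""
    const = (2.0 * np.pi * f) ** 2 / (4.0 * np.pi * rho * beta ** 3)  # displacement -> acceleration scaling
    source = m0 / (1.0 + (f / fc) ** 2)              # omega-square source spectrum
    path = np.exp(-np.pi * f * r / (q0 * beta)) / r  # attenuation and geometric spreading
    return const * source * path * site

# Amplitude at 1 Hz for an Mw ~6 subevent (M0 = 1e18 N m) at 30 km and 60 km
a_near = fourier_amplitude(1.0, 1e18, fc=0.3, r=30e3)
a_far = fourier_amplitude(1.0, 1e18, fc=0.3, r=60e3)
```

In the actual method the empirical site amplification factor and empirical phase replace the unit site term and zero phase assumed here, which is what makes the synthesized waveforms realistic.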
NASA Astrophysics Data System (ADS)
Kuriyama, M.; Kumamoto, T.; Fujita, M.
2005-12-01
The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To determine the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, rupture started at the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-NET.
This difference is considered to be related to the directivity effect associated with the direction of rupture propagation. Moreover, the horizontal velocities obtained by assuming the cascade model were underestimated by more than one standard deviation of the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference for the case in which the rupture started along the southeastern edge of the Umehara Fault, at observation point GIF020. This difference is significantly large in comparison with the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that the outer fault parameters (e.g., seismic moment) used to construct scenario earthquakes influence strong motion prediction more than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for approximately 140 to 150 km of the Itoigawa-Shizuoka Tectonic Line.
Mechanism of the 2015 volcanic tsunami earthquake near Torishima, Japan
Fukao, Yoshio; Sandanbata, Osamu; Sugioka, Hiroko; Ito, Aki; Shiobara, Hajime; Watada, Shingo; Satake, Kenji
2018-01-01
Tsunami earthquakes are a group of enigmatic earthquakes generating disproportionally large tsunamis relative to seismic magnitude. These events occur most typically near deep-sea trenches. Tsunami earthquakes occurring approximately every 10 years near Torishima on the Izu-Bonin arc are another example. Seismic and tsunami waves from the 2015 event [Mw (moment magnitude) = 5.7] were recorded by an offshore seafloor array of 10 pressure gauges, ~100 km away from the epicenter. We made an array analysis of dispersive tsunamis to locate the tsunami source within the submarine Smith Caldera. The tsunami simulation from a large caldera-floor uplift of ~1.5 m with a small peripheral depression yielded waveforms remarkably similar to the observations. The estimated central uplift, 1.5 m, is ~20 times larger than that inferred from the seismologically determined non-double-couple source. Thus, the tsunami observation is not compatible with the published seismic source model taken at face value. However, given the indeterminacy of M_zx, M_zy, and M_tensile of a shallow moment tensor source, it may be possible to find a source mechanism with efficient tsunami but inefficient seismic radiation that can satisfactorily explain both the tsunami and seismic observations, but this question remains unresolved. PMID:29740604
NASA Astrophysics Data System (ADS)
Zettergren, M. D.; Snively, J. B.; Inchin, P.; Komjathy, A.; Verkhoglyadova, O. P.
2017-12-01
Ocean and solid earth responses during earthquakes are a significant source of large amplitude acoustic and gravity waves (AGWs) that perturb the overlying ionosphere-thermosphere (IT) system. IT disturbances are routinely detected following large earthquakes (M > 7.0) via GPS total electron content (TEC) observations, which often show acoustic wave (~3-4 min periods) and gravity wave (~10-15 min) signatures with amplitudes of 0.05-2 TECU. In cases of very large earthquakes (M > 8.0) the persisting acoustic waves are estimated to have 100-200 m/s compressional velocities in the conducting ionospheric E and F-regions and should generate significant dynamo currents and magnetic field signatures. Indeed, some recent reports (e.g. Hao et al., 2013, JGR, 118, 6) show evidence for magnetic fluctuations, which appear to be related to AGWs, following recent large earthquakes. However, very little quantitative information is available on: (1) the detailed spatial and temporal dependence of these magnetic fluctuations, which are usually observed at a small number of irregularly arranged stations, and (2) the relation of these signatures to TEC perturbations in terms of relative amplitudes, frequency, and timing for different events. This work investigates space- and time-dependent behavior of both TEC and magnetic fluctuations following recent large earthquakes, with the aim to improve physical understanding of these perturbations via detailed, high-resolution, two- and three-dimensional modeling case studies with a coupled neutral atmospheric and ionospheric model, MAGIC-GEMINI (Zettergren and Snively, 2015, JGR, 120, 9). We focus on cases inspired by the large Chilean earthquakes from the past decade (viz., the M > 8.0 earthquakes from 2010 and 2015) to constrain the sources for the model, i.e. size, frequency, amplitude, and timing, based on available information from ocean buoy and seismometer data.
TEC data are used to validate source amplitudes and to constrain background ionospheric conditions. Preliminary comparisons against available magnetic field and TEC data from these events provide evidence, albeit limited and localized, that support the validity of the spatially-resolved simulation results.
NASA Astrophysics Data System (ADS)
Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.
2016-12-01
Simulation of earthquake ground motions is becoming more widely used due to improvements in numerical methods, development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth-order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including the Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions from moderate earthquakes to investigate basin-edge generated waves, and from large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure, as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, one of the new systems for capacity computing at the DOE National Labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of Linux clusters.
Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data
NASA Astrophysics Data System (ADS)
Funning, G. J.; Cockett, R.
2012-12-01
InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. 
Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median models. We envisage that the ensemble of contributed models will be useful both as a research resource and in the classroom. Locations of earthquakes derived from InSAR data have already been demonstrated to differ significantly from those obtained from global seismic networks (Weston et al., 2011), and the locations obtained by our users will enable us to identify systematic mislocations that are likely due to errors in Earth velocity models used to locate earthquakes. If the tool is incorporated into geophysics, tectonics and/or structural geology classes, in addition to familiarizing students with InSAR and elastic deformation modeling, the spread of different results for each individual earthquake will allow the teaching of concepts such as model uncertainty and non-uniqueness when modeling real scientific data. Additionally, the process students go through to optimize their estimates of fault parameters can easily be tied into teaching about the concepts of forward and inverse problems, which are common in geophysics.
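The forward-modeling step such a tool performs can be sketched in miniature. A real InSAR modeling tool would use rectangular elastic dislocation (Okada) solutions for a finite fault; the sketch below substitutes a much simpler Mogi point source and an illustrative satellite geometry (the incidence and heading angles are assumptions, not the tool's values), just to show how a surface displacement field is computed and projected into the radar line of sight.

```python
import math

def mogi_displacement(x, y, depth, dV, nu=0.25):
    """Surface displacement (ux, uy, uz) from a Mogi point source at
    (0, 0, depth) with volume change dV, in a homogeneous elastic
    half-space with Poisson ratio nu. A toy stand-in for the
    rectangular-dislocation (Okada) solutions used in practice."""
    r2 = x * x + y * y
    R3 = (r2 + depth * depth) ** 1.5
    c = (1.0 - nu) * dV / math.pi
    return c * x / R3, c * y / R3, c * depth / R3

def los_projection(ux, uy, uz, incidence_deg=23.0, heading_deg=-12.0):
    """Project a 3-component displacement onto the satellite line of
    sight (positive = motion toward the satellite). The unit-vector
    convention here is illustrative for a right-looking SAR."""
    inc = math.radians(incidence_deg)
    head = math.radians(heading_deg)
    le = -math.sin(inc) * math.cos(head)   # east component of LOS
    ln = math.sin(inc) * math.sin(head)    # north component of LOS
    lu = math.cos(inc)                     # up component of LOS
    return ux * le + uy * ln + uz * lu
```

Scanning `mogi_displacement` over a grid and projecting with `los_projection` yields a model "interferogram" that can be visually compared against data, which is the matching exercise the tool asks students to perform.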
Source spectral properties of small-to-moderate earthquakes in southern Kansas
Trugman, Daniel T.; Dougherty, Sara L.; Cochran, Elizabeth S.; Shearer, Peter M.
2017-01-01
The source spectral properties of injection-induced earthquakes give insight into their nucleation, rupture processes, and influence on ground motion. Here we apply a spectral decomposition approach to analyze P-wave spectra and estimate Brune-type stress drop for more than 2000 ML 1.5–5.2 earthquakes occurring in southern Kansas from 2014 to 2016. We find that these earthquakes are characterized by low stress drop values (median ∼0.4 MPa) compared to natural seismicity in California. We observe a significant increase in stress drop as a function of depth, but the shallow depth distribution of these events is not by itself sufficient to explain their lower stress drop. Stress drop increases with magnitude from M1.5 to M3.5, but this scaling trend may weaken above M4 and also depends on the assumed source model. Although we observe a nonstationary, sequence-specific temporal evolution in stress drop, we find no clear systematic relation with the activity of nearby injection wells.
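The Brune-type stress drop estimates described above follow a standard chain of formulas: a corner frequency fc gives a source radius, and the radius plus seismic moment gives the stress drop. A minimal sketch, assuming a generic crustal shear-wave speed of 3.5 km/s rather than the study's velocity model:

```python
import math

def brune_source_radius(fc_hz, beta_m_s=3500.0):
    """Brune (1970) source radius: r = 2.34 * beta / (2*pi*fc)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop(M0_Nm, fc_hz, beta_m_s=3500.0):
    """Brune-type stress drop in Pa: delta_sigma = 7*M0 / (16*r^3)."""
    r = brune_source_radius(fc_hz, beta_m_s)
    return 7.0 * M0_Nm / (16.0 * r ** 3)

def moment_from_magnitude(mw):
    """Hanks & Kanamori (1979): log10(M0 [N*m]) = 1.5*Mw + 9.05."""
    return 10.0 ** (1.5 * mw + 9.05)
```

For example, an M 2.5 event with an 8 Hz corner frequency gives a stress drop of roughly 0.6 MPa, in the same ballpark as the ~0.4 MPa median reported above.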
Earthquake Forecasting System in Italy
NASA Astrophysics Data System (ADS)
Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.
2017-12-01
In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). The OEF system uses the two most popular short-term models: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence in the Central Apennines (Italy).
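The ETAS model named above describes seismicity as a background rate plus Omori-Utsu aftershock triggering from every past event. A minimal temporal-only sketch with illustrative parameter values (the operational OEF system is spatiotemporal and uses regionally calibrated parameters):

```python
def etas_rate(t, events, mu=0.2, K=0.05, alpha=1.5, c=0.01, p=1.1, m0=3.0):
    """Conditional intensity of a purely temporal ETAS model
    (events/day at time t, in days): background rate mu plus an
    Omori-Utsu contribution from each past event (t_i, m_i), with
    productivity growing exponentially with magnitude. All parameter
    values here are placeholders for illustration."""
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += K * 10.0 ** (alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return rate
```

The characteristic behavior is visible immediately: the rate jumps after a large event and then decays back toward the background level as a power law of elapsed time.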
Seismological investigation of September 09 2016, North Korea underground nuclear test
NASA Astrophysics Data System (ADS)
Gaber, H.; Elkholy, S.; Abdelazim, M.; Hamama, I. H.; Othman, A. S.
2017-12-01
On Sep. 9, 2016, a seismic event of mb 5.3 took place in North Korea. This event was reported as a nuclear test. In this study, we applied a number of discriminant techniques that help distinguish between explosions and earthquakes on the Korean Peninsula. The differences between explosions and earthquakes are due to variations in source dimension, source depth and source mechanism, or a combination of them. There are many seismological differences between nuclear explosions and earthquakes, but not all of them are detectable at large distances or applicable to every earthquake and explosion. The discrimination methods used in the current study include seismic source location, source depth, differences in frequency content, complexity versus spectral ratio, and Ms-mb differences for both earthquakes and explosions. The Sep. 9, 2016, event is located in the region of the North Korean nuclear test site at zero depth, which is consistent with a nuclear explosion. Comparison between the P wave spectra of the nuclear test and the Sep. 8, 2000, North Korea earthquake (mb 4.9) shows that the spectra of the two events are nearly the same. The results of applying the theoretical Brune model to the P wave spectra of both the explosion and the earthquake show that the explosion exhibits a larger corner frequency than the earthquake, reflecting the different nature of the sources. The complexity and spectral ratio were also calculated from waveform data recorded at a number of stations in order to investigate the relation between them. The observed classification percentage of this method is about 81%. Finally, the mb:Ms method is also investigated. We calculate mb and Ms for the Sep. 9, 2016, explosion and compare the result with the mb:Ms chart obtained from previous studies. This method works well for the explosion.
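The mb:Ms discriminant mentioned above exploits the fact that explosions are deficient in surface-wave magnitude (Ms) relative to body-wave magnitude (mb). A toy classifier is sketched below; the slope and intercept of the decision line are purely illustrative assumptions, since operational discriminant lines are calibrated region by region and are not given in the abstract.

```python
def ms_mb_discriminant(mb, ms, slope=1.25, intercept=-2.2):
    """Classify an event as 'explosion' or 'earthquake' from the
    Ms:mb criterion: events falling below the line Ms = slope*mb +
    intercept are deficient in surface-wave energy and flagged as
    explosions. The decision line here is a hypothetical example."""
    return "explosion" if ms < slope * mb + intercept else "earthquake"
```

With this illustrative line, an mb 5.3 event with Ms 4.1 (a typical explosion-like deficit) plots on the explosion side, while an mb 4.9 earthquake with Ms 4.9 plots on the earthquake side.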
Defining the Relationship between Seismicity and Deformation at Regional and Local Scales
NASA Astrophysics Data System (ADS)
Williams, Nneka Njeri Akosua
In this thesis, I use source inversion methods to improve understanding of crustal deformation along the Nyainquentanglha (NQTL) Detachment in Southern Tibet and the Piceance Basin in northwestern Colorado. Broadband station coverage in both regions is sparse, necessitating the development of innovative approaches to source inversion for the purpose of studying local earthquakes. In an effort to study the 2002-2003 earthquake swarm and the 2008 Mw 6.3 Damxung earthquake and aftershocks that occurred in the NQTL region, we developed a single-station earthquake location inversion method called the SP Envelope method, to be used with data from LHSA at Lhasa, a broadband seismometer located 75 km away. A location is calculated by first rotating the seismogram to find the azimuth at which the envelope of the P-wave arrival on the T-component is smallest (its great circle path). The distance at which to place the location along this azimuth is measured by calculating the S-P distance from arrivals on the seismogram. When used in conjunction with an existing waveform-modeling-based source inversion method called Cut and Paste (CAP), a catalog of 40 regional earthquakes was generated. From these 40 earthquakes, a catalog of 30 earthquakes with the most certain locations was generated to study the relationship of seismicity to NQTL region faults mapped in Google Earth™ and in Armijo et al. (1986) and Kapp et al. (2005). Using these faults and focal mechanisms, a fault model of the NQTL region was generated using GOCAD, a 3D modeling suite. By studying the relationship of modeled faults to mapped fault traces at the surface, the most likely fault slip plane was chosen. These fault planes were then used to calculate slip vectors and a regional bulk stress tensor, with respect to which the low-angle NQTL Detachment was found to be badly misoriented.
The formation of low-angle normal faults is inconsistent with the Anderson Theory of faulting, and the presence of the NQTL Detachment in a region with such an incongruous stress field supports the notion that such faults are real. The timing and locations of the earthquakes in this catalog with respect to an anomalous increase in the eastward component of velocity readings at the single cGPS station in Lhasa (LHAS) were analyzed to determine the relationship between plastic and brittle deformation in the region. The fact that cGPS velocities slow significantly after the 2002-2003 earthquake swarm suggests that this motion is tectonic in nature, and it has been interpreted as only the second continental slow slip event (SSE) ever to be observed. The observation of slow slip followed by an earthquake swarm within a Tibetan rift suggests that other swarms observed within similar rifts in the region are related to SSEs. In the Piceance Basin, CAP was used to determine source mechanisms of microearthquakes triggered as a result of fracture stimulation within a tight gas reservoir. The expense of drilling monitor wells and installing borehole geophones reduces the azimuthal station coverage, thus making it difficult to determine source mechanisms of microearthquakes using more traditional methods. For high signal to noise ratio records, CAP produced results on par with those obtained in studies of regional earthquakes. This finding suggests that CAP could successfully be applied in studies of microseismicity when data quality is high.
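The S-P distance calculation at the heart of the SP Envelope method converts the arrival-time difference between S and P waves into an epicentral distance. A sketch assuming straight rays and generic crustal velocities (the thesis would use a regional velocity model, so these values are assumptions):

```python
def sp_distance_km(t_sp_s, vp=6.0, vs=3.46):
    """Epicentral distance (km) from the S-P arrival-time difference
    t_sp_s (seconds), assuming straight rays and uniform crustal
    velocities vp and vs (km/s): d = t_sp / (1/Vs - 1/Vp)."""
    return t_sp_s / (1.0 / vs - 1.0 / vp)
```

With these velocities, each second of S-P time corresponds to roughly 8 km of distance, so a 10 s S-P time places an event near 82 km, comparable to the 75 km station-to-swarm distance quoted above.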
NASA Astrophysics Data System (ADS)
Liu, Bo-Yan; Shi, Bao-Ping; Zhang, Jian
2007-05-01
In this study, a composite source model has been used to calculate realistic strong ground motions in the Beijing area caused by the 1679 MS 8.0 Sanhe-Pinggu earthquake. The results provide useful physical parameters for future seismic hazard analysis in this area. Considering the regional geological/geophysical background, we simulated the scenario earthquake and associated ground motions in the area ranging from 39.3°N to 41.1°N in latitude and from 115.35°E to 117.55°E in longitude. Some of the key factors that could influence the characteristics of strong ground motion are discussed, and the resultant peak ground acceleration (PGA) and peak ground velocity (PGV) distributions around the Beijing area have also been produced. A comparison of the simulated result with results derived from the attenuation relation has been made, and the advantages and disadvantages of the composite source model are also discussed. The numerical results, such as the PGA, PGV, peak ground displacement (PGD), and the three-component time histories developed for the Beijing area, have potential applications in earthquake engineering and building code design, especially for the evaluation of critical constructions, government decision making, and seismic hazard assessment by financial/insurance companies.
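The composite source idea, superposing subevents whose sizes follow a power law and whose corner frequencies scale with size, can be caricatured in a few lines. This is a sketch of the concept only, with made-up wavelets and parameters; a real composite source simulation scales subevent moments properly and convolves each subevent with Green's functions for the regional structure.

```python
import math, random

def composite_source_accelerogram(n_sub=50, dt=0.01, nt=4000, seed=1):
    """Toy composite-source synthetic: superpose damped-sinusoid
    wavelets from randomly timed subevents whose sizes follow a
    power law (Pareto). Larger subevents get lower corner
    frequencies, mimicking source scaling. Illustrative only."""
    rng = random.Random(seed)
    acc = [0.0] * nt
    for _ in range(n_sub):
        t0 = rng.uniform(0.0, 20.0)        # subevent rupture time (s)
        size = rng.paretovariate(1.5)      # power-law subevent size
        fc = 2.0 / size ** (1.0 / 3.0)     # bigger subevent -> lower fc
        for i in range(nt):
            t = i * dt - t0
            if t > 0.0:
                acc[i] += size * math.sin(2.0 * math.pi * fc * t) * math.exp(-fc * t)
    return acc
```

Summing many such records over a grid of sites, with appropriate scaling and propagation, is what produces the PGA/PGV maps described in the abstract.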
Inverse and Forward Modeling of The 2014 Iquique Earthquake with Run-up Data
NASA Astrophysics Data System (ADS)
Fuentes, M.
2015-12-01
The April 1, 2014 Mw 8.2 Iquique earthquake excited a moderate tsunami which triggered the national tsunami alert. This earthquake was located in the well-known seismic gap in northern Chile, which had a high seismic potential (~Mw 9.0) following the two large historic events of 1868 and 1877. Nonetheless, studies of the seismic source performed with seismic data inversions suggest that the event exhibited a main patch located around 19.8°S at 40 km depth, with a seismic moment equivalent to Mw = 8.2. Thus, a large seismic deficit remains in the gap, capable of releasing an event of Mw = 8.8-8.9. To understand the importance of the tsunami threat in this zone, a seismic source modeling of the Iquique earthquake is performed. A new approach based on stochastic k2 seismic sources is presented. A set of these sources is generated and, for each one, a full numerical tsunami model is run to obtain the run-up heights along the coastline. The results are compared with the available field run-up measurements and with the tide gauges that registered the signal. The comparison is not uniform: discrepancies close to the peak run-up location are penalized more heavily. This criterion identifies the seismic source from the set of scenarios that best explains the observations in a statistical sense. On the other hand, an L2 norm minimization is used to invert the seismic source by comparing the peak nearshore tsunami amplitude (PNTA) with the run-up observations. This method searches a space of solutions for the best seismic configuration by retrieving the Green's function coefficients that explain the field measurements. The results obtained confirm that a concentrated down-dip slip patch adequately models the run-up data.
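A stochastic k2 source assigns the slip spectrum an amplitude falloff of k^-2 with random phases, so each realization is a different plausible slip distribution with the same spectral character. A one-dimensional sketch is given below (the study generates two-dimensional slip distributions over the fault plane; the grid size and normalization here are illustrative):

```python
import cmath, math, random

def k2_slip_1d(n=64, seed=0):
    """1-D stochastic slip with a k^-2 amplitude spectrum and random
    phases: build a Hermitian-symmetric spectrum (so slip is real),
    inverse-DFT it, and shift so slip is non-negative. A sketch of
    the k-squared source idea in one dimension."""
    rng = random.Random(seed)
    spec = [0j] * n
    for k in range(1, n // 2):
        amp = 1.0 / k ** 2                       # k^-2 amplitude falloff
        phase = rng.uniform(0.0, 2.0 * math.pi)  # random phase
        spec[k] = amp * cmath.exp(1j * phase)
        spec[n - k] = spec[k].conjugate()        # keep slip real-valued
    # naive O(n^2) inverse DFT, for clarity rather than speed
    slip = []
    for x in range(n):
        s = sum(spec[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n))
        slip.append(s.real)
    slip_min = min(slip)
    return [s - slip_min for s in slip]          # shift to non-negative slip
```

Each random seed yields one member of the scenario set; running the tsunami model for every member and scoring the run-up misfit is the Monte Carlo search the abstract describes.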
Seismic hazard analysis with PSHA method in four cities in Java.
NASA Astrophysics Data System (ADS)
Elistyawati, Y.; Palupi, I. R.; Suharsono
2016-11-01
In this study, tectonic earthquake hazard was assessed in terms of peak ground acceleration using the PSHA method, by dividing the region into earthquake source zones. The study applied earthquake data from 1965 to 2015, analyzed for catalog completeness; the study area was the whole of Java, with emphasis on four large earthquake-prone cities. The results are hazard maps for return periods of 500 and 2500 years, and hazard curves for four major cities (Jakarta, Bandung, Yogyakarta, and Banyuwangi). The 500-year PGA hazard map for Java shows values ranging from 0 g to ≥0.5 g, while the 2500-year map ranges from 0 g to ≥0.8 g. According to the PGA hazard curves, the most influential earthquake source for Jakarta is the Cimandiri fault background source, and for Bandung a background fault source. For Yogyakarta, the most influential source is the Opak fault background source, and for Banyuwangi it is the Java and Sumba megathrust sources.
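The return periods quoted above map onto exceedance probabilities through the Poisson assumption used in PSHA. For example, the conventional 475-year return period corresponds to roughly a 10% probability of exceedance in 50 years:

```python
import math

def poisson_exceedance(rate_per_yr, t_years):
    """Probability of at least one exceedance in t_years for a
    Poisson process with the given annual exceedance rate:
    P = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate_per_yr * t_years)

def rate_from_return_period(rp_years):
    """Annual exceedance rate implied by a return period."""
    return 1.0 / rp_years
```

Usage: `poisson_exceedance(rate_from_return_period(475.0), 50.0)` evaluates to about 0.10, which is why "475-year" maps are often labeled "10% in 50 years".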
The 2014 update to the National Seismic Hazard Model in California
Powers, Peter; Field, Edward H.
2015-01-01
The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.
SEISRISK II; a computer program for seismic hazard estimation
Bender, Bernice; Perkins, D.M.
1982-01-01
The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
Earthquake Potential Models for China
NASA Astrophysics Data System (ADS)
Rong, Y.; Jackson, D. D.
2002-12-01
We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record.
For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
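The modified Gutenberg-Richter distribution with a corner magnitude, as used in all three models, is commonly written as a tapered Pareto law in seismic moment. A sketch with placeholder parameter values (the study's fitted a-, b-, and corner values are not given here):

```python
import math

def moment_Nm(mw):
    """Hanks-Kanamori seismic moment (N*m) from moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.05)

def tapered_gr_ccdf(M0, M0_min, M0_corner, beta=0.67):
    """Tapered Gutenberg-Richter survival function: the fraction of
    events with seismic moment >= M0, given a minimum moment M0_min
    and a corner moment M0_corner. beta is approximately 2b/3, so
    beta = 0.67 corresponds to b = 1."""
    return (M0_min / M0) ** beta * math.exp((M0_min - M0) / M0_corner)
```

Below the corner the curve follows the familiar power law; the exponential factor produces the "strong decrease of earthquake rate with magnitude" near and above the corner magnitude that the abstract describes.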
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
NASA Astrophysics Data System (ADS)
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be substantially improved.
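The bootstrapping check mentioned above resamples the data with replacement to measure how stable an estimate is. A generic percentile-bootstrap sketch is shown here applied to a simple mean; the paper applies the same idea to CAPjoint source parameters (strike, moment, depth) by resampling the stations or waveform segments entering the inversion.

```python
import random, statistics

def bootstrap_ci(samples, stat=statistics.mean, n_boot=2000, seed=42):
    """Percentile bootstrap: resample the data with replacement
    n_boot times, recompute the statistic each time, and return the
    (2.5%, 97.5%) interval of the resulting distribution."""
    rng = random.Random(seed)
    n = len(samples)
    stats = sorted(stat([rng.choice(samples) for _ in range(n)])
                   for _ in range(n_boot))
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]
```

A narrow interval that barely moves as stations are added or removed is the "stable and reliable" behavior the abstract reports for CAPjoint.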
NASA Astrophysics Data System (ADS)
Gomez-Gonzalez, J. M.; Mellors, R.
2007-05-01
We investigate the kinematics of the rupture process for the September 27, 2003, Mw 7.3 Altai earthquake and its associated large aftershocks. This is the largest earthquake to strike the Altai mountains in the last 50 years, and it provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicates a predominantly strike-slip event (strike=130, dip=75, rake=170). The scalar moment for the main shock ranges from 0.688 to 1.196E+20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130-270 km. The main shock was closely followed by two aftershocks (Mw 5.7 and Mw 6.4) on the same day; another aftershock (Mw 6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw 6.4) to assess geometric similarities in their respective rupture processes. This aftershock occurred spatially very close to the mainshock and possesses a similar fault plane solution (strike=128, dip=71, rake=154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment and source duration, based on point and finite source modeling. The point source approximation results are the starting parameters for the finite source exploration. We evaluate the different reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.
Methodology to determine the parameters of historical earthquakes in China
NASA Astrophysics Data System (ADS)
Wang, Jian; Lin, Guoliang; Zhang, Zhe
2017-12-01
China is one of the countries with the longest cultural tradition. Meanwhile, China has suffered very heavy earthquake disasters, so there are abundant earthquake records. In this paper, we sketch out historical earthquake sources and research achievements in China. We introduce some basic information about the collections of historical earthquake sources, the establishment of an intensity scale, and the editions of historical earthquake catalogues. Spatial-temporal and magnitude distributions of historical earthquakes are analyzed briefly. Besides traditional methods, we also illustrate a new approach to amend the parameters of historical earthquakes or even identify candidate zones for large historical or palaeo-earthquakes. In the new method, a relationship between instrumentally recorded small earthquakes and strong historical earthquakes is built up. Abundant historical earthquake sources and the achievements of historical earthquake research in China are a valuable cultural heritage.
NASA Astrophysics Data System (ADS)
Zarola, Amit; Sil, Arjun
2018-04-01
This study presents forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood (ln L) was also estimated for all four probability distribution models; a higher value of ln L indicates a better-fitting model. The time of the future earthquake is forecasted by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), 13 April 2016 Myanmar earthquake (M 6.9) and 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area, showing that the proposed techniques and models yield good forecasting accuracy.
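The conditional probabilities described above follow directly from the fitted distribution's CDF: the probability of an event in the next interval, given quiescence so far, is a ratio of CDF differences. A sketch for the Weibull case, with placeholder shape and scale values rather than the fitted northeast-India parameters:

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull cumulative distribution: F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def conditional_probability(t_elapsed, dt, shape, scale):
    """P(event within the next dt | no event during the first
    t_elapsed) under a Weibull renewal model:
    [F(t+dt) - F(t)] / [1 - F(t)]."""
    f_t = weibull_cdf(t_elapsed, shape, scale)
    f_tdt = weibull_cdf(t_elapsed + dt, shape, scale)
    return (f_tdt - f_t) / (1.0 - f_t)
```

With shape = 1 the Weibull reduces to the exponential and the conditional probability is memoryless; shape > 1 makes the hazard grow with elapsed time, which is what motivates renewal models for earthquake recurrence.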
NASA Astrophysics Data System (ADS)
Aochi, Hideo
2014-05-01
The Marmara region (Turkey), along the North Anatolian fault, is known to have a high potential for large earthquakes in the coming decades. For the purpose of seismic hazard/risk evaluation, kinematic and dynamic source models have been proposed (e.g., Oglesby and Mai, GJI, 2012). In general, the simulated earthquake scenarios depend on the underlying hypotheses and cannot be verified before the expected earthquake. We therefore introduce a probabilistic insight into the initial/boundary conditions in order to statistically analyze the simulated scenarios. We prepare different fault geometry models, tectonic loadings, and hypocenter locations. We keep the same simulation framework as for the dynamic rupture process of the adjacent 1999 Izmit earthquake (Aochi and Madariaga, BSSA, 2003), as the previous models were able to reproduce the seismological and geodetic aspects of that event. Irregularities in fault geometry play a significant role in controlling the rupture progress, and a relatively large change in geometry may act as a barrier. The variety of the simulated earthquake scenarios should be useful for estimating the variability of the expected ground motion.
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and viscoelastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥ 6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥ 6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥ 6.7 earthquakes in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. 
Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the earthquake hazard and endorse the use of all credible earthquake probability models for the region, including the empirical model, with appropriate weighting, as was done in WGCEP (2002).
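The 30-yr Poisson probability quoted above follows from standard arithmetic: for a stationary rate λ of M ≥ 6.7 events, P(one or more in T years) = 1 − exp(−λT). A minimal sketch, with the rate back-computed for illustration (it is not stated in the abstract):

```python
# Poisson probability of one or more events in a time window.
import math

def prob_one_or_more(rate_per_yr: float, t_years: float) -> float:
    return 1.0 - math.exp(-rate_per_yr * t_years)

# A rate of ~0.0305 events/yr reproduces the WGCEP (1999, 2002) value of 0.60:
print(round(prob_one_or_more(0.0305, 30.0), 2))  # -> 0.6
```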
Earthquake Hazard and Risk in New Zealand
NASA Astrophysics Data System (ADS)
Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.
2014-12-01
To quantify risk in New Zealand we examine the impact of updating the seismic hazard model. The previous RMS New Zealand hazard model is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS model, based on Stirling et al. (2012), updates several key source parameters. These updates include: implementation of a new set of crustal faults including multi-segment ruptures, updated subduction zone geometry and recurrence rates, and new background rates with a robust methodology for modeling background earthquake sources. The number of crustal faults has increased by over 200 from the 2002 model to the 2012 model, which now includes over 500 individual fault sources. This includes the addition of many offshore faults in the northern, east-central, and southwestern regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated model with those from the previous version. Changes between the two maps are discussed, as well as the drivers for these changes. We then examine the impact the hazard model changes have on New Zealand earthquake risk. The risk metrics considered include the average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the risk exposure in the country (Auckland) lies in the region of lowest hazard, where little is known about the location of faults and distributed seismicity is modeled by averaged Mw-frequency relationships on area sources. 
Thus small changes to the background rates can have a large impact on the risk profile for the area. Wellington, another area of high exposure, is particularly sensitive to how the Hikurangi subduction zone and the Wellington fault are modeled. Minor changes to these sources have substantial impacts on the risk profile of the city and the country at large.
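The two risk metrics named above can be illustrated with a minimal Monte Carlo sketch: average annual loss (AAL) is the mean of simulated annual losses, and the loss exceedance probability (EP) curve is the fraction of simulated years exceeding each loss threshold. The loss distribution below is hypothetical, not New Zealand data.

```python
# Compute AAL and an EP curve from simulated annual portfolio losses.
import numpy as np

rng = np.random.default_rng(0)
annual_losses = rng.lognormal(mean=2.0, sigma=1.5, size=10_000)  # loss per simulated year

aal = annual_losses.mean()  # expected loss per year, a basis for premium setting

def exceedance_prob(threshold: float) -> float:
    """Fraction of simulated years whose loss exceeds the threshold."""
    return float(np.mean(annual_losses > threshold))

for thr in (10.0, 50.0, 100.0):
    print(f"P(loss > {thr:>5.0f}) = {exceedance_prob(thr):.3f}")
```

The EP curve is monotonically decreasing in the threshold, which is why insurers read solvency levels off its tail.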
Bend Faulting at the Edge of a Flat Slab: The 2017 Mw7.1 Puebla-Morelos, Mexico Earthquake
NASA Astrophysics Data System (ADS)
Melgar, Diego; Pérez-Campos, Xyoli; Ramirez-Guzman, Leonardo; Spica, Zack; Espíndola, Victor Hugo; Hammond, William C.; Cabral-Cano, Enrique
2018-03-01
We present results of a slip model from a joint inversion of strong motion and static Global Positioning System data for the Mw 7.1 Puebla-Morelos earthquake. We find that the earthquake nucleated at the bottom of the oceanic crust or within the oceanic mantle, with most of the moment release occurring within the oceanic mantle. Given its location at the edge of the flat slab, the earthquake is likely the result of bending stresses at the transition from flat-slab subduction to steeply dipping subduction. The event strikes obliquely to the slab; we find good agreement between the seafloor fabric offshore of the source region and the strike of the earthquake. We argue that the event likely reactivated a fault first created during seafloor formation. We hypothesize that large bending-related events at the edge of the flat slab are more likely in areas of low misalignment between the seafloor fabric and the slab strike, where reactivation of preexisting structures is favored. This hypothesis predicts a decreased likelihood of bending-related events northwest of the 2017 source region but suggests that they should be more likely southeast of it.
NASA Astrophysics Data System (ADS)
Ataeva, G.; Gitterman, Y.; Shapira, A.
2017-01-01
This study analyzes and compares the P- and S-wave displacement spectra from local earthquakes and explosions of similar magnitudes. We propose a new approach to discrimination between low-magnitude shallow earthquakes and explosions by using the ratio of P- to S-wave corner frequencies as a criterion. We explored 2430 digital records of the Israeli Seismic Network (ISN) from 456 local events (226 earthquakes, 230 quarry blasts, and a few underwater explosions) of magnitudes Md = 1.4-3.4, which occurred at distances up to 250 km during 2001-2013. P-wave and S-wave displacement spectra were computed for all events following Brune's source model of earthquakes (1970, 1971) and applying distance correction coefficients (Shapira and Hofstetter, Tectonophysics 217:217-226, 1993; Ataeva G, Shapira A, Hofstetter A, J Seismol 19:389-401, 2015). The corner frequencies and moment magnitudes were determined using multiple stations for each event, and the comparative analysis was then performed.
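The core of the discriminant — fitting a Brune omega-square spectrum Ω(f) = Ω₀ / (1 + (f/fc)²) to P- and S-wave displacement spectra and forming the corner-frequency ratio — can be sketched with a simple grid search. The spectra below are synthetic; this is an illustration of the idea, not the authors' processing chain.

```python
# Fit a Brune spectrum by grid search over fc and take the P/S fc ratio.
import numpy as np

def fit_corner_frequency(freqs, spectrum):
    """Grid-search fc, minimizing log-domain residuals to the Brune model."""
    best = (np.inf, None)
    for fc in np.linspace(0.5, 30.0, 300):
        shape = 1.0 / (1.0 + (freqs / fc) ** 2)
        # Closed-form log-domain least-squares estimate of the spectral level:
        omega0 = np.exp(np.mean(np.log(spectrum) - np.log(shape)))
        misfit = np.sum((np.log(spectrum) - np.log(omega0 * shape)) ** 2)
        if misfit < best[0]:
            best = (misfit, fc)
    return best[1]

freqs = np.linspace(0.5, 40.0, 200)
p_spec = 1.0 / (1.0 + (freqs / 8.0) ** 2)   # synthetic P spectrum, fc = 8 Hz
s_spec = 1.0 / (1.0 + (freqs / 5.0) ** 2)   # synthetic S spectrum, fc = 5 Hz

ratio = fit_corner_frequency(freqs, p_spec) / fit_corner_frequency(freqs, s_spec)
print(f"fc(P)/fc(S) = {ratio:.2f}")
```

In practice the discrimination power comes from this ratio behaving differently for earthquakes and quarry blasts across many events and stations.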
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2016-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable calculation of ground-motion seismograms using multiple alternative ground motion simulation methods, together with software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics, and a simplified command-line user interface.
A Composite Source Model With Fractal Subevent Size Distribution
NASA Astrophysics Data System (ADS)
Burjanek, J.; Zahradnik, J.
A composite source model, incorporating subevents of different sizes, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source using a kinematic approach (radial rupture propagation, constant rupture velocity, and a boxcar slip-velocity function with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform spatial distributions of final slip and rupture velocity. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
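The subevent-generation rule above — N(>R) ∝ R^-2, non-overlapping subevents whose areas sum to the target rupture area — can be sketched by inverse-CDF sampling of a truncated power-law size distribution. All numerical values are illustrative, and overlap checking is omitted; this is not the full composite-source code.

```python
# Draw subevent radii with N(>R) ~ R^-2 until their areas fill the target.
import numpy as np

rng = np.random.default_rng(42)
target_area = 100.0       # km^2, area of the target event (hypothetical)
r_min, r_max = 0.3, 4.0   # km, truncation of the size distribution

radii = []
area = 0.0
while area < target_area:
    # Inverse-CDF sample of a truncated Pareto with complementary CDF ~ R^-2:
    u = rng.random()
    r = 1.0 / np.sqrt(1.0 / r_min**2 - u * (1.0 / r_min**2 - 1.0 / r_max**2))
    radii.append(r)
    area += np.pi * r**2

print(f"{len(radii)} subevents, total area {area:.1f} km^2")
```

A production implementation would additionally reject samples that overlap previously placed subevents and trim the last subevent so the areas sum exactly.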
Neo-deterministic seismic hazard assessment in North Africa
NASA Astrophysics Data System (ADS)
Mourabit, T.; Abou Elenean, K. M.; Ayadi, A.; Benouar, D.; Ben Suleman, A.; Bezzeghoud, M.; Cheddadi, A.; Chourak, M.; ElGabry, M. N.; Harbi, A.; Hfaiedh, M.; Hussein, H. M.; Kacem, J.; Ksentini, A.; Jabour, N.; Magrin, A.; Maouche, S.; Meghraoui, M.; Ousadou, F.; Panza, G. F.; Peresan, A.; Romdhane, N.; Vaccari, F.; Zuccolo, E.
2014-04-01
North Africa is one of the most earthquake-prone areas of the Mediterranean. Many devastating earthquakes, some of them tsunami-triggering, have inflicted heavy loss of life and considerable economic damage on the region. In order to mitigate the destructive impact of earthquakes, the regional seismic hazard in North Africa is assessed using the neo-deterministic, multi-scenario methodology (NDSHA), based on the computation of synthetic seismograms with the modal summation technique at a regular 0.2° × 0.2° grid. This is the first study aimed at producing NDSHA maps of North Africa covering five countries: Morocco, Algeria, Tunisia, Libya, and Egypt. The key input data for the NDSHA algorithm are earthquake sources, seismotectonic zonation, and structural models. In the preparation of the input data, it has been essential to go beyond national borders and to adopt a coherent strategy over the whole area. Thanks to the collaborative efforts of the teams involved, it has been possible to properly merge the earthquake catalogues available for each country and to define, with homogeneous criteria, the seismogenic zones, the characteristic focal mechanism associated with each of them, and the structural models used to model wave propagation from the sources to the sites. As a result, reliable seismic hazard maps are produced in terms of maximum displacement (Dmax), maximum velocity (Vmax), and design ground acceleration.
Deterministic seismic hazard macrozonation of India
NASA Astrophysics Data System (ADS)
Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.
2012-10-01
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments, and shear zones that are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these cells by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation without the logic-tree approach has also been done for comparison of the results. Contour maps showing the spatial variation of the hazard values are presented in the paper.
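The deterministic kernel of such an analysis — at each grid cell, take the maximum predicted ground motion over all sources within a cutoff radius — can be sketched as follows. The attenuation relation here is a toy placeholder (not one of the 12 used in the study), and the source list is hypothetical.

```python
# Deterministic hazard at a grid point: max PHA over nearby sources.
import math

def pha(mw: float, r_km: float) -> float:
    """Toy attenuation relation: ln PHA = a + b*Mw - c*ln(R + 10), in g."""
    return math.exp(-4.0 + 0.9 * mw - 1.3 * math.log(r_km + 10.0))

sources = [(28.0, 84.0, 8.0), (30.5, 79.1, 7.0)]  # (lat, lon, Mw), hypothetical

def hazard_at(lat: float, lon: float, cutoff_km: float = 300.0) -> float:
    best = 0.0
    for slat, slon, mw in sources:
        # Small-angle distance approximation, ~111 km per degree:
        r = 111.0 * math.hypot(lat - slat, (lon - slon) * math.cos(math.radians(lat)))
        if r <= cutoff_km:
            best = max(best, pha(mw, r))
    return best

print(f"PHA at (30.0N, 79.0E) = {hazard_at(30.0, 79.0):.3f} g")
```

Repeating this over every 0.1° × 0.1° cell and contouring the results yields hazard maps of the kind described.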
On The Computation Of The Best-fit Okada-type Tsunami Source
NASA Astrophysics Data System (ADS)
Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.
2017-12-01
The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half-space). This approach is highly effective, in particular in far-field conditions. With this assumption, and given a set of tsunami waveforms recorded by deep-sea pressure sensors and/or coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a space of possible tsunami sources (the solution space). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth, and the strike, rake, and dip angles. To constrain the number of possible solutions, we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling over the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of Green's functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from the tsunami data. This publication received funding from the FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
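The precomputed-Green's-function shortcut rests on linearity: each candidate source excites the unit water sources with different weights, so its waveform is a weighted sum of precomputed unit waveforms, and the best source minimizes misfit against the records. A minimal synthetic sketch of that search (all arrays are fabricated, not DART data):

```python
# Grid search over candidate sources using precomputed unit-source waveforms.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_samples = 6, 300
unit_waveforms = rng.standard_normal((n_units, n_samples))  # precomputed database

# "Solution space": each candidate source maps to a set of unit-source weights.
candidates = {f"src{k}": rng.random(n_units) for k in range(50)}

true_weights = candidates["src17"]
observed = true_weights @ unit_waveforms  # stands in for the recorded marigrams

def misfit(weights):
    return float(np.sum((weights @ unit_waveforms - observed) ** 2))

best = min(candidates, key=lambda name: misfit(candidates[name]))
print(best)  # -> src17
```

Because the expensive hydrodynamic modeling is done once per unit source, evaluating thousands of candidates reduces to cheap linear algebra.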
A Test Case for the Source Inversion Validation: The 2014 ML 5.5 Orkney, South Africa Earthquake
NASA Astrophysics Data System (ADS)
Ellsworth, W. L.; Ogasawara, H.; Boettcher, M. S.
2017-12-01
The ML 5.5 earthquake of 5 August 2014 occurred on a near-vertical strike-slip fault below abandoned and active gold mines near Orkney, South Africa. A dense network of surface and in-mine seismometers recorded the earthquake and its aftershock sequence. In-situ stress measurements and rock samples through the damage zone and rupture surface are anticipated to be available from the "Drilling into Seismogenic Zones of M2.0-M5.5 Earthquakes in South African gold mines" project (DSeis), which is currently progressing toward the rupture zone (Science, doi: 10.1126/science.aan6905). As of 24 July, 95% of the drilled core had been recovered from a 427 m section of the first hole from 2.9 km depth, with minimal core discing and borehole breakouts. A second hole is planned to intersect the fault at greater depth. Absolute differential stress will be measured along the holes, and the frictional characteristics of the recovered core will be determined in the lab. Surface seismic reflection data and exploration drilling from the surface down to the mining horizon at 3 km depth are also available to calibrate the velocity structure above the mining horizon and to image reflective geological boundaries and major faults below it. The remarkable quality and range of geophysical data available for the Orkney earthquake make this event an ideal test case for the Source Inversion Validation community, using actual seismic data to determine the spatial and temporal evolution of earthquake rupture. We invite anyone with an interest in kinematic modeling to develop a rupture model for the Orkney earthquake. Seismic recordings of the earthquake and information on the faulting geometry can be found in Moyer et al. (2017, doi: 10.1785/0220160218). A workshop supported by the Southern California Earthquake Center will be held in the spring of 2018 to compare kinematic models. 
Those interested in participating in the modeling exercise and the workshop should contact the authors for additional information.
Harmsen, Stephen C.; Hartzell, Stephen
2008-01-01
Models of the Santa Clara Valley (SCV) 3D velocity structure and 3D finite-difference software are used to predict ground motions from scenario earthquakes on the San Andreas (SAF), Monte Vista/Shannon, South Hayward, and Calaveras faults. Twenty different scenario ruptures are considered that explore different source models with alternative hypocenters, fault dimensions, and rupture velocities and three different velocity models. Ground motion from the full wave field up to 1 Hz is exhibited as maps of peak horizontal velocity and pseudospectral acceleration at periods of 1, 3, and 5 sec. Basin edge effects and amplification in sedimentary basins of the SCV are observed that exhibit effects from shallow sediments with relatively low shear-wave velocity (330 m/sec). Scenario earthquakes have been simulated for events with the following magnitudes: (1) M 6.8–7.4 Calaveras sources, (2) M 6.7–6.9 South Hayward sources, (3) M 6.7 Monte Vista/Shannon sources, and (4) M 7.1–7.2 Peninsula segment of the SAF sources. Ground motions are strongly influenced by source parameters such as rupture velocity, rise time, maximum depth of rupture, hypocenter, and source directivity. Cenozoic basins also exert a strong influence on ground motion. For example, the Evergreen Basin on the northeastern side of the SCV is especially responsive to 3–5-sec energy from most scenario earthquakes. The Cupertino Basin on the southwestern edge of the SCV tends to be highly excited by many Peninsula and Monte Vista fault scenarios. Sites over the interior of the Evergreen Basin can have long-duration coda that reflect the trapping of seismic energy within this basin. Plausible scenarios produce predominantly 5-sec wave trains with greater than 30 cm/sec sustained ground-motion amplitude with greater than 30 sec duration within the Evergreen Basin.
NASA Astrophysics Data System (ADS)
Harada, Tomoya; Satake, Kenji; Furumura, Takashi
2017-04-01
We carried out tsunami numerical simulations in the western Pacific Ocean and East China Sea in order to examine the behavior of massive tsunamis outside Japan from the hypothetical M 9 tsunami source models along the Nankai Trough proposed by the Cabinet Office of the Japanese government (2012). The distributions of MTHs (maximum tsunami heights for 24 h after the earthquakes) on the east coast of China, the east coast of the Philippine Islands, and the north coast of New Guinea Island show peaks of approximately 1.0-1.7 m, 4.0-7.0 m, and 4.0-5.0 m, respectively. They are significantly higher than those from the 1707 Hoei earthquake (M 8.7), the largest earthquake along the Nankai trough in recent Japanese history. Moreover, the MTH distributions vary with the location of the huge slip(s) in the tsunami source models, although the three coasts are far from the Nankai trough. Huge slip(s) in the Nankai segment mainly contribute to the MTHs, while huge slip(s) or splay faulting in the Tokai segment hardly affect the MTHs. The tsunami source model was developed in response to the unexpected occurrence of the 2011 Tohoku earthquake, with 11 models along the Nankai trough, and the simulated MTHs along the Pacific coasts of western Japan from these models exceed 10 m, with a maximum height of 34.4 m. Tsunami propagation was computed by the finite-difference method for the nonlinear long-wave equations with the Coriolis force and bottom friction (Satake, 1995) in the area of 115-155°E and 8°S-40°N. Because the water depth of the East China Sea is shallower than 200 m, the tsunami propagation is likely to be affected by ocean bottom friction. The 30 arc-second gridded bathymetry data provided by the General Bathymetric Chart of the Oceans (GEBCO-2014) are used. To capture the long propagation, we simulated tsunamis for 24 hours after the earthquakes. 
This study was supported by the "New disaster mitigation research project on Mega thrust earthquakes around Nankai/Ryukyu subduction zones", a project of Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT).
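The finite-difference long-wave scheme described above can be illustrated in one dimension. The sketch below is a minimal linear shallow-water leapfrog on a staggered grid with reflective walls; the actual model is two-dimensional, nonlinear, and includes the Coriolis force and bottom friction, all omitted here, and every numerical value is hypothetical.

```python
# 1-D linear long-wave (shallow-water) finite-difference propagation sketch.
import numpy as np

g = 9.81
nx, dx, dt = 400, 2000.0, 5.0          # cells, grid spacing (m), time step (s)
depth = np.full(nx, 4000.0)            # uniform 4 km ocean (hypothetical)
eta = np.exp(-((np.arange(nx) - 200.0) * dx / 50e3) ** 2)  # initial sea-surface hump (m)
u = np.zeros(nx + 1)                   # velocities on cell edges (staggered grid)

# CFL check: wave speed sqrt(g*h) ~ 198 m/s, so dt*c/dx ~ 0.5 < 1 (stable).
for _ in range(500):
    # momentum:   du/dt  = -g * d(eta)/dx
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: d(eta)/dt = -d(h*u)/dx   (h uniform here)
    eta -= dt * depth * (u[1:] - u[:-1]) / dx

print(f"max elevation after 500 steps: {eta.max():.3f} m")
```

The initial hump splits into two outgoing waves of roughly half amplitude, the expected behavior for the linearized equations.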
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude, and focal mechanism, within 2 min after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS operates online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.
NASA Astrophysics Data System (ADS)
Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati
2018-05-01
The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), seismic moment (M0 = 1.12×10^17 Nm, Mw 5.3), focal mechanism (φ = 280°, δ = 14°, λ = 84°), source radius (a = 1.3 km), and static stress drop (Δσs ≈ 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^-2 source model) by attenuation parameters Q(f) = 500f^0.9, κ = 0.04 s, and fmax = infinity, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ Mw ≤ 6.9). The estimated stress drops of the six events range from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.
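The spectral model quoted above — an ω^-2 (Brune) source spectrum shaped by path attenuation Q(f) = 500f^0.9 and a site kappa filter — can be assembled as follows. The shear-wave velocity and distance are illustrative assumptions; only M0, the 70 MPa spectral stress drop, Q(f), and κ come from the abstract.

```python
# Far-field S-wave spectral shape: Brune source x path attenuation x kappa filter.
import numpy as np

def s_wave_spectrum(freqs, m0=1.12e17, stress_drop=70e6, r_km=50.0,
                    beta=3500.0, kappa=0.04):
    # Brune corner frequency, fc = 0.4906 * beta * (dsigma / M0)^(1/3), in Hz:
    fc = 0.4906 * beta * (stress_drop / m0) ** (1.0 / 3.0)
    source = m0 * freqs**2 / (1.0 + (freqs / fc) ** 2)  # omega-square shape
    q = 500.0 * freqs**0.9                              # Q(f) from the abstract
    path = np.exp(-np.pi * freqs * (r_km * 1e3) / (q * beta)) / (r_km * 1e3)
    site = np.exp(-np.pi * kappa * freqs)               # kappa high-cut filter
    return source * path * site

freqs = np.logspace(-1, 1.3, 100)   # 0.1 to ~20 Hz
spec = s_wave_spectrum(freqs)
```

A stochastic ground-motion simulation would combine this spectral shape with windowed Gaussian noise, which is the essence of the stochastic method mentioned in the abstract.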
NASA Astrophysics Data System (ADS)
Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia
2017-04-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear, sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). For most of the events we find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fitted, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics and, in the other, support the idea of an important role of steeply dipping faults in fluid pressure diffusion.
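The second inversion step — fitting a generalized ω^γ source model S(f) = Ω₀ / (1 + (f/fc)^γ) with γ free, using a stochastic global optimizer in the spirit of the genetic algorithms mentioned above — can be sketched with scipy's differential evolution. The data below are synthetic, and this is not the authors' sensitivity-driven scheme.

```python
# Fit Omega0, fc, gamma of an omega-gamma model by differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

freqs = np.logspace(-0.5, 1.5, 80)
true = 1e3 / (1.0 + (freqs / 3.0) ** 2.5)            # gamma = 2.5 > 2, as observed
rng = np.random.default_rng(7)
data = true * rng.lognormal(sigma=0.05, size=freqs.size)  # mildly noisy spectrum

def misfit(p):
    omega0, fc, gamma = p
    model = omega0 / (1.0 + (freqs / fc) ** gamma)
    return np.sum((np.log(data) - np.log(model)) ** 2)  # log-domain residuals

result = differential_evolution(misfit,
                                bounds=[(10, 1e5), (0.5, 20), (1.0, 4.0)],
                                seed=0, tol=1e-8)
omega0, fc, gamma = result.x
print(f"fc = {fc:.2f} Hz, gamma = {gamma:.2f}")
```

Recovering γ significantly above 2 from real corrected spectra is what signals the departure from the standard omega-square model.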
NASA Astrophysics Data System (ADS)
Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.
2010-12-01
HAZUS-MH MR4 (HAZards U.S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment, and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma, and 2001 Mw 6.8 Nisqually normal-fault intraslab events and scenario large-magnitude Seattle reverse-fault crustal events are modeled. The inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships, developed from source parameters of both regional and global historical earthquakes, to estimate strong ground motion. Ground motion and the resulting ground failure are then used to calculate direct physical damage for the general building stock, essential facilities, and lifelines, including transportation and utility systems. Earthquake losses are expressed in structural, economic, and social terms. 
Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region coordinators can most effectively utilize their resources for earthquake risk mitigation. This study is being conducted in collaboration with King County, WA officials to determine the best model inputs necessary to generate robust HAZUS-MH models for the Pacific Northwest.
Earthquake source parameters determined by the SAFOD Pilot Hole seismic array
Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.
2004-01-01
We estimate the source parameters of microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) together with Q. Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
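The depth smoothing via a first-difference operator described above can be sketched as a regularized least-squares problem. The toy data and the identity design matrix below are assumptions for illustration only, not the paper's actual spectral inversion.

```python
import numpy as np

def smooth_lsq(G, d, n, lam):
    """Least squares with a first-difference smoothness penalty:
    minimize ||G m - d||^2 + lam^2 * ||D m||^2, where D is the
    (n-1) x n first-difference operator acting along depth."""
    D = np.diff(np.eye(n), axis=0)            # first-difference operator
    A = np.vstack([G, lam * D])
    b = np.concatenate([d, np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Toy example: recover a slowly varying profile (e.g. Q vs depth)
rng = np.random.default_rng(0)
n = 20
true = np.linspace(100.0, 200.0, n)           # smooth trend with depth
G = np.eye(n)                                 # direct (noisy) observations
d = true + rng.normal(0.0, 10.0, n)
m = smooth_lsq(G, d, n, lam=2.0)
```

The penalty suppresses sensor-to-sensor scatter while preserving the slow trend, which is the role the smoothness constraint plays in the array analysis.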
Time dependent deformation and stress in the lithosphere. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Yang, M.
1980-01-01
Efficient computer programs incorporating a frontal solution and a time-stepping procedure were developed for modelling geodynamic problems. This scheme allows investigation of quasi-static phenomena, including the effects of the rheological structure of a tectonically active region. From three-dimensional models of strike-slip earthquakes, it was found that lateral variation of viscosity affects the characteristics of surface deformation. The vertical deformation is especially informative about the viscosity structure in a strike-slip fault zone. A three-dimensional viscoelastic model of a thrust earthquake indicated that the transient disturbance in plate velocity due to a great plate-boundary earthquake is significant at intermediate distances, but becomes barely measurable 1000 km away from the source.
Generalized interferometry - I: theory for interstation correlations
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; Stehly, Laurent; Ermert, Laura; Boehm, Christian
2017-02-01
We develop a general theory for interferometry by correlation that (i) properly accounts for heterogeneously distributed sources of continuous or transient nature, (ii) fully incorporates any type of linear and nonlinear processing, such as one-bit normalization, spectral whitening and phase-weighted stacking, (iii) operates for any type of medium, including 3-D elastic, heterogeneous and attenuating media, (iv) enables the exploitation of complete correlation waveforms, including seemingly unphysical arrivals, and (v) unifies the earthquake-based two-station method and ambient noise correlations. Our central theme is not to equate interferometry with Green function retrieval, and to extract information directly from processed interstation correlations, regardless of their relation to the Green function. We demonstrate that processing transforms the actual wavefield sources and actual wave propagation physics into effective sources and effective wave propagation. This transformation is uniquely determined by the processing applied to the observed data, and can be easily computed. The effective forward model, that links effective sources and propagation to synthetic interstation correlations, may not be perfect. A forward modelling error, induced by processing, describes the extent to which processed correlations can actually be interpreted as proper correlations, that is, as resulting from some effective source and some effective wave propagation. The magnitude of the forward modelling error is controlled by the processing scheme and the temporal variability of the sources. Applying adjoint techniques to the effective forward model, we derive finite-frequency Fréchet kernels for the sources of the wavefield and Earth structure, that should be inverted jointly. The structure kernels depend on the sources of the wavefield and the processing scheme applied to the raw data. 
Therefore, both must be taken into account correctly in order to make accurate inferences on Earth structure. Not making any restrictive assumptions on the nature of the wavefield sources, our theory can be applied to earthquake and ambient noise data, either separately or combined. This allows us (i) to locate earthquakes using interstation correlations and without knowledge of the origin time, (ii) to unify the earthquake-based two-station method and noise correlations without the need to exclude either of the two data types, and (iii) to eliminate the requirement to remove earthquake signals from noise recordings prior to the computation of correlation functions. In addition to the basic theory for acoustic wavefields, we present numerical examples for 2-D media, an extension to the most general viscoelastic case, and a method for the design of optimal processing schemes that eliminate the forward modelling error completely. This work is intended to provide a comprehensive theoretical foundation of full-waveform interferometry by correlation, and to suggest improvements to current passive monitoring methods.
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
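The analytical Gaussian posterior that makes such real-time Bayesian slip inversion tractable can be sketched as follows. The tiny design matrix and covariance values are hypothetical, and this is the standard linear-Gaussian result rather than the specific algorithm of the paper.

```python
import numpy as np

def gaussian_posterior(G, d, Cd_inv, Cm_inv):
    """Analytical posterior for the linear Gaussian model d = G m + noise
    with a zero-mean Gaussian prior on m:
    C_post = (G^T Cd^-1 G + Cm^-1)^-1,  m_post = C_post G^T Cd^-1 d."""
    Cpost = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    return Cpost @ (G.T @ Cd_inv @ d), Cpost

# Toy example: two 'slip' parameters seen through a known design matrix
G = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.7]])
m_true = np.array([2.0, -1.0])
d = G @ m_true                        # noise-free synthetic data
Cd_inv = np.eye(3) / 0.01 ** 2        # tight data errors
Cm_inv = np.eye(2) / 10.0 ** 2        # broad prior
m_post, C_post = gaussian_posterior(G, d, Cd_inv, Cm_inv)
```

Because the posterior mean and covariance are closed-form matrix products, both the slip estimate and its uncertainty can be evaluated fast enough for real-time use, which is the point the abstract makes.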
Hovsgol earthquake 5 December 2014, M W = 4.9: seismic and acoustic effects
NASA Astrophysics Data System (ADS)
Dobrynina, Anna A.; Sankov, Vladimir A.; Tcydypova, Larisa R.; German, Victor I.; Chechelnitsky, Vladimir V.; Ulzibat, Munkhuu
2018-03-01
A moderate shallow earthquake occurred on 5 December 2014 ( M W = 4.9) in the north of Lake Hovsgol (northern Mongolia). An infrasonic signal with a duration of 140 s was recorded for this earthquake by the "Tory" infrasound array (Institute of Solar-Terrestrial Physics of the Siberian Branch of the Russian Academy of Sciences, Russia). Source parameters of the earthquake (seismic moment, geometrical size, displacement amplitudes in the focus) were determined using spectral analysis of direct body P and S waves. The spectral analysis of seismograms and the amplitude variations of the surface waves allow determination of the effect of rupture propagation in the earthquake focus, the azimuth of the rupture propagation direction, and the velocity of displacement in the earthquake focus. The results of modelling the surface displacements caused by the Hovsgol earthquake, together with the high effective propagation velocity of the infrasound signal (625 m/s), indicate that the signal was not caused by the downward movement of the Earth's surface in the epicentral region but by the effect of a secondary source. The position of the secondary source of the infrasound signal is located on the northern slopes of the Khamar-Daban ridge, according to the data on the azimuth and arrival time of the acoustic wave at the Tory station. The interaction of surface waves with the regional topography is proposed as the most probable mechanism of formation of the infrasound signal.
Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake
Hill, D.P.; Reasenberg, P.A.; Michael, A.; Arabaz, W.J.; Beroza, G.; Brumbaugh, D.; Brune, J.N.; Castro, R.; Davis, S.; Depolo, D.; Ellsworth, W.L.; Gomberg, J.; Harmsen, S.; House, L.; Jackson, S.M.; Johnston, M.J.S.; Jones, L.; Keller, Rebecca Hylton; Malone, S.; Munguia, L.; Nava, S.; Pechmann, J.C.; Sanford, A.; Simpson, R.W.; Smith, R.B.; Stark, M.; Stickney, M.; Vidal, A.; Walter, S.; Wong, V.; Zollweg, J.
1993-01-01
The magnitude 7.3 Landers earthquake of 28 June 1992 triggered a remarkably sudden and widespread increase in earthquake activity across much of the western United States. The triggered earthquakes, which occurred at distances up to 1250 kilometers (17 source dimensions) from the Landers mainshock, were confined to areas of persistent seismicity and strike-slip to normal faulting. Many of the triggered areas also are sites of geothermal and recent volcanic activity. Static stress changes calculated for elastic models of the earthquake appear to be too small to have caused the triggering. The most promising explanations involve nonlinear interactions between large dynamic strains accompanying seismic waves from the mainshock and crustal fluids (perhaps including crustal magma).
Keranen, K M; Weingarten, M; Abers, G A; Bekins, B A; Ge, S
2014-07-25
Unconventional oil and gas production provides a rapidly growing energy source; however, high-production states in the United States, such as Oklahoma, face sharply rising numbers of earthquakes. Subsurface pressure data required to unequivocally link earthquakes to wastewater injection are rarely accessible. Here we use seismicity and hydrogeological models to show that fluid migration from high-rate disposal wells in Oklahoma is potentially responsible for the largest swarm. Earthquake hypocenters occur within disposal formations and upper basement, between 2- and 5-kilometer depth. The modeled fluid pressure perturbation propagates throughout the same depth range and tracks earthquakes to distances of 35 kilometers, with a triggering threshold of ~0.07 megapascals. Although thousands of disposal wells operate aseismically, four of the highest-rate wells are capable of inducing 20% of 2008 to 2013 central U.S. seismicity. Copyright © 2014, American Association for the Advancement of Science.
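A minimal sketch of pore-pressure diffusion from a continuous point injector, of the kind used to reason about triggering fronts and thresholds like the ~0.07 MPa quoted above. The solution form, diffusivity, and pressure scale `p0` are illustrative assumptions, not the authors' calibrated hydrogeological model.

```python
import numpy as np
from scipy.special import erfc

def pressure_perturbation(r_km, t_yr, D=1.0, p0=1.0):
    """Pore-pressure perturbation from a continuous point injection in a
    uniform medium: p ~ (p0 / r) * erfc(r / sqrt(4 D t)).  D is hydraulic
    diffusivity in m^2/s; injection-rate and medium constants are folded
    into the arbitrary scale p0 (illustrative units)."""
    r = r_km * 1e3                       # km -> m
    t = t_yr * 365.25 * 86400.0          # yr -> s
    return (p0 / r) * erfc(r / np.sqrt(4.0 * D * t))

# For D ~ 1 m^2/s the pressure front reaches tens of km within a few years
p_near = pressure_perturbation(10.0, 5.0)
p_far = pressure_perturbation(35.0, 5.0)
```

The perturbation decays with distance but keeps growing with injection time, which is why seismicity can track outward to the 35 km distances reported in the abstract.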
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
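The "filtered white noise multiplied by an envelope function" process mentioned above can be sketched as below. The oscillator filter, envelope constants, and parameter values are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def simulated_acceleration(duration=20.0, dt=0.01, f0=2.5, zeta=0.6, seed=0):
    """Filtered white noise multiplied by an envelope function: Gaussian
    white noise drives a damped oscillator (a crude band-pass around f0),
    and the output is tapered by a Saragoni-Hart-type envelope t**2*exp(-c*t)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    noise = rng.normal(0.0, 1.0, n)
    w0 = 2.0 * np.pi * f0
    x = np.zeros(n)
    v = np.zeros(n)
    for i in range(1, n):                 # semi-implicit Euler integration
        a = noise[i] - 2.0 * zeta * w0 * v[i - 1] - w0 ** 2 * x[i - 1]
        v[i] = v[i - 1] + a * dt
        x[i] = x[i - 1] + v[i] * dt
    envelope = t ** 2 * np.exp(-0.5 * t)
    return t, envelope * x

t, acc = simulated_acceleration()
```

Because the envelope imposes a finite effective duration, response spectra computed from such records degrade for oscillator periods longer than that duration, which is the shortcoming the abstract attributes to this process.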
Hirata, K.; Takahashi, H.; Geist, E.; Satake, K.; Tanioka, Y.; Sugioka, H.; Mikada, H.
2003-01-01
Micro-tsunami waves with a maximum amplitude of 4-6 mm were detected with the ocean-bottom pressure gauges on a cabled deep-seafloor observatory south of Hokkaido, Japan, following the January 28, 2000 earthquake (Mw 6.8) in the southern Kuril subduction zone. We model the observed micro-tsunami and estimate the focal depth and other source parameters, such as fault length and amount of slip, using a grid search with the least-squares method. The source depth and stress drop for the January 2000 earthquake are estimated to be 50 km and 7 MPa, respectively, with possible ranges of 45-55 km and 4-13 MPa. The focal depth of typical inter-plate earthquakes in this region ranges from 10 to 20 km, and the stress drop of inter-plate earthquakes is generally around 3 MPa. The source depth and stress drop estimates therefore suggest that the earthquake was an intra-slab event in the subducting Pacific plate, rather than an inter-plate event. In addition, for a prescribed fault width of 30 km, the fault length is estimated to be 15 km, with a possible range of 10-20 km, which is consistent with the previously determined aftershock distribution. The corresponding estimate for seismic moment is 2.7×10^19 Nm, with a possible range of 2.3×10^19 to 3.2×10^19 Nm. Standard tide gauges along the nearby coast did not record any tsunami signal. High-precision ocean-bottom pressure measurements offshore thus make it possible to determine fault parameters of moderate-sized earthquakes in subduction zones using open-ocean tsunami waveforms. Published by Elsevier Science B.V.
NASA Astrophysics Data System (ADS)
Zheng, Y.
2016-12-01
On November 22, 2014, the Ms 6.3 Kangding earthquake ended a 30-year period without strong earthquakes on the Xianshuihe fault zone. The focal mechanism and centroid depth of the Kangding earthquake are inverted from teleseismic waveforms and regional seismograms with the CAP method. The result shows that the two nodal planes of the focal mechanism are 235°/82°/-173° and 144°/83°/-8°, respectively; the latter nodal plane should be the ruptured fault plane, with a focal depth of 9 km. The rupture process model of the Kangding earthquake is obtained by joint inversion of teleseismic data and regional seismograms. The Kangding earthquake is a bilateral earthquake, and the major rupture zone is within a depth range of 5-15 km, spanning 10 km and 12 km along the dip and strike directions, with a maximum slip of about 0.5 m. Most of the seismic moment was released during the first 5 s, and the magnitude is Mw 6.01, smaller than that of the model determined from InSAR data. The discrepancy between the co-seismic rupture models of the Kangding earthquake and its Ms 5.8 aftershock and the InSAR model implies that significant afterslip deformation occurred in the two weeks after the mainshock. The energy released by the afterslip is equivalent to an Mw 5.9 earthquake, and the afterslip is mainly concentrated on the northwest side of, and shallower than, the rupture zone. The Coulomb failure stress (CFS) accumulation near the epicenter of the 2014 Kangding earthquake was increased by the 2008 Wenchuan earthquake, implying that the Kangding earthquake could have been triggered by the Wenchuan earthquake. The CFS at the northwest section of the seismic gap along the Kangding-Daofu segment was increased by the Kangding earthquake, and the rupture slip of the Kangding earthquake sequence is too small to release the strain accumulated in the seismic gap. Consequently, the northwest section of the Kangding-Daofu seismic gap is under high seismic hazard in the future.
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Harada, Tomoya; Satake, Kenji; Ishibe, Takeo; Takagawa, Tomohiro
2017-12-01
The Wharton Basin, off southwest Sumatra, ruptured in a large intraplate left-lateral strike-slip Mw 7.8 earthquake on 2016 March 2. The epicentre was located ∼800 km to the south of another similar-mechanism intraplate Mw 8.6 earthquake in the same basin on 2012 April 11. Small tsunamis from these strike-slip earthquakes were registered with maximum amplitudes of 0.5-1.5 cm on DARTs and 1-19 cm on tide gauges for the 2016 event, and respective values of 0.5-6 and 6-40 cm for the 2012 event. By using both teleseismic body waves and tsunami observations of the 2016 event, we obtained optimum slip models with rupture velocity (Vr) in the range of 2.8-3.6 km/s for both EW and NS faults. While the EW fault plane cannot be fully ruled out, we chose as the best model the NS fault plane with a Vr of 3.6 km/s, a maximum slip of 7.7 m and a source duration of 33 s. The tsunami energy period bands were 4-15 and 7-24 min for the 2016 and 2012 tsunamis, respectively, reflecting the difference in source sizes. Seismicity in the Wharton Basin is dominated by large strike-slip events, including the 2012 (Mw 8.6 and 8.2) and 2016 (Mw 7.8) events, indicating that such events are possible tsunami sources in the Wharton Basin. Cumulative-number and cumulative seismic-moment curves reveal that most earthquakes in this basin have strike-slip mechanisms and that strike-slip earthquakes provide the largest share of the seismic moment.
Comparison of actual and seismologically inferred stress drops in dynamic models of microseismicity
NASA Astrophysics Data System (ADS)
Lin, Y. Y.; Lapusta, N.
2017-12-01
Estimating source parameters for small earthquakes is commonly based on either Brune or Madariaga source models. These models assume circular rupture that starts from the center of a fault and spreads axisymmetrically with a constant rupture speed. The resulting stress drops are moment-independent, with large scatter. However, more complex source behaviors are commonly discovered by finite-fault inversions for both large and small earthquakes, including directivity, heterogeneous slip, and non-circular shapes. Recent studies (Noda, Lapusta, and Kanamori, GJI, 2013; Kaneko and Shearer, GJI, 2014; JGR, 2015) have shown that slip heterogeneity and directivity can result in large discrepancies between the actual and estimated stress drops. We explore the relation between the actual and seismologically estimated stress drops for several types of numerically produced microearthquakes. For example, an asperity-type circular fault patch with increasing normal stress towards the middle of the patch, surrounded by a creeping region, is a potentially common microseismicity source. In such models, a number of events rupture the portion of the patch near its circumference, producing ring-like ruptures, before a patch-spanning event occurs. We calculate the far-field synthetic waveforms for our simulated sources and estimate their spectral properties. The distribution of corner frequencies over the focal sphere is markedly different for the ring-like sources compared to the Madariaga model. Furthermore, most waveforms for the ring-like sources are better fitted by a high-frequency fall-off rate different from the commonly assumed value of 2 (from the so-called omega-squared model), with the average value over the focal sphere being 1.5. The application of Brune- or Madariaga-type analysis to these sources yields stress drop estimates that differ from the actual stress drops by a factor of up to 125 in the models we considered.
We will report on our current studies of other types of seismic sources, such as repeating earthquakes and foreshock-like events, and whether the potentially realistic and common sources different from the standard Brune and Madariaga models can be identified from their focal spectral signatures and studied using a more tailored seismological analysis.
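For reference, the standard Brune/Madariaga-type stress drop estimate from a measured corner frequency, whose assumptions the abstract above questions, can be sketched as follows; the numerical values in the example are hypothetical.

```python
def brune_stress_drop(M0, fc, beta=3500.0, k=0.372):
    """Stress drop from a circular-crack source model: radius r = k*beta/fc
    and delta_sigma = 7*M0 / (16*r**3).  k = 0.372 corresponds to the
    Brune (1970) S-wave model; Madariaga's (1976) dynamic model gives
    k ~ 0.21 instead, so the choice of model changes the estimate."""
    r = k * beta / fc
    return 7.0 * M0 / (16.0 * r ** 3)

# M0 = 1e13 N m (about Mw 2.6) with a 10 Hz corner frequency
dsigma = brune_stress_drop(1e13, 10.0)    # in Pa, here ~2 MPa
```

Since the estimate scales as fc cubed, the corner-frequency biases and non-standard fall-off rates described in the abstract translate into large errors in the inferred stress drop.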
Uncertainties in Earthquake Loss Analysis: A Case Study From Southern California
NASA Astrophysics Data System (ADS)
Mahdyiar, M.; Guin, J.
2005-12-01
Probabilistic earthquake hazard and loss analyses play important roles in many areas of risk management, including earthquake-related public policy and insurance ratemaking. Rigorous loss estimation for portfolios of properties is difficult since there are various types of uncertainties in all aspects of modeling and analysis. The objective of this study is to investigate the sensitivity of earthquake loss estimates for typical property portfolios in Southern California to uncertainties in regional seismicity, earthquake source parameters, ground motions, and the spatial correlation of sites. Southern California is an attractive region for such a study because it has a large population concentration exposed to significant levels of seismic hazard. During the last decade, there have been several comprehensive studies of most regional faults and seismogenic sources. There have also been detailed studies on regional ground motion attenuation and regional and local site responses to ground motions. This information has been used by engineering seismologists to conduct regional seismic hazard and risk analysis on a routine basis. However, one of the more difficult tasks in such studies is the proper incorporation of uncertainties in the analysis. On the hazard side, there are uncertainties in the magnitudes, rates, and mechanisms of the seismic sources, local site conditions, and ground motion site amplifications. On the vulnerability side, there are considerable uncertainties in estimating the state of damage of buildings under different earthquake ground motions. On the analytical side, there are challenges in capturing the spatial correlation of ground motions and building damage, and in integrating thousands of loss distribution curves with different degrees of correlation.
In this paper we address some of these issues by conducting loss analyses of a typical small portfolio in Southern California, taking into consideration various source and ground motion uncertainties. The approach is designed to integrate loss distribution functions with different degrees of correlation for portfolio analysis. The analysis is based on the USGS 2002 regional seismicity model.
Geological and historical evidence of irregular recurrent earthquakes in Japan.
Satake, Kenji
2015-10-28
Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres releasing strains accumulated from decades to centuries of plate motions. Assuming a simple 'characteristic earthquake' model that similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes including geological traces from giant (M∼9) earthquakes indicate a variety of size and recurrence interval of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that average recurrence interval of great earthquakes is approximately 100 years, but the tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence interval of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts such as use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone are being conducted by government committees. © 2015 The Author(s).
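The time-dependent occurrence probabilities discussed above follow from a renewal model; a minimal sketch using a lognormal recurrence-interval distribution is given below. The mean interval and aperiodicity are placeholder values, and official forecasts use more elaborate distributions (e.g. Brownian passage time).

```python
from math import erf, log, sqrt

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal distribution of recurrence intervals."""
    return 0.5 * (1.0 + erf((log(t) - mu) / (sigma * sqrt(2.0))))

def conditional_probability(elapsed, horizon, mean_interval, cov=0.4):
    """P(next event within `horizon` yr | `elapsed` yr since the last one)
    for a lognormal renewal model with the given mean recurrence interval
    and coefficient of variation (aperiodicity)."""
    sigma = sqrt(log(1.0 + cov ** 2))
    mu = log(mean_interval) - 0.5 * sigma ** 2
    F_t = lognormal_cdf(elapsed, mu, sigma)
    F_h = lognormal_cdf(elapsed + horizon, mu, sigma)
    return (F_h - F_t) / (1.0 - F_t)

# 30-yr probabilities for a nominal 100-yr mean recurrence interval
p_early = conditional_probability(10.0, 30.0, 100.0)
p_late = conditional_probability(90.0, 30.0, 100.0)
```

The calculation makes explicit why the 'characteristic earthquake' assumption matters: the variable interval sizes and multi-scale recurrence described in the abstract undermine the single mean interval and aperiodicity this model requires.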
Understanding Earthquake Fault Systems Using QuakeSim Analysis and Data Assimilation Tools
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Glasscoe, Margaret; Granat, Robert; Rundle, John; McLeod, Dennis; Al-Ghanmi, Rami; Grant, Lisa
2008-01-01
We are using the QuakeSim environment to model interacting fault systems. One goal of QuakeSim is to prepare for the large volumes of data that spaceborne missions such as DESDynI will produce. QuakeSim has the ability to ingest distributed heterogeneous data in the form of InSAR, GPS, seismicity, and fault data into various earthquake modeling applications, automating the analysis when possible. Virtual California simulates interacting faults in California. We can compare output from long time-history Virtual California runs with the current state of strain and the strain history in California. In addition to spaceborne data we will begin assimilating data from UAVSAR airborne flights over the San Francisco Bay Area, the Transverse Ranges, and the Salton Trough. Results of the models are important for understanding future earthquake risk and for providing decision support following earthquakes. Improved models require this sensor web of different data sources, and a modeling environment for understanding the combined data.
Gravity increase before the 2015 Mw 7.8 Nepal earthquake
NASA Astrophysics Data System (ADS)
Chen, Shi; Liu, Mian; Xing, Lelin; Xu, Weimin; Wang, Wuxing; Zhu, Yiqing; Li, Hui
2016-01-01
The 25 April 2015 Nepal earthquake (Mw 7.8) ruptured a segment of the Himalayan front fault zone. Four absolute gravimetric stations in southern Tibet, surveyed from 2010/2011 to 2013 and corrected for secular variations, recorded up to 22.40 ± 1.11 μGal/yr of gravity increase during this period. The gravity increase is distinct from the long-wavelength secular trends of gravity decrease over the Tibetan Plateau and may be related to interseismic mass change around the locked plate interface under the Himalayan-Tibetan Plateau. We modeled the source region as a disk of 580 km in diameter, which is consistent with the notion that much of the southern Tibetan crust is involved in storing strain energy that drives the Himalayan earthquakes. If validated in other regions, high-precision ground measurements of absolute gravity may provide a useful method for monitoring mass changes in the source regions of potential large earthquakes.
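The disk source model mentioned above can be illustrated with the on-axis attraction of a thin buried disk. Only the 580 km diameter comes from the abstract; the mass and depth in the example are hypothetical.

```python
from math import pi, sqrt

G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2

def disk_gravity_ugal(mass_kg, radius_m, depth_m):
    """On-axis attraction of a thin buried disk of total mass `mass_kg`:
    dg = 2*pi*G*sigma * (1 - d / sqrt(d**2 + R**2)), sigma = M / (pi R**2).
    Returned in microgal (1 uGal = 1e-8 m/s^2)."""
    sigma = mass_kg / (pi * radius_m ** 2)
    dg = 2.0 * pi * G * sigma * (1.0 - depth_m / sqrt(depth_m ** 2 + radius_m ** 2))
    return dg * 1e8

# Disk of 580 km diameter (R = 290 km); mass and depths are hypothetical
dg_shallow = disk_gravity_ugal(1e15, 290e3, 10e3)
dg_deep = disk_gravity_ugal(1e15, 290e3, 50e3)
```

Forward models of this kind let the observed μGal-level rates constrain the mass redistribution around the locked plate interface.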
Near real-time aftershock hazard maps for earthquakes
NASA Astrophysics Data System (ADS)
McCloskey, J.; Nalbant, S. S.
2009-04-01
Stress interaction modelling is routinely used to explain the spatial relationships between earthquakes and their aftershocks. On 28 October 2008 an M6.4 earthquake occurred near the Pakistan-Afghanistan border, killing several hundred people and causing widespread devastation. A second M6.4 event occurred 12 hours later, 20 km to the southeast. By making some well-supported assumptions concerning the source event and the geometry of any likely triggered event, it was possible to map those areas most likely to experience further activity. Using Google Earth, it would further have been possible to identify particular settlements in the source area which were particularly at risk, and to publish their locations globally within about 3 hours of the first earthquake. Such actions could have significantly focused the initial emergency response management. We argue for routine prospective testing of such forecasts and for dialogue between social and physical scientists and emergency response professionals around the practical application of these techniques.
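Stress-interaction calculations of the kind described above are typically based on the Coulomb failure stress change; a minimal sketch, with an assumed effective friction coefficient and hypothetical stress values:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu_eff * d_sigma_n, where d_tau is the shear-stress
    change resolved in the slip direction and d_sigma_n the normal-stress
    change (positive = unclamping).  Positive dCFS moves the fault
    toward failure; mu_eff is an assumed effective friction coefficient."""
    return d_tau + mu_eff * d_sigma_n

# Hypothetical values: 0.05 MPa shear increase plus 0.02 MPa unclamping
dcfs = coulomb_stress_change(0.05, 0.02)      # MPa
```

Mapping dCFS from the source event onto assumed receiver-fault geometries is what produces the "areas most likely to experience further activity" referred to in the abstract.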
NASA Astrophysics Data System (ADS)
Huang, H.; Lin, C.
2010-12-01
The Tai-Tung earthquake (ML 6.2) occurred in the southeastern part of Taiwan on April 1, 2006. We examine the source model of this event using seismograms recorded by the CWBSN at five stations surrounding the source area. An objective estimation method was used to obtain the parameters N and C, which are needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method", gives estimates of the seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that obeys the source model (Miyake et al., 1999); it has the advantage of removing site effects when evaluating the parameters. The best source model of the 2006 Tai-Tung mainshock was estimated by comparing the observed waveforms with synthetics computed with the empirical Green's function method. The asperity is about 3.5 km long along the strike direction and 7.0 km wide along the dip direction. The rupture started at the left-bottom of the asperity and extended radially toward the upper right.
NASA Astrophysics Data System (ADS)
Huang, H.-C.; Lin, C.-Y.
2012-04-01
The Tapu earthquake (ML 5.7) occurred in the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using seismograms recorded by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C, which are needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method", gives estimates of the seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that obeys the source model (Miyake et al., 1999); it has the advantage of removing site effects when evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with synthetic ones computed with the empirical Green's function method. The asperity is about 2.1 km long along the strike direction and 1.5 km wide along the dip direction. The rupture started at the right-bottom of the asperity and extended radially toward the upper left.
NASA Astrophysics Data System (ADS)
Kiuchi, R.; Mori, J. J.
2015-12-01
As a way to understand the characteristics of the earthquake source, studies of source parameters (such as radiated energy and stress drop) and their scaling are important. In order to estimate source parameters reliably, we must often use appropriate source spectrum models, and the omega-square model is the most frequently used: the spectrum is flat at low frequencies and the fall-off is proportional to the angular frequency squared. However, some studies (e.g. Allmann and Shearer, 2009; Yagi et al., 2012) reported high-frequency fall-off exponents other than -2. Therefore, in this study we estimate the source parameters using a spectral model in which the fall-off exponent is not fixed. We analyze the mainshock and larger aftershocks of the 2008 Iwate-Miyagi Nairiku earthquake. First, we calculate the P-wave and SH-wave spectra using empirical Green's functions (EGF) to remove path effects (such as attenuation) and site effects. For the EGF event, we select a smaller earthquake that is highly correlated with the target event. In order to obtain stable results, we calculate the spectral ratios using a multitaper spectrum analysis (Prieto et al., 2009) and then take a geometric mean over multiple stations. Finally, using the obtained spectral ratios, we perform a grid search to determine the high-frequency fall-offs as well as the corner frequencies of both events. Our results indicate that the high-frequency fall-off exponent is often less than 2.0. We do not observe any regional, focal-mechanism, or depth dependence of the fall-off exponent. In addition, our estimated corner frequencies and fall-off exponents are consistent between the P-wave and SH-wave analyses. In our presentation, we show differences in the source parameters estimated using a fixed omega-square model and a model allowing variable high-frequency fall-off.
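The grid search over corner frequencies and a variable fall-off exponent described above can be sketched as below. The synthetic ratio, grid ranges, and fixed moment ratio are illustrative assumptions, not the authors' processing.

```python
import numpy as np

def spectral_ratio(f, fc_large, fc_small, nexp, moment_ratio):
    """Ratio of two generalized source spectra sharing a fall-off exponent:
    R(f) = M0_ratio * (1 + (f/fc_small)**n) / (1 + (f/fc_large)**n)."""
    return moment_ratio * (1.0 + (f / fc_small) ** nexp) \
                        / (1.0 + (f / fc_large) ** nexp)

f = np.logspace(-1, 2, 120)
obs = spectral_ratio(f, 1.0, 8.0, 1.6, 300.0)     # synthetic 'observed' ratio

best, best_misfit = None, np.inf
for fcl in np.arange(0.5, 2.01, 0.1):             # corner freq., target event
    for fcs in np.arange(4.0, 12.01, 0.5):        # corner freq., EGF event
        for nexp in np.arange(1.0, 3.01, 0.2):    # fall-off exponent
            pred = spectral_ratio(f, fcl, fcs, nexp, 300.0)
            misfit = np.sum((np.log(pred) - np.log(obs)) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (fcl, fcs, nexp), misfit
fc_large_est, fc_small_est, n_est = best
```

Taking the ratio of the two events' spectra cancels common path and site terms, which is why the search can be run directly on EGF spectral ratios.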
Solomon Islands 2007 Tsunami Near-Field Modeling and Source Earthquake Deformation
NASA Astrophysics Data System (ADS)
Uslu, B.; Wei, Y.; Fritz, H.; Titov, V.; Chamberlin, C.
2008-12-01
The earthquake of 1 April 2007 left behind dramatic evidence of crustal rupture and tsunami impact along the coastlines of the Solomon Islands (Fritz and Kalligeris, 2008; Taylor et al., 2008; McAdoo et al., 2008; PARI, 2008), while undisturbed tsunami signals were recorded at nearby deep-ocean tsunameters and coastal tide stations. These multi-dimensional measurements provide valuable datasets for tackling the challenging aspects of the tsunami source directly by inversion from tsunameter records in real time (available within minutes), and its relationship with the seismic source derived either from seismometer records (available within hours or days) or from crustal rupture measurements (available within months or years). The tsunami measurements in the near field, including the complex vertical crustal motion and tsunami runup, are particularly critical for interpreting the tsunami source. This study develops high-resolution inundation models for the Solomon Islands to compute the near-field tsunami impact. Using these models, we compare the tsunameter-derived tsunami source with seismically derived earthquake sources from multiple perspectives, including vertical uplift and subsidence, tsunami runup heights and their distribution among the islands, deep-ocean tsunameter measurements, and near- and far-field tide gauge records. The present study stresses the significance of tsunami magnitude, source location, bathymetry and topography in accurately modeling the generation, propagation and inundation of tsunami waves, and highlights the accuracy and efficiency of the tsunameter-derived tsunami source in modeling the near-field tsunami impact.
As the high-resolution models developed in this study will become part of NOAA's tsunami forecast system, these results also suggest expanding the system for potential applications in tsunami hazard assessment, search and rescue operations, and event and post-event planning in the Solomon Islands.
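Tsunameter-based source inversion of this kind is commonly posed as a non-negative least-squares combination of precomputed unit-source waveforms. A schematic sketch follows; the unit-source arrays here are toy data, not actual propagation results:

```python
import numpy as np
from scipy.optimize import nnls

def invert_tsunami_source(unit_source_waveforms, observed):
    """Non-negative least-squares weights for precomputed unit sources."""
    G = np.column_stack(unit_source_waveforms)   # columns: unit-source records
    coeffs, residual = nnls(G, observed)
    return coeffs, residual

# toy check: the observation is an exact combination of two unit sources
u1 = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
u2 = np.array([0.0, 0.0, 1.0, 2.0, 1.0])
coeffs, _ = invert_tsunami_source([u1, u2], 2.0 * u1 + 0.5 * u2)
```

Because the unit-source responses are precomputed, solving this small least-squares problem is what makes a source estimate available within minutes of the tsunameter recording.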
Iceberg capsize hydrodynamics and the source of glacial earthquakes
NASA Astrophysics Data System (ADS)
Kaluzienski, Lynn; Burton, Justin; Cathles, Mac
2014-03-01
Accelerated warming in the past few decades has led to an increase in dramatic, singular mass-loss events from the Greenland and Antarctic ice sheets, such as the catastrophic collapse of ice shelves on the western Antarctic Peninsula and the calving and subsequent capsize of cubic-kilometer-scale icebergs in Greenland's outlet glaciers. The latter has been identified as the source of long-period seismic events classified as glacial earthquakes, which occur most frequently in Greenland's summer months. The ability to partially monitor polar mass loss through the Global Seismographic Network is quite attractive, yet this goal requires an accurate source-mechanism model for glacial earthquakes. In addition, the detailed relationship between iceberg mass, geometry, and the measured seismic signal is complicated by the inherent difficulties of collecting field data in remote, ice-choked fjords. To address this, we use a laboratory-scale model to measure aspects of the post-fracture calving process not observable in nature. Our results show that the combination of mechanical contact forces and hydrodynamic pressure forces generated by the capsize of an iceberg adjacent to a glacier's terminus produces a dipolar strain reminiscent of a single-couple seismic source.
A New Network-Based Approach for the Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.
2017-12-01
Here we propose a new method that issues an early warning based upon real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The system includes techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high-quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanding P-wave time window. The evolutionary estimates of these parameters at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower-shaking-intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM) and by interpolating the measured and predicted P-wave amplitudes on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and for the spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested the system through retrospective analysis of three earthquakes: the 2016 Mw 6.5 central Italy, 2008 Mw 6.9 Iwate-Miyagi, and 2011 Mw 9.0 Tohoku events. The source parameter characterizations are stable and reliable, and the intensity maps show extended-source effects consistent with kinematic fracture models of the events.
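The Pd-PGV correlation at the heart of the mapping step can be sketched as a log-linear regression; a minimal illustration in which the coefficient values and data arrays are hypothetical:

```python
import numpy as np

def fit_pd_pgv(pd, pgv):
    """Least-squares fit of log10(PGV) = a*log10(Pd) + b over past events."""
    A = np.column_stack([np.log10(pd), np.ones(len(pd))])
    (a, b), *_ = np.linalg.lstsq(A, np.log10(pgv), rcond=None)
    return a, b

def predict_pgv(pd, a, b):
    """Map a measured P-wave displacement amplitude to an expected PGV."""
    return 10.0 ** (a * np.log10(pd) + b)
```

In an operational system the regression coefficients are calibrated offline on regional strong-motion datasets, and `predict_pgv` is evaluated in real time as the P-wave window expands.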
NASA Astrophysics Data System (ADS)
Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.
2018-03-01
To study the subsurface structure of the Merapi Lawu anomaly (MLA) using forward modelling or full-waveform inversion, accurate earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). In addition, the source must be near the MLA, and the stations used for the parameter estimation must lie outside the MLA to avoid the anomaly. The seismograms were first processed with the software SEISAN v10 using a few stations from the MERAMEX project. After finding a hypocentre that matched the criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the source parameters are as follows: the event occurred on August 21, 2004, at 23:22:47 Indonesia western standard time (IWST), with epicentre at 7.80°S, 101.34°E, hypocentral depth 47.3 km, dominant frequency f0 = 3.0 Hz, and earthquake magnitude Mw = 3.4.
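A window-based SNR check of the kind used to select seismograms can be sketched as follows; the window lengths and pick index are arbitrary illustrations, not the study's values:

```python
import numpy as np

def snr_db(trace, i_pick, noise_len, signal_len):
    """SNR in dB from a pre-pick noise window and a post-pick signal window."""
    noise = trace[i_pick - noise_len:i_pick]
    signal = trace[i_pick:i_pick + signal_len]
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
```

Traces whose SNR falls below a chosen threshold (say, 10 dB) would simply be excluded from the source-parameter estimation.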
Tkalcic, Hrvoje; Dreger, Douglas S.; Foulger, Gillian R.; Julian, Bruce R.
2009-01-01
A volcanic earthquake with Mw 5.6 occurred beneath the Bárdarbunga caldera in Iceland on 29 September 1996. This earthquake is one of a decade-long sequence of events at Bárdarbunga with non-double-couple mechanisms in the Global Centroid Moment Tensor catalog. Fortunately, it was recorded well by the regional-scale Iceland Hotspot Project seismic experiment. We investigated the event with a complete moment tensor inversion method using regional long-period seismic waveforms and a composite structural model. The moment tensor inversion using data from stations of the Iceland Hotspot Project yields a non-double-couple solution with a 67% vertically oriented compensated linear vector dipole component, a 32% double-couple component, and a statistically insignificant (2%) volumetric (isotropic) contraction. This indicates the absence of a net volumetric component, which is puzzling in the case of a large volcanic earthquake that apparently is not explained by shear slip on a planar fault. A possible volcanic mechanism that can produce an earthquake without a volumetric component involves two offset sources with similar but opposite volume changes. We show that although such a model cannot be ruled out, the circumstances under which it could happen are rare.
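The CLVD/DC/isotropic percentages quoted above follow from the eigenvalues of the moment tensor. Here is a sketch using one common decomposition convention (e.g. Jost and Herrmann, 1989), which may differ in detail from the convention used by the authors:

```python
import numpy as np

def decompose_mt(M):
    """ISO/DC/CLVD percentages of a symmetric 3x3 moment tensor."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    d = np.sort(np.linalg.eigvalsh(dev))            # deviatoric eigenvalues
    dmax = max(abs(d[0]), abs(d[2]))
    eps = -d[1] / dmax if dmax > 0 else 0.0         # CLVD measure, |eps| <= 0.5
    p_iso = abs(iso) / (abs(iso) + dmax) if (abs(iso) + dmax) > 0 else 0.0
    return (100.0 * p_iso,
            100.0 * (1.0 - p_iso) * (1.0 - 2.0 * abs(eps)),   # double couple
            100.0 * (1.0 - p_iso) * 2.0 * abs(eps))           # CLVD
```

A pure double couple (eigenvalues 1, 0, -1) gives 100% DC, while a vertically oriented CLVD (eigenvalues 2, -1, -1) gives 100% CLVD with no volumetric component, as in the Bárdarbunga solution.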
NASA Astrophysics Data System (ADS)
Wang, Ruijia; Gu, Yu Jeffrey; Schultz, Ryan; Zhang, Miao; Kim, Ahyi
2017-08-01
On 2016 January 12, an intraplate earthquake with an initial reported local magnitude (ML) of 4.8 shook the town of Fox Creek, Alberta. While no damage was reported, this earthquake was widely felt by local residents and suspected to be induced by nearby hydraulic-fracturing (HF) operations. In this study, we determine the earthquake source parameters using moment tensor inversions, and then detect and locate the associated swarm using a waveform cross-correlation based method. Broad-band seismic recordings from regional arrays suggest a moment magnitude (M) of 4.1 for this event, the largest in Alberta in the past decade. Similar to other recent M ∼ 3 earthquakes near Fox Creek, the 2016 January 12 earthquake exhibits a dominant strike-slip (strike = 184°) mechanism with limited non-double-couple components (∼22 per cent). The resolved focal mechanism, which is also supported by forward modelling and P-wave first-motion analysis, indicates an NE-SW oriented compressional axis consistent with the maximum compressive horizontal stress orientations delineated from borehole breakouts. Further detection analysis of industry-contributed recordings unveils 1108 smaller events within a 3 km radius of the epicentre of the main event, showing a close spatio-temporal relation to a nearby HF well. The majority of the detected events are located above the basement, at depths comparable to the injection depth (3.5 km) in the Duvernay Formation shale. The spatial distribution of this earthquake cluster further suggests that (1) the source of the sequence is an N-S-striking fault system and (2) these earthquakes were induced by an HF well close to, but different from, the well that triggered a previous (January 2015) earthquake swarm.
Reactivation of pre-existing, N-S-oriented faults analogous to the Pine Creek fault zone, reported by earlier studies of active-source seismic and aeromagnetic data, is likely responsible for the occurrence of the January 2016 earthquake swarm and other recent events in the Crooked Lake area.
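Cross-correlation detection of the kind used to build the 1108-event catalog rests on sliding a template event over continuous data and flagging high normalized correlation. A self-contained sketch, with an illustrative threshold and synthetic data:

```python
import numpy as np

def normalized_cc(data, template):
    """Sliding normalized cross-correlation (Pearson r) of a template."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n]
        s = w.std()
        out[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s)
    return out

def detect(data, template, threshold=0.8):
    """Indices where the correlation coefficient exceeds the threshold."""
    cc = normalized_cc(data, template)
    return np.where(cc >= threshold)[0], cc
```

Production matched-filter codes use FFT-based correlation and stack over stations and components, but the detection logic is the same.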
Dense Array Studies of Volcano-Tectonic and Long-Period Earthquakes Beneath Mount St. Helens
NASA Astrophysics Data System (ADS)
Glasgow, M. E.; Hansen, S. M.; Schmandt, B.; Thomas, A.
2017-12-01
A 904-sensor, single-component, 10-Hz geophone array deployed within 15 km of Mount St. Helens (MSH) in 2014 recorded continuously for two weeks. Automated reverse-time imaging (RTI) was used to generate a catalog of 212 earthquakes, among which two distinct types of upper-crustal (<8 km) earthquakes were classified. Volcano-tectonic (VT) and long-period (LP) earthquakes were identified through analysis of array spectrograms, envelope functions, and velocity waveforms. To remove analyst subjectivity, quantitative classification criteria were developed based on the ratio of power in high and low frequency bands and on coda duration. Prior to the 2014 experiment, upper-crustal LP earthquakes had only been reported at MSH during volcanic activity. Subarray beamforming was used to distinguish between LP earthquakes and surface-generated LP signals, such as rockfall. This method confirmed 16 LP signals with apparent horizontal velocities exceeding upper-crustal P-wave velocities, which requires a subsurface hypocenter. LP and VT locations overlap in a cluster slightly east of the summit crater from 0-5 km below sea level. LP displacement spectra are similar to simple theoretical predictions for shear failure, except that they have lower corner frequencies than VT earthquakes of similar magnitude. The results indicate a distinct non-resonant source for LP earthquakes, which are located in the same source volume as some VT earthquakes (within the hypocenter uncertainty of 1 km or less). To further investigate the mechanisms of MSH microseismicity, a 142-sensor, three-component (3-C), 5-Hz geophone array will record continuously for one month at MSH in fall 2017, providing a unique dataset for a volcano earthquake source study. This array will help determine whether the LP occurrence in 2014 was transient or is still ongoing. Unlike the 2014 array, approximately 50 geophones will be deployed in the MSH summit crater directly over the majority of the seismicity.
RTI will be used to detect and locate earthquakes by back-projecting 3-C data with a local 3-D P- and S-velocity model. Earthquakes will be classified using the previously stated techniques, and we will use the dense array of 3-C waveforms to invert for focal mechanisms and, ideally, moment tensor sources down to M 0.
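The quantitative VT/LP criterion based on high- versus low-band power can be sketched as below; the band edges and ratio threshold are placeholders, not the study's calibrated values:

```python
import numpy as np

def band_power(x, dt, fmin, fmax):
    """Summed FFT power in a frequency band."""
    f = np.fft.rfftfreq(len(x), dt)
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[(f >= fmin) & (f < fmax)].sum()

def classify(x, dt, split=5.0, ratio_threshold=1.0, fmax=20.0):
    """Label 'VT' if power above `split` Hz dominates, else 'LP'."""
    hi = band_power(x, dt, split, fmax)
    lo = band_power(x, dt, 0.5, split)
    return "VT" if hi / (lo + 1e-20) > ratio_threshold else "LP"
```

A full implementation would add the coda-duration criterion mentioned in the abstract and apply the classifier to array-averaged spectra rather than single traces.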
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI), conducted by the International Atomic Energy Agency (IAEA) on 18-20 November 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from the workshop as well as several new contributions. A total of 17 papers have been selected, on topics ranging from the seismological aspects of earthquake-cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's function approaches) to the engineering application of simulated ground motion in analyzing the seismic response of structures. The contributions include applications to real earthquakes, descriptions of current practice for assessing seismic hazard in terms of nuclear safety in low-seismicity areas, and proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers highlight the usefulness of physics-based models for evaluating and understanding the physical causes of observed and empirical data, as well as for predicting ground motion beyond the range of recorded data. Particular importance is given to validation and verification of the models by comparing synthetic results with observed data and empirical models.
Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions
NASA Astrophysics Data System (ADS)
Kim, A.; Dreger, D.; Larsen, S.
2008-12-01
We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and the detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects finite fault inverse solutions. Various studies (e.g. Michael and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. The fault zone at Parkfield is also wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and 2004 Parkfield earthquakes. For high-resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions for the 1989 Loma Prieta earthquake using both 1D and 3D Green's functions with the same source parameterization and data, and found that the resulting models were quite different. This indicates that the choice of velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions.
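The Brocher (2005) regression fits used for the Vp-to-Vs and Vp-to-density conversion can be written out directly (coefficients as published; Vp and Vs in km/s, density in g/cm^3):

```python
def brocher_vs(vp):
    """Vs from Vp; Brocher (2005) regression fit, valid for 1.5 < Vp < 8 km/s."""
    return 0.7858 - 1.2344 * vp + 0.7949 * vp**2 - 0.1238 * vp**3 + 0.0064 * vp**4

def brocher_density(vp):
    """Density from Vp; Nafe-Drake curve as fit by Brocher (2005)."""
    return (1.6612 * vp - 0.4721 * vp**2 + 0.0671 * vp**3
            - 0.0043 * vp**4 + 0.000106 * vp**5)
```

For a typical mid-crustal Vp of 6.0 km/s these polynomials give Vs of roughly 3.5 km/s and density of roughly 2.7 g/cm^3, consistent with standard crustal values.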
In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0.25 Hz, but that the model is too fast at stations located very close to the fault; in this near-fault zone the model also underpredicts the amplitudes. This implies the need to include an additional low-velocity zone within the fault zone to fit the data. For the finite fault modeling we use the same stations as in our previous study (Kim and Dreger, 2008) and compare the results to investigate the effect of 3D Green's functions on kinematic source inversions. References: Brocher, T. M. (2005), Empirical relations between elastic wavespeeds and density in the Earth's crust, Bull. Seism. Soc. Am., 95(6), 2081-2092. Eberhart-Phillips, D., and A. J. Michael (1993), Three-dimensional velocity structure and seismicity in the Parkfield region, central California, J. Geophys. Res., 98, 15,737-15,758. Kim, A., and D. S. Dreger (2008), Rupture process of the 2004 Parkfield earthquake from near-fault seismic waveform and geodetic records, J. Geophys. Res., 113, B07308. Thurber, C., H. Zhang, F. Waldhauser, J. Hardebeck, A. Michael, and D. Eberhart-Phillips (2006), Three-dimensional compressional wavespeed model, earthquake relocations, and focal mechanisms for the Parkfield, California, region, Bull. Seism. Soc. Am., 96, S38-S49. Larsen, S., and C. A. Schultz (1995), ELAS3D: 2D/3D elastic finite-difference wave propagation code, Technical Report UCRL-MA-121792, 19 pp. Liu, P., and R. J. Archuleta (2004), A new nonlinear finite fault inversion with three-dimensional Green's functions: Application to the 1989 Loma Prieta, California, earthquake, J. Geophys. Res., 109, B02318.
Earthquake nucleation on faults with rate-and state-dependent strength
Dieterich, J.H.
1992-01-01
Dieterich, J.H., 1992. Earthquake nucleation on faults with rate- and state-dependent strength. In: T. Mikumo, K. Aki, M. Ohnaka, L.J. Ruff and P.K.P. Spudich (Editors), Earthquake Source Physics and Earthquake Precursors. Tectonophysics, 211: 115-134. Faults with rate- and state-dependent constitutive properties reproduce a range of observed fault slip phenomena, including spontaneous nucleation of slip instabilities at stresses above some critical stress level and recovery of strength following slip instability. Calculations with a plane-strain fault model with spatially varying properties demonstrate that accelerating slip precedes instability and becomes localized to a fault patch. The dimensions of the fault patch follow scaling relations for the minimum critical length for unstable fault slip. The critical length is a function of normal stress, loading conditions and constitutive parameters, which include Dc, the characteristic slip distance. If slip starts on a patch that exceeds the critical size, the length of the rapidly accelerating zone tends to shrink toward the characteristic size as the time of instability approaches. Solutions have been obtained for a uniform, fixed-patch model that are in good agreement with results from the plane-strain model. Over a wide range of conditions above the steady-state stress, the logarithm of the time to instability decreases linearly as the initial stress increases. Because nucleation patch length and premonitory displacement are both proportional to Dc, the moment of premonitory slip scales as Dc^3. The scaling of Dc is currently an open question. Unless Dc for earthquake faults is significantly greater than that observed on laboratory faults, premonitory strain arising from the nucleation process for earthquakes may be too small to detect using current observation methods.
If the possibility that Dc in the nucleation zone controls the magnitude of the subsequent earthquake is excluded, then the source dimensions of the smallest earthquakes in a region provide an upper limit on the size of the nucleation patch. © 1992.
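The Dc scalings discussed above can be made concrete. A sketch with an order-one geometric factor xi and illustrative parameter values (not taken from the paper):

```python
G_SHEAR = 30e9  # Pa, a typical crustal shear modulus (illustrative)

def critical_patch_length(sigma_n, b_minus_a, d_c, xi=1.0):
    """L_c ~ xi * G * D_c / (sigma_n * (b - a)); xi is an order-one factor."""
    return xi * G_SHEAR * d_c / (sigma_n * b_minus_a)

def premonitory_moment(sigma_n, b_minus_a, d_c, xi=1.0):
    """M0 ~ G * L_c**2 * D_c, hence the Dc^3 scaling noted in the abstract."""
    L = critical_patch_length(sigma_n, b_minus_a, d_c, xi)
    return G_SHEAR * L ** 2 * d_c
```

With laboratory-like values (sigma_n = 100 MPa, b - a = 0.005, Dc = 10 microns) the critical patch length is sub-meter, which is why premonitory strain from nucleation is expected to be hard to detect unless Dc on natural faults is much larger.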
NASA Astrophysics Data System (ADS)
Latcharote, Panon; Suppasri, Anawat; Imamura, Fumihiko; Aytore, Betul; Yalciner, Ahmet Cevdet
2016-12-01
This study evaluates tsunami hazards in the Marmara Sea from possible worst-case tsunami scenarios involving submarine earthquakes and landslides. In terms of fault-generated tsunamis, seismic ruptures can propagate along the North Anatolian Fault (NAF), which has produced historical tsunamis in the Marmara Sea. Past studies, which consider fault-generated and landslide-generated tsunamis individually, indicate that future scenarios are expected to generate tsunamis and that submarine landslides could be triggered by seismic motion. Extending these studies, we apply numerical modeling to tsunami generation and propagation from combined earthquake and landslide sources. Tsunami hazards are evaluated for both individual and combined cases of submarine earthquakes and landslides through numerical tsunami simulations, with a 90 m grid for the bathymetry and topography of the entire Marmara Sea region, validated against historical observations of the 1509 and 1894 earthquakes. This study implements the TUNAMI code with a two-layer model for the simulations. The numerical results show that the maximum tsunami height could reach 4.0 m along the Istanbul shores for a full submarine rupture of the NAF with a fault slip of 5.0 m in the eastern and western basins of the Marmara Sea. The maximum tsunami heights for landslide-generated tsunamis from small, medium, and large initial landslide volumes (0.15, 0.6, and 1.5 km3, respectively) could reach 3.5, 6.0, and 8.0 m, respectively, along the Istanbul shores. Tsunamis from submarine landslides could thus be significantly higher than those from earthquakes, depending strongly on the landslide volume. Combined earthquake and landslide sources produce significantly higher amplitudes only for small landslide volumes, because the amplification then occurs within the same tsunami amplitude scale (3.0-4.0 m).
Waveforms from all the coasts around the Marmara Sea indicate that other residential areas may face a high risk of tsunami hazard from submarine landslides, which can generate higher tsunami amplitudes and shorter arrival times than those at Istanbul.
Observations and modeling of the elastogravity signals preceding direct seismic waves
NASA Astrophysics Data System (ADS)
Vallée, Martin; Ampuero, Jean Paul; Juhel, Kévin; Bernard, Pascal; Montagner, Jean-Paul; Barsuglia, Matteo
2017-12-01
After an earthquake, the earliest deformation signals are not expected to be carried by the fastest (P) elastic waves but by the speed-of-light changes of the gravitational field. However, these perturbations are weak and, so far, their detection has not been accurate enough to fully understand their origins and to use them for a highly valuable rapid estimate of the earthquake magnitude. We show that gravity perturbations are particularly well observed with broadband seismometers at distances between 1000 and 2000 kilometers from the source of the 2011, moment magnitude 9.1, Tohoku earthquake. We can accurately model them by a new formalism, taking into account both the gravity changes and the gravity-induced motion. These prompt elastogravity signals open the window for minute time-scale magnitude determination for great earthquakes.
Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)
NASA Astrophysics Data System (ADS)
Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.
2016-06-01
We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating subduction and background (crustal) earthquakes separately allows for optimal use of available information and avoids significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, in proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
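Step 4, ensemble modelling, amounts to weighting alternative hazard curves produced by the different model formulations. A minimal sketch, in which the weights and curves are toy values rather than the study's outputs:

```python
import numpy as np

def ensemble_hazard(curves, weights):
    """Weighted mean and min/max envelope of alternative hazard curves,
    summarizing epistemic uncertainty across model formulations."""
    curves = np.asarray(curves, dtype=float)   # shape: (n_models, n_levels)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return w @ curves, curves.min(axis=0), curves.max(axis=0)
```

In a real SPTHA workflow each row would be the exceedance-probability curve of one event-tree implementation, and weighted percentiles (not just the envelope) would typically be reported.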
Barkan, R.; ten Brink, Uri S.; Lin, J.
2009-01-01
The great Lisbon earthquake of November 1, 1755, with an estimated moment magnitude of 8.5-9.0, was the most destructive earthquake in European history. The associated tsunami run-up was reported to have reached 5-15 m along the Portuguese and Moroccan coasts, and the run-up was significant at the Azores and Madeira Island. Run-up reports from a trans-oceanic tsunami were documented in the Caribbean, Brazil and Newfoundland (Canada); none were documented along the U.S. East Coast. Many attempts have been made to characterize the 1755 Lisbon earthquake source using geophysical surveys and modeling of the near-field earthquake intensity and tsunami effects. Studying far-field effects, as presented in this paper, is advantageous in establishing constraints on source location and strike orientation, because trans-oceanic tsunamis are less influenced by near-source bathymetry and are unaffected by triggered submarine landslides at the source. Source location, fault orientation and bathymetry are the main elements governing transatlantic tsunami propagation to sites along the U.S. East Coast, much more than distance from the source and continental shelf width. Results of our far- and near-field tsunami simulations based on relative amplitude comparison limit the earthquake source area to a region located south of the Gorringe Bank in the center of the Horseshoe Plain. This contrasts with previously suggested sources such as the Marquês de Pombal Fault and the Gulf of Cádiz Fault, which are farther east of the Horseshoe Plain. The earthquake was likely a thrust event on a fault striking ~345° and dipping to the ENE, as opposed to the suggested source on the Gorringe Bank Fault, which trends NE-SW. Gorringe Bank, the Madeira-Tore Rise (MTR), and the Azores appear to have acted as topographic scatterers of tsunami energy, shielding most of the U.S. East Coast from the 1755 Lisbon tsunami. Additional simulations to assess tsunami hazard to the U.S.
East Coast from possible future earthquakes along the Azores-Iberia plate boundary indicate that sources west of the MTR and in the Gulf of Cádiz may affect the southeastern coast of the U.S. The Azores-Iberia plate boundary west of the MTR is characterized by strike-slip faults, not thrusts, but the Gulf of Cádiz may host thrust faults. Southern Florida appears to be at risk from sources located east of the MTR and south of the Gorringe Bank, but it is mostly shielded by the Bahamas. Higher-resolution near-shore bathymetry along the U.S. East Coast and the Caribbean, as well as a detailed study of potential tsunami sources in the central-western Horseshoe Plain, are necessary to verify our simulation results. © 2008 Elsevier B.V.
The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan
NASA Astrophysics Data System (ADS)
Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.
2011-12-01
Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118°E direction. The plate motion is so active that earthquakes are very frequent, and in the Taiwan area disaster-inducing earthquakes often result from active faults. For this reason, understanding the activity and hazard of active faults is an important subject. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows 31 active faults on the island of Taiwan, some of which are related to past earthquakes. Many researchers have investigated these active faults and continuously update new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and fieldwork results and integrate these data into an active-fault parameter table for time-dependent earthquake hazard assessment. We combine seismic profiles or earthquake relocations for each fault with its fault trace on land to establish a 3D fault geometry model in a GIS system. We collect research on fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate active-fault earthquake recurrence intervals. For the remaining parameters, we collect previous studies and historical references to complete our parameter table of active faults in Taiwan. WG08 performed a time-dependent earthquake hazard assessment of active faults in California, establishing fault models, deformation models, earthquake rate models, and probability models, and then computing the probabilities for faults in California.
Following these steps, we have preliminarily evaluated the probabilities of earthquake-related hazards on certain faults in Taiwan. By completing the active-fault parameter table for Taiwan, we can apply it to time-dependent earthquake hazard assessment. The results can also serve as a reference for engineering design, and they can be applied in seismic hazard maps to mitigate disasters.
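The time-dependent probability computation can be sketched with a lognormal renewal model together with the Wells and Coppersmith (1994) all-slip-type area regression; the choice of lognormal model and the parameter values below are illustrative, not the study's:

```python
import math

def lognormal_cdf(t, t_median, sigma):
    """CDF of a lognormal recurrence-time distribution."""
    return 0.5 * (1.0 + math.erf(math.log(t / t_median) / (sigma * math.sqrt(2.0))))

def conditional_probability(elapsed, window, t_median, sigma):
    """P(event within `window` years | quiet for `elapsed` years)."""
    f_now = lognormal_cdf(elapsed, t_median, sigma)
    f_later = lognormal_cdf(elapsed + window, t_median, sigma)
    return (f_later - f_now) / (1.0 - f_now)

def max_magnitude_from_area(area_km2):
    """Wells and Coppersmith (1994), all slip types: M = 4.07 + 0.98*log10(A)."""
    return 4.07 + 0.98 * math.log10(area_km2)
```

For a renewal model of this kind, the conditional probability over a fixed forecast window grows as the elapsed time approaches the median recurrence interval, which is the essence of the time-dependent (as opposed to Poissonian) assessment.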
Langbein, John O.
2015-01-01
The 24 August 2014 Mw 6.0 South Napa, California earthquake produced significant offsets on 12 borehole strainmeters in the San Francisco Bay area. These strainmeters are located between 24 and 80 km from the source, and the observed offsets ranged up to 400 parts per billion (ppb), exceeding their nominal precision by a factor of 100. However, the observed offsets of tidally calibrated strains differ by up to 130 ppb from predictions based on a moment tensor derived from seismic data. The large misfit can be attributed to poor instrument calibration and can be reduced by better modeling of the strain field from the earthquake. Borehole strainmeters require in-situ calibration, which historically has been accomplished by comparing their measurements of Earth tides with the strain tides predicted by a model. Although a borehole strainmeter accurately measures the deformation within the borehole, the long-wavelength strain signals from tides or other tectonic processes recorded in the borehole are modified by the presence of the borehole and the elastic properties of the grout and the instrument. Previous analyses of surface-mounted strainmeter data and their relationship to the predicted tides suggest that tidal models could be in error by 30%. The poor fit of the borehole strainmeter data from this earthquake can be improved by simultaneously varying the components of the model tides by up to 30% and making small adjustments to the point-source model of the earthquake, which reduces the RMS misfit from 130 ppb to 18 ppb. This suggests that relying on tidal models to calibrate borehole strainmeters significantly reduces their accuracy.
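The bounded recalibration idea, i.e. tidal scale factors allowed to vary by up to 30%, can be sketched as a bounded least-squares fit. This is a toy illustration of that one ingredient, not the authors' joint tide-and-source inversion:

```python
import numpy as np
from scipy.optimize import lsq_linear

def recalibrate(predicted, observed, max_adjust=0.3):
    """Per-instrument scale factors bounded to +/- max_adjust (here 30%),
    chosen by bounded least squares against the observed offsets."""
    A = np.diag(predicted)                      # one scale factor per channel
    res = lsq_linear(A, observed,
                     bounds=(1.0 - max_adjust, 1.0 + max_adjust))
    return res.x, observed - A @ res.x
```

Channels whose apparent miscalibration exceeds the 30% bound retain a residual, which in the full problem would instead be absorbed by adjusting the point-source model.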
The real evidence of effects from source to freefield as base for nonlinear seismology
NASA Astrophysics Data System (ADS)
Marmureanu, Gheorghe; Marmureanu, Alexandru; Cioflan, Carmen Ortanza; Manea, Elena Florinela
2014-05-01
The authors have recently developed the concept of "Nonlinear Seismology - The Seismology of the XXI Century". In his recent book, Prof. P. M. Shearer (University of California) notes that: (i) strong ground accelerations from large earthquakes can produce a nonlinear response in shallow soils; (ii) the shaking from large earthquakes cannot be predicted by simple scaling of records from small earthquakes; and (iii) this is an active area of research in strong-motion and engineering seismology. Aki likewise observed that nonlinear amplification at sediment sites appears to be more pervasive than seismologists used to think, and that any attempt at seismic zonation must take into account the local site conditions and this nonlinear amplification (Tectonophysics, 218, 93-111, 1993). The difficulty seismologists face in demonstrating nonlinear site effects is that the effect is overshadowed by the overall patterns of shock generation and propagation. In other words, seismological detection of nonlinear site effects requires simultaneously understanding and separating (if it is possible… and if it is necessary!) the effects of the earthquake source, the propagation path, and the local geological site conditions. To see the actual influence of nonlinearity of the whole system (seismic source - propagation path - local geological structure), the authors study free-field response spectra, which are the last element in this chain and the one taken into account in the seismic design of all structures. Soils at the free-field end of this system (source to free field) exhibit strongly nonlinear behaviour under cyclic loading; although different soils share many common mechanical properties, different models are required to describe their behavioural differences.
Sands typically show little rheological variation and can be acceptably modeled as linear elastic, whereas clays, whose properties frequently change significantly over time, can be modeled with a nonlinear viscoelastic model. The real evidence of site effects from source to free field was obtained by using spectral amplification factors for the last strong and deep Vrancea earthquakes (March 04, 1977, MW=7.5, h=94.5 km; August 30, 1986, MW=7.1, h=134.5 km; May 30, 1990, MW=6.9, h=90.9 km; May 31, 1990, MW=6.4, h=86.9 km). The amplification factors decrease with increasing magnitude of the strong Vrancea earthquakes, and these values are far from those given by Regulatory Guide 1.60 of the U.S. Atomic Energy Commission and by IAEA Vienna. The concept was used for the last stress test requested by IAEA Vienna for the Romanian Cernavoda Nuclear Power Plant. The spectral amplification factors were SAF = 4.07 (MW=7.1), 4.74 (MW=6.9), and 5.78 (MW=6.4), as a function of earthquake magnitude. The analysis indicates that the effect of nonlinearity can be very important: in terms of peak accelerations, the recorded value is 48.87% smaller than would be expected if the soil response to the MW=6.4 earthquake had remained in the elastic domain. At 25 other seismic stations the values range between 14.2% and 55.4%. The authors present new quantitative recorded data from the extra-Carpathian area, with its large alluvial deposits, sediments, and thick Quaternary layers.
W phase source inversion for moderate to large earthquakes (1990-2010)
Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.
2012-01-01
Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. 
within 10 hr) preceded by another large earthquake, which disturbs the waveforms of the target event. To deal with such difficult situations, we remove the perturbation caused by earlier disturbing events by subtracting the corresponding synthetics from the data. The CMT parameters for the disturbed event can then be retrieved using the residual seismograms. We also explore the feasibility of obtaining source parameters of smaller earthquakes, in the range 6.0 ≤ Mw < 6.5, suggesting that the approach can be applied to events with Mw = 6 or larger.
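The residual-seismogram idea above can be sketched with toy waveforms; the pulse shape, timings, and amplitudes below are illustrative stand-ins for long-period W phase traces, not real data.

```python
import numpy as np

t = np.linspace(0, 1000, 2001)                       # time axis, seconds

def wavelet(t, t0, period, amp):
    """Toy long-period pulse standing in for a W phase arrival."""
    return amp * np.exp(-((t - t0) / period) ** 2) * np.sin(2 * np.pi * (t - t0) / period)

earlier = wavelet(t, 200.0, 150.0, 1.0)              # disturbing earlier event
target = wavelet(t, 600.0, 150.0, 0.4)               # target event
recorded = earlier + target                          # superposed record

# Forward-model the earlier event and subtract its synthetic from the data;
# the residual seismogram is then inverted for the disturbed (target) event.
synthetic_earlier = wavelet(t, 200.0, 150.0, 1.0)
residual = recorded - synthetic_earlier

misfit = np.sqrt(np.mean((residual - target) ** 2))  # ~0 in this noise-free toy
```

In practice the synthetic of the earlier event carries its own modeling error, so the residual is only approximately the target event's waveform.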
NASA Astrophysics Data System (ADS)
Kumagai, Hiroyuki; Pulido, Nelson; Fukuyama, Eiichi; Aoi, Shin
2013-01-01
To investigate source processes of the 2011 Tohoku-Oki earthquake, we utilized a source location method using high-frequency (5-10 Hz) seismic amplitudes. In this method, we assumed far-field isotropic radiation of S waves and conducted a spatial grid search to find the best-fitting source locations along the subducted slab in each successive time window. Our application of the method to the Tohoku-Oki earthquake resulted in spurious source locations at shallow depths near the trench, caused by limited station coverage and noise effects. We then assumed various source node distributions along the plate, and found that the observed seismograms were most reasonably explained when assuming deep source nodes. This result suggests that the high-frequency seismic waves were radiated at greater depths during the earthquake, a feature consistent with results obtained from teleseismic back-projection and strong-motion source model studies. We identified three high-frequency subevents and compared them with the moment-rate function estimated from low-frequency seismograms. Our comparison indicated that no significant moment release occurred during the first high-frequency subevent, and that the largest moment-release pulse occurred almost simultaneously with the second high-frequency subevent. We speculate that the initial slow rupture propagated bilaterally from the hypocenter toward the land and the trench. The landward subshear rupture propagation consisted of three successive high-frequency subevents. The trenchward propagation ruptured the strong asperity and released the largest moment near the trench.
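A hedged sketch of the amplitude-based grid search described above, assuming isotropic S-wave radiation and pure geometric (1/r) spreading; the method in the paper also corrects for attenuation, and the station geometry and amplitudes here are invented.

```python
import numpy as np

# Toy 2-D geometry (km): four stations and a true source on the search grid.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [80.0, 90.0]])
true_src = np.array([40.0, 60.0])
A0 = 1000.0                                        # source amplitude factor
r_true = np.linalg.norm(stations - true_src, axis=1)
obs_amp = A0 / r_true                              # noise-free "observations"

best, best_misfit = None, np.inf
for x in np.arange(0, 101, 1.0):                   # grid search over nodes
    for y in np.arange(0, 101, 1.0):
        r = np.linalg.norm(stations - np.array([x, y]), axis=1)
        r = np.maximum(r, 1.0)                     # avoid dividing by zero at a station
        # Least-squares optimal source amplitude for this node
        a0 = np.sum(obs_amp / r) / np.sum(1.0 / r ** 2)
        misfit = np.sum((obs_amp - a0 / r) ** 2)
        if misfit < best_misfit:
            best, best_misfit = (x, y), misfit
```

In the noise-free toy the grid search recovers the true node exactly; with real data, noise and limited station coverage can pull the minimum toward spurious locations, which is the problem the abstract describes.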
Real-time determination of the worst tsunami scenario based on Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya
2016-04-01
In recent years, real-time tsunami inundation forecasting has been developed alongside advances in dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating the tsunami height, the extent of the inundation zone, and the damage. Tsunami source inversion using observed seismic, geodetic, and tsunami data is the most effective way to avoid underestimating the tsunami, but acquiring those observations takes time, and this limitation makes it difficult to complete real-time tsunami inundation forecasting early enough. Rather than waiting for precise tsunami observations, we aim, from a disaster-management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW), which is available immediately after an event is triggered. After an earthquake occurs, JMA's EEW estimates its magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter, and a scaling law, we generate multiple possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e., time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g., Tsushima et al., 2014). The scenario analysis consists of the following two steps. (1) Finding the range of the worst scenario by calculating 90 scenarios with various strikes and fault positions. From the maximum tsunami heights of these 90 scenarios, we determine a narrower strike range that causes high tsunami heights in the area of concern.
(2) Calculating 900 scenarios with different strike, dip, length, width, depth, and fault position, with strike limited to the range obtained from the 90-scenario calculation. From these 900 scenarios, we determine the worst tsunami scenarios from a disaster-management point of view, such as the one with the shortest travel time and the one with the highest water level. The method was applied to a hypothetical earthquake to verify that it can effectively find the worst tsunami source scenario in real time, for use as an input to real-time tsunami inundation forecasting.
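The superposition of pre-computed Green's functions can be sketched as follows; the unit-source waveforms, arrival times, and slip weights are toy values, not the operational database of the system described above.

```python
import numpy as np

t = np.linspace(0, 3600, 361)                        # time axis, seconds

def unit_green(t, arrival, amp=1.0):
    """Pre-computed tsunami height at an offshore point from one
    2-D Gaussian unit source (toy Gaussian pulse)."""
    return amp * np.exp(-((t - arrival) / 300.0) ** 2)

# Three unit sources with different travel times to the offshore point
greens = [unit_green(t, a) for a in (600.0, 900.0, 1500.0)]

# Each scenario is a set of slip weights over the unit sources
scenarios = {
    "near-field": np.array([1.5, 0.5, 0.0]),
    "far-field": np.array([0.0, 0.5, 1.8]),
}

worst, worst_height = None, -np.inf
for name, w in scenarios.items():
    eta = sum(wi * g for wi, g in zip(w, greens))    # superposed tsunami waveform
    h = eta.max()                                    # maximum water level
    if h > worst_height:
        worst, worst_height = name, h
```

The real system scans 90, then 900, scenarios in the same way, ranking them by criteria such as maximum water level and shortest travel time.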
NASA Astrophysics Data System (ADS)
Pedraza, P.; Poveda, E.; Blanco Chia, J. F.; Zahradnik, J.
2013-05-01
On September 30th, 2012, an earthquake of magnitude Mw 7.2 occurred at a depth of ~170 km in southeastern Colombia. This seismic event is associated with the Nazca plate drifting eastward relative to the South America plate. The distribution of seismicity obtained by the National Seismological Network of Colombia (RSNC) since 1993 shows a segmented subduction zone with varying dip angles. The earthquake occurred in a seismic gap at intermediate depth. The recent deployment of broadband seismic stations, as part of the Colombian Seismological Network operated by the Colombian Geological Survey, has provided high-quality data for studying the rupture process. We estimated the moment tensor, the centroid position, and the source time function. The parameters were obtained by inverting waveforms recorded by the RSNC at distances of 100 km to 800 km, modeled at 0.01-0.09 Hz using different 1D crustal models and the ISOLA code. The DC percentage of the earthquake is very high (~90%). The focal mechanism is mostly normal, so determining the fault plane is challenging. An attempt to determine the fault plane was made based on the mutual relative position of the centroid and hypocenter (H-C method). Studies in progress are devoted to searching for possible complexity of the fault rupture process (total duration of about 15 seconds), quantified by multiple-point-source models.
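A sketch of how a double-couple (DC) percentage can be computed from the eigenvalues of a deviatoric moment tensor; the tensor below is a toy example, not the published solution for this event.

```python
import numpy as np

# Toy symmetric moment tensor (arbitrary units)
M = np.array([[ 1.0,  0.2,  0.0],
              [ 0.2, -0.9,  0.1],
              [ 0.0,  0.1, -0.1]])

# Remove the isotropic part to get the deviatoric tensor
M_dev = M - np.trace(M) / 3.0 * np.eye(3)

lam = np.linalg.eigvalsh(M_dev)          # eigenvalues, ascending
lam = lam[np.argsort(np.abs(lam))]       # reorder by absolute size
# CLVD parameter: smallest-magnitude eigenvalue over largest-magnitude one
eps = lam[0] / abs(lam[2])
dc_percent = (1.0 - 2.0 * abs(eps)) * 100.0   # 100% = pure double couple
```

Because the deviatoric eigenvalues sum to zero, |eps| ≤ 0.5 and the DC percentage always falls between 0 and 100.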
NASA Astrophysics Data System (ADS)
Toni, Mostafa; Barth, Andreas; Ali, Sherif M.; Wenzel, Friedemann
2016-09-01
On 22 January 2013 an earthquake with local magnitude ML 4.1 occurred in the central part of the Gulf of Suez. Six months later, on 1 June 2013, another earthquake with local magnitude ML 5.1 took place at the same epicenter but at a different depth. These two perceptible events were recorded and localized by the Egyptian National Seismological Network (ENSN) and additional networks in the region. The purpose of this study is to determine the focal mechanisms and source parameters of both earthquakes to analyze their tectonic relation. We determine the focal mechanisms by applying moment tensor inversion and first-motion analysis of P- and S-waves. Both sources reveal oblique focal mechanisms with normal-faulting and strike-slip components on differently oriented faults. The source mechanism of the larger event on 1 June, in combination with the location of the aftershock sequence, indicates left-lateral slip on a N-S striking fault structure at 21 km depth, in conformity with the NE-SW extensional Shmin (orientation of minimum horizontal compressional stress) and the local fault pattern. On the other hand, the smaller earthquake on 22 January, with a shallower hypocenter at 16 km depth, seems to have occurred on a NE-SW striking fault plane sub-parallel to Shmin. Thus, energy release on a transfer fault connecting dominant rift-parallel structures might have resulted in a stress transfer that triggered the later ML 5.1 earthquake. Following Brune's model and using displacement spectra, we calculate the dynamic source parameters for the two events. The estimated source parameters for the 22 January 2013 and 1 June 2013 earthquakes are fault lengths of 470 and 830 m, stress drops of 1.40 and 2.13 MPa, and seismic moments of 5.47E+21 and 6.30E+22 dyn cm, corresponding to moment magnitudes of MW 3.8 and 4.6, respectively.
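The Brune-model bookkeeping can be sketched as follows, assuming the standard Hanks and Kanamori (1979) moment-magnitude relation; applied to the abstract's seismic moments it recovers moment magnitudes close to those reported (small differences can come from rounding or slightly different constants).

```python
import math

def moment_magnitude(M0_dyncm):
    """Hanks & Kanamori (1979): Mw = (2/3) log10(M0) - 10.7, M0 in dyn cm."""
    return (2.0 / 3.0) * math.log10(M0_dyncm) - 10.7

def brune_stress_drop(M0_dyncm, radius_cm):
    """Brune stress drop: delta_sigma = 7 M0 / (16 r^3),
    in dyn/cm^2 (1 MPa = 1e7 dyn/cm^2)."""
    return 7.0 * M0_dyncm / (16.0 * radius_cm ** 3)

Mw_jan = moment_magnitude(5.47e21)   # 22 January 2013 event
Mw_jun = moment_magnitude(6.30e22)   # 1 June 2013 event
```

The source radius entering the stress-drop formula is obtained in the paper from the corner frequency of the displacement spectrum; the function above only restates the algebraic relation.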
Estimation of splitting functions from Earth's normal mode spectra using the neighbourhood algorithm
NASA Astrophysics Data System (ADS)
Pachhai, Surya; Tkalčić, Hrvoje; Masters, Guy
2016-01-01
The inverse problem for Earth structure from normal mode data is strongly non-linear and can be inherently non-unique. Traditionally, the inversion is linearized by taking partial derivatives of the complex spectra with respect to the model parameters (i.e. structure coefficients), and solved in an iterative fashion. This method requires that the earthquake source model is known. However, the release of energy in large earthquakes used for the analysis of Earth's normal modes is not simple. A point source approximation is often inadequate, and a more complete account of energy release at the source is required. In addition, many earthquakes are required for the solution to be insensitive to the initial constraints and regularization. In contrast to an iterative approach, the autoregressive linear inversion technique conveniently avoids the need for earthquake source parameters, but it also requires a number of events to achieve full convergence when a single event does not excite all singlets well. To build on previous improvements, we develop a technique to estimate structure coefficients (and consequently, the splitting functions) using a derivative-free parameter search, known as the neighbourhood algorithm (NA). We implement an efficient forward method derived using the autoregression of receiver strips, and this allows us to search over a multiplicity of structure coefficients in a relatively short time. After demonstrating the feasibility of the use of NA in synthetic cases, we apply it to observations of the inner-core-sensitive mode 13S2. The splitting function of this mode is dominated by spherical harmonic degree 2 axisymmetric structure and is consistent with the results obtained from the autoregressive linear inversion. The sensitivity analysis of multiple events confirms the importance of the 1994 Bolivia earthquake. When this event is used in the analysis, as few as two events are sufficient to constrain the splitting functions of the 13S2 mode.
Apart from not requiring knowledge of the earthquake source, the newly developed technique provides an approximate uncertainty measure of the structure coefficients and allows us to control the type of structure solved for, for example to establish whether elastic structure is sufficient.
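A heavily simplified, derivative-free search in the spirit of the neighbourhood algorithm (keep the best models, resample in shrinking neighbourhoods around them) can be sketched as below. This toy is not Sambridge's actual NA (which uses Voronoi cells), and the two-parameter "forward problem" is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
true_c = np.array([0.3, -0.7])          # toy "structure coefficients"

def misfit(c):
    """Toy forward problem: quadratic misfit around the true coefficients."""
    return float(np.sum((c - true_c) ** 2))

pop = rng.uniform(-1, 1, size=(32, 2))  # initial random models
for it in range(40):
    # Keep the 8 best models of the current population
    best = pop[np.argsort([misfit(c) for c in pop])[:8]]
    # Resample in shrinking neighbourhoods around each surviving model
    scale = 0.5 * 0.8 ** it
    pop = np.vstack([b + rng.normal(0, scale, size=(4, 2)) for b in best])

best_c = min(pop, key=misfit)           # final best model
```

The appeal of such derivative-free searches, as in the abstract, is that no partial derivatives of the spectra with respect to the coefficients are ever needed.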
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Granat, Robert A.; Norton, Charles D.; Rundle, John B.; Pierce, Marlon E.; Fox, Geoffrey C.; McLeod, Dennis; Ludwig, Lisa Grant
2012-01-01
QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually, and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based and visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle, but important, features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system, and are accessible by users or various model applications. UAVSAR repeat-pass interferometry data products are added to the QuakeTables database, and are available through a browseable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables, or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models.
Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes, and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful as a data-quality tool, enabling the discovery of station anomalies and data processing and distribution errors. Improved visualization tools enable more efficient data exploration and understanding. Tools provide flexibility to science users for exploring data in new ways through download links, but also facilitate standard, intuitive, and routine uses for science users and end users such as emergency responders.
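A sketch of what consuming such a REST interface might look like on the client side; the JSON layout and field names below are hypothetical illustrations, not QuakeTables' actual schema or endpoints.

```python
import json

# Hypothetical REST-style JSON response of fault records, as a model
# application might retrieve from a fault database service.
payload = """
{"faults": [
  {"name": "ExampleFaultA", "slip_rate_mm_yr": 2.5},
  {"name": "ExampleFaultB", "slip_rate_mm_yr": 0.8}
]}
"""

records = json.loads(payload)["faults"]
# Pick the fastest-slipping fault as a simple downstream "model input"
fastest = max(records, key=lambda r: r["slip_rate_mm_yr"])
```

In the real system the payload would come from an HTTP GET against a REST route, with the parsed records feeding fault models or visualization tools.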
NASA Astrophysics Data System (ADS)
Maeda, T.; Furumura, T.; Noguchi, S.; Takemura, S.; Iwai, K.; Lee, S.; Sakai, S.; Shinohara, M.
2011-12-01
The fault rupture of the 2011 Tohoku (Mw 9.0) earthquake spread over approximately 550 km by 260 km with a long source rupture duration of ~200 s. For such a large earthquake with a complicated source rupture process, the radiation of seismic waves from the source rupture and the initiation of the tsunami by coseismic deformation are considered very complicated. In order to understand this combined process of seismic waves, coseismic deformation, and tsunami, we proposed a unified approach for total modeling of earthquake-induced phenomena in a single numerical scheme based on a finite-difference method simulation (Maeda and Furumura, 2011). This simulation model solves the equation of motion based on linear elastic theory, with equilibrium between quasi-static pressure and gravity in the water column. The tsunami height is obtained from the simulation as the vertical displacement of the ocean surface. In order to simulate seismic waves, ocean acoustics, coseismic deformation, and tsunami from the 2011 Tohoku earthquake, we assembled a high-resolution 3D heterogeneous subsurface structural model of northern Japan. The simulation area is 1200 km x 800 km and 120 km in depth, discretized with grid intervals of 1 km horizontally and 0.25 km vertically. We adopted the source-rupture model of Lee et al. (2011), obtained by joint inversion of teleseismic, near-field strong-motion, and coseismic deformation data. For conducting such a large-scale simulation, we fully parallelized our simulation code based on a domain-partitioning procedure, which achieved good speed-up on up to 8192 cores with a parallel efficiency of 99.839%. The simulation result clearly demonstrates the process by which seismic waves radiate from the complicated source rupture over the fault plane and propagate through the heterogeneous structure of northern Japan.
The generation of the tsunami from coseismic seafloor deformation and its subsequent propagation are also well reproduced. The simulation shows that a very large slip of up to 40 m at the shallow plate boundary near the trench pushes up the sea floor as the rupture propagates, and the elevated sea surface then gradually starts to propagate as a tsunami under gravity. The simulated vertical-component displacement waveform closely matches the ocean-bottom pressure gauge record installed just above the source fault area (Maeda et al., 2011). Strong reverberation of ocean-acoustic waves between the sea surface and the sea bottom, particularly near the Japan Trench, continues long after the source rupture ends. Accordingly, long wavetrains of high-frequency ocean-acoustic waves develop and overlap the later tsunami waveforms, as found in the observations.
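The quoted parallel efficiency is simply speedup divided by core count; a minimal sketch with an illustrative (not the paper's) single-core run time:

```python
def parallel_efficiency(t_serial, t_parallel, n_cores):
    """Parallel efficiency = (t_serial / t_parallel) / n_cores."""
    speedup = t_serial / t_parallel
    return speedup / n_cores

cores = 8192
eff = 0.99839                 # efficiency reported in the abstract
t1 = 1.0e6                    # hypothetical single-core wall time, seconds
# A run at this efficiency on 8192 cores would take:
t_par = t1 / (cores * eff)
```

Efficiencies this close to 1 indicate that the domain-partitioning scheme keeps communication overhead negligible relative to per-core computation.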
Coulomb stress change of crustal faults in Japan for 21 years, estimated from GNSS displacement
NASA Astrophysics Data System (ADS)
Nishimura, T.
2017-12-01
Coulomb stress is one of the simplest indices of how close a fault is to brittle failure (e.g., an earthquake). Many previous studies used the Coulomb stress change (ΔCFS) to evaluate whether a fault approaches failure, and successfully explained earthquakes triggered by previous earthquakes and volcanic sources. Most studies calculate ΔCFS using a model of a half-space medium with given rheological properties, boundary conditions, dislocation, etc. However, Ueda and Takahashi (2005) proposed calculating ΔCFS directly from surface displacement observed by GNSS. There are 6 independent components of the stress tensor in an isotropic elastic medium. On the surface of a half-space medium, 3 components must be zero because there is no traction on the surface. This means the stress change on the surface can be calculated from the surface strain change using Hooke's law. Although earthquakes do not occur at the surface, the stress change on the surface may approximate that at the depth of a shallow crustal earthquake (e.g., 10 km) if the source is far from the point at which we calculate the stress change. We tested this by comparing ΔCFS from surface displacement with that from elastic fault models for past earthquakes. We first estimate the strain change with the method of Shen et al. (1996, JGR) from surface displacement and then calculate ΔCFS for a targeted focal mechanism. Although ΔCFS in the vicinity of the source fault cannot be reproduced from surface displacement, surface displacement gives a good approximation of ΔCFS in regions more than 50 km from the source if the target mechanism is a vertical strike-slip fault. This suggests that GNSS observations can give useful information on recent changes in earthquake potential. We therefore calculate the temporal evolution of ΔCFS on active faults in southwest Japan from April 1996 using surface displacement at GNSS stations.
We used the active-fault parameters adopted for strong-motion evaluation by the Earthquake Research Committee. Using 0.4 for the effective frictional coefficient, ΔCFS increased at most active faults in the Kyushu region by up to 50 kPa over the 21 years. On the other hand, ΔCFS did not always increase at active faults in the Kinki region.
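The two ingredients described above can be sketched directly: surface stress from surface strain via Hooke's law under a traction-free surface (plane stress), and the Coulomb stress change with an effective friction coefficient. The elastic constants, strains, and stress changes below are illustrative, not the study's values.

```python
def surface_stress_from_strain(exx, eyy, exy, E=80e9, nu=0.25):
    """Hooke's law under plane stress (free surface: szz = sxz = syz = 0).
    exy is the tensor shear strain; stresses are returned in Pa."""
    f = E / (1.0 - nu ** 2)
    sxx = f * (exx + nu * eyy)
    syy = f * (eyy + nu * exx)
    G = E / (2.0 * (1.0 + nu))          # shear modulus
    sxy = 2.0 * G * exy
    return sxx, syy, sxy

def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """dCFS = d_tau + mu' * d_sigma_n, with d_sigma_n positive in
    extension (unclamping), using the effective friction coefficient."""
    return d_tau + mu_eff * d_sigma_n

# Illustrative strain changes resolved into surface stress, then dCFS
sxx, syy, sxy = surface_stress_from_strain(1e-7, -5e-8, 2e-8)
dcfs = coulomb_stress_change(d_tau=1.0e4, d_sigma_n=5.0e3)   # Pa
```

Resolving the surface stress tensor onto a given fault plane and slip direction yields the d_tau and d_sigma_n inputs for the targeted focal mechanism.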
NASA Astrophysics Data System (ADS)
Song, S. G.
2016-12-01
Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. At the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. At the second stage, they were validated against the latest version of empirical GMPEs, i.e., NGA-West2. The validation results show that the simulations produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.
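The omega-square spectral decay mentioned above can be sketched as follows; the corner frequency and moment are illustrative, and the model shown is the standard single-corner form.

```python
import numpy as np

def omega_square_spectrum(f, M0=1.0, fc=0.2):
    """Omega-square source spectrum: |u(f)| = M0 / (1 + (f/fc)^2).
    Flat at the moment level below the corner frequency fc,
    falling off as f^-2 well above it."""
    return M0 / (1.0 + (f / fc) ** 2)

f = np.array([0.01, 0.2, 2.0, 20.0])     # Hz
spec = omega_square_spectrum(f)

# Log-log slope between the two highest frequencies approaches -2
slope = (np.log10(spec[3]) - np.log10(spec[2])) / (np.log10(f[3]) - np.log10(f[2]))
```

Checking simulated rupture models against this decay is one way to confirm that a pseudo-dynamic source produces physically reasonable high-frequency radiation.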
NASA Astrophysics Data System (ADS)
Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.
2014-12-01
Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault, taken from literature sources. Our study applies this model to the Taipei metropolitan area, with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large-magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77%, and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake: the chance of one occurring within the next 20 years is estimated at 3.61%, within 79 years at 13.54%, and within 300 years at 42.45%. The 79-year period represents the average lifespan of the Taiwan population. For comparison, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is comparable to the risk of suffering from heart disease or other major health ailments.
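The bare Poisson recurrence formula underlying such estimates can be sketched as below; note that the paper's revised model adds best-estimate parameter distributions, so its published numbers need not match this simple formula.

```python
import math

def poisson_prob(t_years, return_period_years):
    """P(at least one event in t years) = 1 - exp(-t / T),
    for a Poisson process with mean return period T."""
    return 1.0 - math.exp(-t_years / return_period_years)

# Illustrative evaluation with a 543-year return period
p20 = poisson_prob(20, 543)     # ~3.6%
p79 = poisson_prob(79, 543)     # ~13.5%
p300 = poisson_prob(300, 543)   # ~42.4%
```

The key property is that the probability saturates toward 1 as the window grows, rather than scaling linearly with time.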
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe are key to assessing risk more effectively. Through consortium driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related, but independently managed regional projects SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. 
The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic Source Models • Ground Motion (Attenuation) Models • Physical Exposure Models • Physical Vulnerability Models • Composite Index Models (social vulnerability, resilience, indirect loss) • Repository of national hazard models • Uniform global hazard model. Armed with these tools and databases, stakeholders worldwide will be able to calculate, visualise and investigate earthquake risk, capture new data, and share their findings for joint learning. Earthquake hazard information can then be combined with data on exposure (buildings, population) and on their vulnerability, for risk assessment around the globe. Furthermore, for a truly integrated view of seismic risk, users will be able to add social vulnerability and resilience indices and estimate the costs and benefits of different risk management measures. Having finished its first five-year Work Program at the end of 2013, GEM has entered its second five-year Work Program, 2014-2018. Beyond maintaining and enhancing the products developed in Work Program 1, the second phase will have a stronger focus on regional hazard and risk activities, and on seeing GEM products used for risk assessment and risk management practice at regional, national and local scales.
Furthermore, GEM intends to partner with similar initiatives underway for other natural perils, which together are needed to provide the advanced risk assessment methods, tools and data that underpin global disaster risk reduction efforts under the Hyogo Framework for Action #2, to be launched in Sendai, Japan, in spring 2015.
Toward regional corrections of long period CMT inversions using InSAR
NASA Astrophysics Data System (ADS)
Shakibay Senobari, N.; Funning, G.; Ferreira, A. M.
2017-12-01
One of InSAR's main strengths, relative to other methods of studying earthquakes, is its ability to find the accurate location of the best point source (or 'centroid') for an earthquake. While InSAR data have great advantages for the study of shallow earthquakes, the number of earthquakes for which InSAR data exist is small compared with the number of earthquakes recorded seismically. And although improvements to SAR satellite constellations have enhanced the use of InSAR data during earthquake response, post-event data still have a latency on the order of days. Earthquake centroid inversion methods using long-period seismic data (e.g. the Global CMT method), on the other hand, are fast but include errors caused by inaccuracies in both the Earth velocity model and the wave propagation assumptions (e.g. Hjörleifsdóttir and Ekström, 2010; Ferreira and Woodhouse, 2006). Here we demonstrate a method that combines the strengths of both approaches: calculating regional travel-time corrections for long-period waveforms using accurate centroid locations from InSAR, then applying these corrections to other events that occur in the same region. Our method is based on the observation that synthetic seismograms produced from InSAR source models and locations match the data very well except for some phase shifts (travel-time biases) between the two waveforms, likely corresponding to inaccuracies in Earth velocity models (Weston et al., 2014). Our previous work shows that adding such phase shifts to the Green's functions can improve the accuracy of long-period seismic CMT inversions by reducing trade-offs between the moment tensor components and the centroid location (e.g. Shakibay Senobari et al., AGU Fall Meeting 2015). Preliminary work on several pairs of neighboring events (e.g. Landers-Hector Mine, the 2000 South Iceland earthquake sequences) shows consistent azimuthal patterns of these phase shifts for nearby events at common stations.
These phase-shift patterns strongly suggest that it is possible to determine regional corrections for the source regions of these events. The aim of this project is to perform a full CMT inversion using the phase-shift corrections calculated from nearby events, to assess the improvement in CMT locations and solutions. We will demonstrate our method on the five M 6 events that occurred in central Italy between 1997 and 2016.
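In the simplest case, the phase shift between a synthetic and an observed long-period waveform can be measured as a cross-correlation lag. The sketch below illustrates that idea only; the authors' actual measurement procedure may differ (e.g. frequency-dependent phase measurements), and all waveforms and numbers here are synthetic.

```python
import numpy as np

def time_shift(synthetic, observed, dt):
    """Estimate the time shift (s) that best aligns a synthetic
    seismogram with the observed one via cross-correlation.
    Positive shift means the observed waveform arrives later."""
    n = len(synthetic)
    # full cross-correlation; output index n-1 corresponds to zero lag
    xcorr = np.correlate(observed, synthetic, mode="full")
    lag = int(np.argmax(xcorr)) - (n - 1)
    return lag * dt

# toy example: a Gaussian-derivative pulse delayed by 0.5 s
dt = 0.05
t = np.arange(0.0, 20.0, dt)
pulse = lambda t0: (1 - ((t - t0) / 0.5) ** 2) * np.exp(-((t - t0) / 0.5) ** 2)
syn = pulse(8.0)
obs = pulse(8.5)            # "observed" arrives 0.5 s later
shift = time_shift(syn, obs, dt)   # recovers ~0.5 s
```

A shift measured this way for one InSAR-constrained event could then be applied as a correction to the Green's functions of a neighboring event recorded at the same station.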
Investigation of Pre-Earthquake Ionospheric Disturbances by 3D Tomographic Analysis
NASA Astrophysics Data System (ADS)
Yagmur, M.
2016-12-01
Ionospheric variations before earthquakes are widely discussed phenomena in ionospheric studies, and clarifying their source and mechanism is highly important for earthquake forecasting. Understanding the mechanical and physical processes behind pre-seismic ionospheric anomalies, which might even be related to Lithosphere-Atmosphere-Ionosphere-Magnetosphere coupling, requires both statistical and 3D modeling analyses. For this purpose, we first investigated the relation between ionospheric TEC anomalies and potential source mechanisms such as space weather activity and lithospheric phenomena like positive surface electric charges. To distinguish their effects on ionospheric TEC, we focused on pre-seismically active days. We then analyzed statistical data for 54 earthquakes of M ≥ 6 between 2000 and 2013, as well as the 2011 Tohoku and 2016 Kumamoto earthquakes in Japan. By comparing TEC anomalies with solar activity via the Dst index, we found 28 events that might be related to earthquake activity. Following the statistical analysis, we also investigated the lithospheric effect on TEC changes on selected days. Among those days, we chose the 2011 Tohoku and 2016 Kumamoto earthquakes as case studies, producing 3D reconstructed images with a 3D tomography technique based on neural networks. Keywords: Earthquake, 3D Ionospheric Tomography, Positive and Negative Anomaly, Geomagnetic Storm, Lithosphere
NASA Astrophysics Data System (ADS)
Lay, T.; Ammon, C. J.
2010-12-01
An unusually large number of widely distributed great earthquakes have occurred in the past six years, with extensive data sets of teleseismic broadband seismic recordings being available in near-real time for each event. Numerous research groups have implemented finite-fault inversions that utilize the rapidly accessible teleseismic recordings, and slip models are regularly determined and posted on websites for all major events. The source inversion validation project has already demonstrated that for events of all sizes there is often significant variability in models for a given earthquake. Some of these differences can be attributed to variations in data sets and procedures used for including signals with very different bandwidth and signal characteristics into joint inversions. Some differences can also be attributed to choice of velocity structure and data weighting. However, our experience is that some of the primary causes of solution variability involve rupture model parameterization and imposed kinematic constraints such as rupture velocity and subfault source time function description. In some cases it is viable to rapidly perform separate procedures such as teleseismic array back-projection or surface wave directivity analysis to reduce the uncertainties associated with rupture velocity, and it is possible to explore a range of subfault source parameterizations to place some constraints on which model features are robust. In general, many such tests are performed, but not fully described, with single model solutions being posted or published, with limited insight into solution confidence being conveyed. Using signals from recent great earthquakes in the Kuril Islands, Solomon Islands, Peru, Chile and Samoa, we explore issues of uncertainty and robustness of solutions that can be rapidly obtained by inversion of teleseismic signals. Formalizing uncertainty estimates remains a formidable undertaking and some aspects of that challenge will be addressed.
Magnitude, moment, and measurement: The seismic mechanism controversy and its resolution.
Miyake, Teru
This paper examines the history of two related problems concerning earthquakes, and the way in which a theoretical advance was involved in their resolution. The first problem is the development of a physical, as opposed to empirical, scale for measuring the size of earthquakes. The second problem is that of understanding what happens at the source of an earthquake. There was a controversy about what the proper model for the seismic source mechanism is, which was finally resolved through advances in the theory of elastic dislocations. These two problems are linked, because the development of a physically-based magnitude scale requires an understanding of what goes on at the seismic source. I will show how the theoretical advances allowed seismologists to re-frame the questions they were trying to answer, so that the data they gathered could be brought to bear on the problem of seismic sources in new ways. Copyright © 2017 Elsevier Ltd. All rights reserved.
Research on Collection of Earthquake Disaster Information from the Crowd
NASA Astrophysics Data System (ADS)
Nian, Z.
2017-12-01
In China, assessment of earthquake disaster information is mainly based on inversion of the seismic source mechanism and a pre-calculated population data model; the actual information about an earthquake disaster is usually collected through government departments, and both the accuracy and the speed of this process need to be improved. In a massive earthquake like the recent one in Mexico, the telecommunications infrastructure on the ground was damaged, the quake zone was difficult to observe by satellites and aircraft in bad weather, and only a little information was sent out through another country's maritime satellite. The timely and effective organization of disaster relief was thus seriously affected. Now that Chinese communication satellites are in orbit, people no longer rely solely on ground telecom base stations to communicate with the outside world, open web pages, log in to social networking sites, release information, and transmit images and videos. This paper establishes an earthquake information collection system in which the public can participate. Through popular social platforms and other information sources, the public can take part in the collection of earthquake information and supply information from the quake zone, including photos and video, and especially material captured by unmanned aerial vehicles (UAVs) after an earthquake; the public can use computers, portable terminals, or mobile text messages to participate. In the system, the information is divided into basic earthquake zone information, earthquake disaster reduction information, earthquake site information, post-disaster reconstruction information, etc., which are processed and stored in a database.
Data quality is analyzed against multiple information sources and checked against local public feedback, so as to supplement the data collected by government departments in a timely way and to calibrate simulation results, which will better guide disaster relief scheduling and post-disaster reconstruction. In the future, we will work to raise public awareness, cultivate a habit of public participation, and improve the quality of the data the public supplies.
Modeling Explosion Induced Aftershocks
NASA Astrophysics Data System (ADS)
Kroll, K.; Ford, S. R.; Pitarka, A.; Walter, W. R.; Richards-Dinger, K. B.
2017-12-01
Many traditional earthquake-explosion discrimination tools are based on properties of the seismic waveform or their spectral components. Common discrimination methods include estimates of body wave amplitude ratios, surface wave magnitude scaling, moment tensor characteristics, and depth. Such methods are limited by station coverage and noise. Ford and Walter (2010) proposed an alternative discrimination method based on using properties of aftershock sequences as a means of earthquake-explosion differentiation. Previous studies have shown that explosion sources produce fewer aftershocks, generally smaller in magnitude, than similarly sized earthquake sources (Jarpe et al., 1994; Ford and Walter, 2010). It has also been suggested that explosion-induced aftershocks have smaller Gutenberg-Richter b-values (Ryall and Savage, 1969) and that their rates decay faster than a typical Omori-like sequence (Gross, 1996). To discern whether these observations are generally true of explosions or are related to specific site conditions (e.g. explosion proximity to active faults, tectonic setting, crustal stress magnitudes) would require a thorough global analysis. Such a study, however, is hindered both by the uneven distribution of explosion sources and by the limited availability of global seismicity data. Here, we employ two methods to test the efficacy of explosions at triggering aftershocks under a variety of physical conditions. First, we use the earthquake rate equations of Dieterich (1994) to compute the rate of aftershocks related to an explosion source, assuming a simple spring-slider model. We compare seismicity rates computed with these analytical solutions to those produced by the 3D, multi-cycle earthquake simulator RSQSim, and explore the relationship between geological conditions and the characteristics of the resulting explosion-induced aftershock sequence.
We also test the hypothesis that aftershock generation depends on the frequency content of the passing dynamic seismic waves, as suggested by Parsons and Velasco (2009). Lastly, we compare all results for explosion-induced aftershocks with aftershocks generated by similarly sized earthquake sources. Prepared by LLNL under Contract DE-AC52-07NA27344.
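The Dieterich (1994) rate equations referred to above predict how seismicity responds to a sudden stress step such as an explosion: the rate jumps immediately after the step and then relaxes back to the background level over a characteristic time. The sketch below evaluates the standard closed-form rate history; the parameter values are illustrative placeholders, not values from this study.

```python
import numpy as np

def dieterich_rate(t, r_bg, dtau, a_sigma, t_a):
    """Seismicity rate at time t after a sudden shear-stress step dtau
    (Dieterich, 1994): r_bg is the background rate, a_sigma the
    rate-and-state constitutive parameter a times normal stress, and
    t_a the aftershock relaxation time (same units as t)."""
    return r_bg / ((np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

# illustrative values only: background 1 event/day, 1 MPa step,
# a*sigma = 0.2 MPa, 100-day relaxation time
r_bg, dtau, a_sigma, t_a = 1.0, 1.0, 0.2, 100.0
t = np.array([0.0, 1.0, 10.0, 1000.0])          # days after the step
rates = dieterich_rate(t, r_bg, dtau, a_sigma, t_a)
# rate jumps to r_bg * exp(dtau / a_sigma) at t = 0,
# then decays monotonically back toward r_bg
```

At intermediate times this solution decays approximately as 1/t, i.e. Omori-like, which is what makes it a natural analytical benchmark against simulator output such as RSQSim's.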
Harp, Edwin L.; Jibson, Randall W.; Dart, Richard L.; Margottini, Claudio; Canuti, Paolo; Sassa, Kyoji
2013-01-01
The MW 7.0, 12 January 2010, Haiti earthquake triggered more than 7,000 landslides in the mountainous terrain south of Port-au-Prince over an area that extends approximately 50 km to the east and west from the epicenter and to the southern coast. Most of the triggered landslides were rock and soil slides from 25°–65° slopes within heavily fractured limestone and deeply weathered basalt and basaltic breccia. Landslide volumes ranged from tens of cubic meters to several thousand cubic meters. Rock slides in limestone typically were 2–5 m thick; slides within soils and weathered basalt typically were less than 1 m thick. Twenty to thirty larger landslides having volumes greater than 10,000 m3 were triggered by the earthquake; these included block slides and rotational slumps in limestone bedrock. Only a few landslides larger than 5,000 m3 occurred in the weathered basalt. The distribution of landslides is asymmetric with respect to the fault source and epicenter. Relatively few landslides were triggered north of the fault source on the hanging wall. The densest landslide concentrations lie south of the fault source and the Enriquillo-Plantain-Garden fault zone on the footwall. Numerous landslides also occurred along the south coast west of Jacmél. This asymmetric distribution of landsliding with respect to the fault source is unusual given the modeled displacement of the fault source as mainly thrust motion to the south on a plane dipping to the north at approximately 55°; landslide concentrations in other documented thrust earthquakes generally have been greatest on the hanging wall. This apparent inconsistency of the landslide distribution with respect to the fault model remains poorly understood given the lack of any strong-motion instruments within Haiti during the earthquake.
NASA Astrophysics Data System (ADS)
Kausel, Edgar; Campos, Jaime
1992-08-01
The only known great (Ms = 8) intermediate-depth earthquake localized downdip of the main thrust zone of the Chilean subduction zone occurred landward of Antofagasta on 9 December 1950. In this paper we determine the source parameters and rupture process of this shock by modeling long-period body waves. The source mechanism corresponds to a downdip tensional intraplate event rupturing along a nearly vertical plane, with a seismic moment of M0 = 1 × 10^28 dyn cm, strike 350°, dip 88°, slip 270°, Mw = 7.9 and a stress drop of about 100 bar. The source time function consists of two subevents, the second being responsible for 70% of the total moment release. The unusually large magnitude (Ms = 8) of this intermediate-depth event suggests a rupture through the entire lithosphere. The spatial and temporal stress regime in this region is discussed. The simplest interpretation suggests that a large thrust earthquake should follow the 1950 tensional shock. Considering that the historical record of the region does not show large earthquakes, a 'slow' earthquake can be postulated as an alternative mechanism to unload the thrust zone. A weakly coupled subduction zone (within an otherwise strongly coupled region, as evidenced by great earthquakes to the north and south) or the existence of creep are not consistent with the occurrence of a large tensional earthquake in the subducting lithosphere downdip of the thrust zone. The study of focal mechanisms of outer rise earthquakes would add information that would help infer the present state of stress in the thrust region.
Toward seismic source imaging using seismo-ionospheric data
NASA Astrophysics Data System (ADS)
Rolland, L.; Larmat, C. S.; Mikesell, D.; Sladen, A.; Khelfi, K.; Astafyeva, E.; Lognonne, P. H.
2014-12-01
The worldwide coverage offered by global navigation satellite systems (GNSS) such as GPS, GLONASS or Galileo allows seismological measurements of a new kind. GNSS-derived total electron content (TEC) measurements can be especially useful for imaging seismically active zones that are not covered by conventional instruments. For instance, it has been shown that the dense Japanese GPS network GEONET was able to image the ionospheric response to the initial coseismic sea-surface motion induced by the great Mw 9.0 2011 Tohoku-Oki earthquake less than 10 minutes after rupture initiation (Astafyeva et al., 2013). Earthquakes of lower magnitude, down to about 6.5, can also induce measurable ionospheric perturbations when GNSS stations are located less than 250 km from the epicenter. In order to make use of these new data, ionospheric seismology needs accurate forward models so that we can invert for quantitative seismic source parameters. We will present our current understanding of the coupling mechanisms between the solid Earth, the ocean, the atmosphere and the ionosphere. We will also present the state of the art in modeling coseismic ionospheric disturbances using acoustic ray theory and a new 3D modeling method based on the Spectral Element Method (SEM). This latter numerical tool will allow us to incorporate lateral variations in solid Earth properties, the bathymetry and the atmosphere, as well as realistic seismic source parameters. Furthermore, seismo-acoustic waves propagate in the atmosphere at a much slower speed (from 0.3 to ~1 km/s) than seismic waves in the solid Earth. We are exploring the application of back-projection and time-reversal methods to TEC observations in order to retrieve the time and space characteristics of the acoustic emission in the seismic source area. We will first show modeling and inversion results with synthetic data.
Finally, we will illustrate the imaging capability of our approach with, among other possible examples, the 2011 Mw 9.0 Tohoku-Oki earthquake, Japan, the 2012 Mw 7.8 Haida Gwaii earthquake, Canada and the 2011 Mw 7.1 Van earthquake, Eastern Turkey.
Evaluation of earthquake potential in China
NASA Astrophysics Data System (ADS)
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times, estimating moment magnitudes for some events using regression relationships derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimate, I adopted the seismic source zones used in the published Global Seismic Hazard Assessment Project (GSHAP) model; the zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates.
The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rates, another potential model was constructed and tested; for it, I derived the upper magnitude limit from the special catalog and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. Taking the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
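The three-parameter magnitude distribution described above (a-value, b-value, corner magnitude) is commonly written as a tapered Gutenberg-Richter law: a power law in seismic moment with an exponential roll-off above the corner magnitude. The sketch below shows one standard form of that law; the parameter values are illustrative, not the values estimated in this study.

```python
import numpy as np

def moment_from_mag(m):
    """Scalar seismic moment (N*m) from moment magnitude
    (Hanks-Kanamori convention)."""
    return 10.0 ** (1.5 * m + 9.05)

def tapered_gr_rate(m, a_rate, b, m_corner, m_min=5.4):
    """Cumulative rate of events with magnitude >= m: a
    Gutenberg-Richter power law in moment, tapered exponentially
    above the corner magnitude.  a_rate is the total rate at m_min."""
    beta = (2.0 / 3.0) * b                     # moment-space exponent
    M = moment_from_mag(m)
    M_min = moment_from_mag(m_min)
    M_c = moment_from_mag(m_corner)
    return a_rate * (M / M_min) ** (-beta) * np.exp((M_min - M) / M_c)

m = np.arange(5.4, 8.6, 0.2)
rates = tapered_gr_rate(m, a_rate=10.0, b=1.0, m_corner=8.0)
# rates follow a b = 1 power law at moderate magnitudes,
# then roll off sharply above the corner magnitude
```

With b fixed regionally, only a_rate (from smoothed seismicity, slip rates, or strain rates) and m_corner remain to be estimated, which is what makes this parameterization convenient for comparing the three models.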
NASA Astrophysics Data System (ADS)
Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser
2018-03-01
The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquakes for several centuries. A possible explanation for this behavior is that the segment is currently locked, accumulating energy to generate possible great future earthquakes. Under this assumption, a hypothetical rupture area is considered in the western Makran to define different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, the 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and the 2006 Kuril Islands Mw 8.3 events, are scaled onto the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes; however, its effect diminishes from local tsunamis to regional and distant ones. Among all scenarios considered for the western Makran, only an earthquake similar to the 2011 Tohoku-Oki event can reproduce a significant far-field tsunami, and this is taken as the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area; for scenarios with similar slip patterns, the mean slip controls their relative strength. Our conclusions also indicate that along the entire Makran coast, the southeastern coast of Iran is the area most vulnerable to tsunami hazard.
Research on the spatial analysis method of seismic hazard for island
NASA Astrophysics Data System (ADS)
Jia, Jing; Jiang, Jitong; Zheng, Qiuhong; Gao, Huiying
2017-05-01
Seismic hazard analysis (SHA) is a key component of earthquake disaster prevention for island engineering: microscopically, its results provide parameters for seismic design; macroscopically, it is requisite work for the earthquake and comprehensive disaster prevention components of island conservation planning, in the exploitation and construction of both inhabited and uninhabited islands. Existing seismic hazard analysis methods are compared in terms of their application, and their applicability and limitations for islands are analyzed. A specialized spatial analysis method of seismic hazard for islands (SAMSHI) is then proposed to support further work on earthquake disaster prevention planning, based on spatial analysis tools in GIS and a fuzzy comprehensive evaluation model. The basic spatial database of SAMSHI includes fault data, historical earthquake records, geological data and Bouguer gravity anomaly data, which serve as the data sources for the 11 indices of the fuzzy comprehensive evaluation model; these indices are calculated by a spatial analysis model constructed on ArcGIS's ModelBuilder platform.
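The core of a fuzzy comprehensive evaluation model of this kind is a weighted combination of index memberships in hazard grades. The sketch below shows the basic arithmetic with a hypothetical three-index, three-grade setup; the actual SAMSHI model uses 11 indices, and the weights and membership values below are placeholders, not the paper's calibrated values.

```python
import numpy as np

# hypothetical index weights (e.g. faults, historical earthquakes,
# Bouguer gravity anomaly), summing to 1
weights = np.array([0.40, 0.35, 0.25])

# membership of each index in the hazard grades (low, medium, high);
# each row sums to 1
R = np.array([[0.1, 0.3, 0.6],
              [0.2, 0.5, 0.3],
              [0.5, 0.3, 0.2]])

B = weights @ R                     # composite membership vector
grades = ["low", "medium", "high"]
verdict = grades[int(np.argmax(B))] # maximum-membership principle
```

In the full method each index value for an island cell would first be converted to a membership row via membership functions computed in the GIS spatial analysis step; the matrix product and maximum-membership rule then yield the cell's hazard grade.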
Relocation of the 2012 Ms 7.0 Lushan Earthquake Aftershock Sequences and Its Implications
NASA Astrophysics Data System (ADS)
Fang, L.; Wu, J.; Sun, Z.; Su, J.; Du, W.
2013-12-01
At 08:02 on 20 April 2013 (Beijing time), an Ms 7.0 earthquake occurred in Lushan County, Sichuan Province. The Lushan earthquake is the second devastating earthquake to strike Sichuan Province after the 12 May 2008 Ms 8.0 Wenchuan earthquake: 193 people were killed, 25 went missing, and more than ten thousand were injured. Direct economic losses were estimated at more than 80 billion yuan (RMB). The Lushan earthquake occurred in the southern part of the Longmenshan fault zone; the distance between the epicenters of the Lushan and Wenchuan earthquakes is about 87 km. To maximize observations of the aftershock sequence and study the seismotectonic model, we deployed 35 temporary seismic stations around the source area. The earthquake was followed by a productive aftershock sequence: by the end of 20 July, more than 10,254 aftershocks had been recorded by the temporary seismic network, with magnitudes ranging from ML -0.5 to ML 5.6. We first located the aftershocks using Hypo2000 (Kevin, 2000) and refined the locations with HYPODD (Waldhauser & Ellsworth, 2000). The 1-D velocity model used in the relocation is modified from a deep seismic sounding profile near the Lushan earthquake (Wang et al., 2007), and the Vp/Vs ratio is set to 1.83 according to a receiver-function H-k study. A total of 8,129 events were relocated; the average location errors in the N-S, E-W and U-D directions are 0.30, 0.29 and 0.59 km, respectively. The relocations show that the aftershocks span approximately 35 km in length and 16 km in width, with focal depths predominantly between 10 and 20 km; a few earthquakes occurred in the shallow crust. Focal-depth sections across the source area show that the seismogenic fault dips to the northwest and behaves as a listric thrust fault.
The dip angle of the seismogenic fault is approximately 63° in the shallow crust, about 41° near the source of the mainshock, and about 17° at the bottom of the fault. The focal depths of 28 aftershocks with ML ≥ 4.0 were determined using the Himalaya Seismic Array and the sPn phase; the depths obtained from sPn are consistent with the HYPODD results, which also reveal a northwest-dipping fault. Since the earthquake did not cause significant surface rupture, the seismogenic structure of the Lushan earthquake remains controversial. On the basis of the aftershock relocations, we speculate that the seismogenic fault may be a blind thrust fault on the eastern side of the Shuangshi-Dachuan fault. The relocations also reveal a southeastward-dipping aftershock belt intersecting the seismogenic fault in a y-shaped pattern; we infer that it is a back-thrust fault of the kind that often appears in thrust fault systems, and that its seismic activity was triggered by the Lushan earthquake.
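The double-difference relocation used here (HYPODD) works by minimizing, for pairs of nearby events recorded at a common station, the difference between observed and model-predicted differential travel times, which cancels most of the shared path and velocity-model error. A minimal sketch of that residual, with made-up arrival times:

```python
# Double-difference residual for an event pair (i, j) at one station:
#   dr = (t_i - t_j)_obs - (t_i - t_j)_calc
# The common raypath outside the source region largely cancels,
# leaving a residual sensitive to the events' relative location.
def dd_residual(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
    return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

# toy arrival times in seconds (illustrative, not real data)
dr = dd_residual(12.34, 12.10, 12.30, 12.11)   # about 0.05 s
```

The relocation then adjusts the relative hypocenters of each pair to drive these residuals toward zero in a least-squares sense.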
Interactions and triggering in a 3D rate and state asperity model
NASA Astrophysics Data System (ADS)
Dublanchet, P.; Bernard, P.
2012-12-01
Precise relocation of micro-seismicity and careful analysis of seismic source parameters have progressively established the concept of seismic asperities embedded in a creeping fault segment as one of the most important features of a realistic representation of micro-seismic sources. Another important aspect of micro-seismic activity is the existence of robust empirical laws describing the temporal and magnitude distributions of earthquakes, such as the Omori law, the distribution of inter-event times, and the Gutenberg-Richter law. In this framework, this study aims at understanding the statistical properties of earthquakes by generating synthetic catalogs with a 3D, quasi-dynamic, continuous rate-and-state asperity model that takes into account a realistic geometry of asperities. Our approach contrasts with the ETAS models (Kagan and Knopoff, 1981) usually implemented to produce earthquake catalogs, in that the non-linearity observed in rock friction experiments (Dieterich, 1979) is fully taken into account through the rate-and-state friction law. Furthermore, our model differs from discrete fault models (Ziv and Cochard, 2006) because its continuity allows us to define realistic geometries and distributions of asperities by assembling sub-critical computational cells that always fail in a single event. Moreover, the model allows us to address the influence of barriers and of the distribution of asperities on event statistics. After recalling the main observations of asperities in the specific case of the Parkfield segment of the San Andreas Fault, we analyse the earthquake statistical properties computed for this area. We then present synthetic statistics obtained with our model that allow us to discuss the role of barriers in clustering and triggering phenomena among a population of sources.
It appears that an effective barrier size, which depends on the barrier's frictional strength, controls the presence or absence in the synthetic catalog of statistical laws similar to those observed for real earthquakes. As an application, we compare the synthetic statistics with the observed statistics at Parkfield in order to characterize a realistic frictional model of the Parkfield area. More generally, we obtain synthetic statistical properties in agreement with power-law decays whose exponents match observations at a global scale, showing that our mechanical model can provide new insights into earthquake interaction processes in general.
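The asperity/barrier distinction in rate-and-state models comes down to the sign of (a - b) in the Dieterich friction law: velocity-weakening patches (a < b) can nucleate stick-slip events, while velocity-strengthening regions (a > b) creep stably and act as barriers. The sketch below evaluates the steady-state friction law with illustrative parameter values (not those of the Parkfield model).

```python
import numpy as np

def mu_ss(V, mu0=0.6, a=0.010, b=0.015, V0=1e-6):
    """Steady-state rate-and-state friction coefficient
    (Dieterich, 1979):  mu_ss(V) = mu0 + (a - b) * ln(V / V0).
    With a < b (as here), friction weakens as slip rate increases,
    the condition for stick-slip on an asperity."""
    return mu0 + (a - b) * np.log(V / V0)

V = np.array([1e-9, 1e-6, 1e-3])   # slip rates in m/s
mu = mu_ss(V)                       # decreases with V because a < b
```

Swapping the signs (e.g. a = 0.015, b = 0.010) makes mu_ss increase with V, the velocity-strengthening behavior assigned to creeping barriers between asperities.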
NASA Astrophysics Data System (ADS)
Pulido Hernandez, N. E.; Dalguer Gudiel, L. A.; Aoi, S.
2009-12-01
The Iwate-Miyagi Nairiku earthquake, a reverse-faulting earthquake in the southern Iwate prefecture, Japan (2008/6/14), produced the largest peak ground acceleration recorded to date (4g) (Aoi et al. 2008) at West Ichinoseki (IWTH25), a KiK-net strong motion station of NIED. This station, which is equipped with surface and borehole accelerometers (GL-260 m), also recorded very high peak accelerations of up to 1g at the borehole level, despite being located at a rock site. By comparing spectrograms of the observed surface and borehole records at IWTH25, Pulido et al. (2008) identified two high-frequency (HF) ground motion events at 4.5 s and 6.3 s originating at the source, which likely gave rise to the extreme observed accelerations of 3.9g and 3.5g at IWTH25. To understand the generation mechanism of these HF events, we performed dynamic fault rupture modeling of the Iwate-Miyagi Nairiku earthquake using the Support Operator Rupture Dynamics (SORD) code (Ely et al., 2009). SORD solves the elastodynamic equation using a generalized finite difference method that can use meshes of arbitrary structure and is capable of handling geometries appropriate to thrust earthquakes. Our spontaneous dynamic rupture model of the Iwate-Miyagi Nairiku earthquake is governed by a simple slip-weakening friction law. The dynamic parameters (stress drop, strength excess and critical slip-weakening distance) are estimated following the procedure described in Pulido and Dalguer (2009) [PD09]. These parameters produce an earthquake rupture consistent with the final slip obtained by kinematic source inversion of near-source strong ground motion recordings. The dislocation model of this earthquake is characterized by a patch of large slip located ~7 km south of the hypocenter (Suzuki et al. 2009); our calculated stress drop follows a similar pattern.
Using the rupture times obtained from the dynamic model of the Iwate-Miyagi Nairiku earthquake, we estimated the rupture velocity as well as the distribution of rupture velocity changes across the fault plane, following the procedure proposed by PD09. Our results show that rupture velocity has strong variations concentrated in small patches within large slip areas (asperities). Using this dynamic model we performed a strong motion simulation at the IWTH25 borehole and found that the model is able to reproduce the two HF events observed in the strong motion data. Our preliminary results suggest that the extreme acceleration pulses were induced by two strong rupture-velocity acceleration events at the rupture front. References: Aoi, S., T. Kunugi, and H. Fujiwara, 2008, Science, 322, 727-730. Ely, G. P., S. M. Day, and J.-B. Minster (2009), Geophys. J. Int., 177(3), 1140-1150. Pulido, N., S. Aoi, and W. Suzuki (2008), AGU Fall meeting, S33C-02. Pulido, N., and L.A. Dalguer (2009). Estimation of the high-frequency radiation of the 2000 Tottori (Japan) earthquake based on a dynamic model of fault rupture: Application to the strong ground motion simulation, Bull. Seism. Soc. Am. 99(4), 2305-2322. Suzuki, W., S. Aoi, and H. Sekiguchi (2009), Bull. Seism. Soc. Am. (Accepted).
NASA Astrophysics Data System (ADS)
Ayele, Atalay; Midzi, Vunganai; Ateba, Bekoa; Mulabisana, Thifhelimbilu; Marimira, Kwangwari; Hlatywayo, Dumisani J.; Akpan, Ofonime; Amponsah, Paulina; Georges, Tuluka M.; Durrheim, Ray
2013-04-01
Large magnitude earthquakes have been observed in Sub-Saharan Africa in the recent past, such as the Machaze event of 2006 (Mw 7.0) in Mozambique and the 2009 Karonga earthquake (Mw 6.2) in Malawi. The December 13, 1910 earthquake (Ms = 7.3) in the Rukwa rift (Tanzania) is the largest of all instrumentally recorded events known to have occurred in East Africa. The overall earthquake hazard in the region is lower than in other earthquake-prone areas of the globe. However, the risk level is high enough for it to receive the attention of African governments and the donor community. The latest earthquake hazard map for sub-Saharan Africa was produced in 1999, and an update is long overdue as construction activity is booming all over sub-Saharan Africa. To this effect, regional seismologists are working together under the GEM (Global Earthquake Model) framework to improve incomplete, inhomogeneous and uncertain catalogues. The working group is also contributing to the UNESCO-IGCP (SIDA) 601 project and assessing all possible sources of data for the catalogue, as well as for the seismotectonic characteristics that will help to develop a reasonable hazard model for the region. Progress to date indicates that the region is more seismically active than previously thought. This demands a coordinated effort by regional experts to systematically compile all available information so as to mitigate earthquake risk in sub-Saharan Africa.
Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA
NASA Astrophysics Data System (ADS)
Lorito, S.
2013-05-01
The occurrence of several mega-thrust tsunamigenic earthquakes in the last decade, including but not limited to the 2004 Sumatra-Andaman, the 2010 Maule, and the 2011 Tohoku earthquakes, has been a dramatic reminder of the limitations in our capability of assessing earthquake and tsunami hazard and risk. However, increasingly high-quality geophysical observational networks have allowed the retrieval of more accurate models of the rupture process of mega-thrust earthquakes than ever before, paving the way for improved future hazard assessments. Probabilistic Tsunami Hazard Analysis (PTHA), in particular, is a less mature methodology than its seismic counterpart, PSHA. Recent research efforts of the worldwide tsunami science community have started to fill this gap and to define best practices that are progressively being employed in PTHA for different regions and coasts at threat. In the first part of my talk, I will briefly review some rupture models of recent mega-thrust earthquakes and highlight some of their surprising features, which likely result in larger error bars associated with PTHA results. More specifically, recent events of unexpected size at a given location, and with unexpected rupture process features, posed first-order open questions that prevent the definition of a heterogeneous rupture probability along a subduction zone, despite several recent promising results on the subduction zone seismic cycle. In the second part of the talk, I will dig deeper into a specific ongoing effort to improve PTHA methods, in particular as regards the determination of epistemic and aleatory uncertainties, and the computational feasibility of PTHA when the full assumed source variability is considered. Usually only logic trees are made explicit in PTHA studies, accounting for different possible assumptions on source zone properties and behavior.
The selection of the earthquakes to actually be modelled is then in general made on a qualitative basis or remains implicit, although methods such as event trees have been used in different applications. I will define a fairly general PTHA framework based on the mixed use of logic and event trees. I will first discuss a particular class of epistemic uncertainties, namely those related to the parametric characterization of faults in terms of geometry, kinematics, and assessment of activity rates. A systematic classification into six justification levels of the epistemic uncertainty related to the existence and behaviour of fault sources will be presented. Then, a particular branch of the logic tree is chosen in order to discuss the aleatory variability of earthquake parameters alone, represented with an event tree. Even so, PTHA based on numerical scenarios is too demanding a computational task, particularly when probabilistic inundation maps are needed. To reduce the computational burden without under-representing the source variability, the event tree is first constructed by densely (over-)sampling the earthquake parameter space, and the earthquakes are then filtered based on their associated offshore tsunami impact before inundation maps are calculated. I will describe this approach by means of a case study in the Mediterranean Sea, namely the PTHA for some locations on the Eastern Sicily and Southern Crete coasts due to potential subduction earthquakes occurring on the Hellenic Arc.
Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform
NASA Astrophysics Data System (ADS)
Wang, Y.; Ni, S.; Chen, W.
2012-12-01
The determination of earthquake source parameters is an essential problem in seismology. Accurate and timely determination of the earthquake parameters (such as moment, depth, and the strike, dip and rake of fault planes) is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, are likewise essential, as detailed kinematic analysis has become routine work for seismologists. Among these events, however, some behave unusually and intrigue seismologists: earthquakes that consist of two similar-sized sub-events occurring within a very short time interval, such as the mb 4.5 event of December 9, 2003 in Virginia. Studying such special events, including determining the source parameters of each sub-event, will be helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed together, making inversion difficult. For ordinary events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth using a grid search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct events. Because the simultaneous inversion of both sub-events makes the computation very time consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency.
Thanks to the advantages of multi-dimensional storage and processing on GPUs, we obtain excellent performance of the revised code on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the Virginia, USA events of December 9, 2003, we re-inverted the source parameters, and detailed analysis of regional waveforms indicates that the Virginia earthquake comprised two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, with no need for human intervention.
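The joint grid-search structure behind a two-sub-event CAP-style inversion can be sketched in a few lines. This is a hedged toy, not the actual MUL_CAP code: the "waveform" is a synthetic pulse whose amplitude depends on a made-up mechanism function, only strike is scanned (a full search would also scan dip, rake and depth), and the sub-event delay is held fixed.

```python
import numpy as np

def synth(strike, dip, rake, t, t0):
    """Toy 'waveform' for one sub-event: a mechanism-scaled pulse at t0
    (illustrative amplitude function, not real radiation-pattern physics)."""
    amp = np.sin(np.radians(strike)) * np.cos(np.radians(dip - rake))
    return amp * np.exp(-((t - t0) ** 2) / 0.5)

def misfit(obs, t, params1, params2, dt):
    """L2 misfit between the observed trace and the sum of two sub-events."""
    pred = synth(*params1, t, 0.0) + synth(*params2, t, dt)
    return np.sum((obs - pred) ** 2)

t = np.linspace(-5, 15, 400)
true1, true2, true_dt = (65, 32, 135), (80, 45, 90), 3.0
obs = synth(*true1, t, 0.0) + synth(*true2, t, true_dt)

# Coarse joint grid over both strikes; dip/rake/delay fixed at true values.
best = min(((misfit(obs, t, (s1, 32, 135), (s2, 45, 90), true_dt), s1, s2)
            for s1 in range(0, 180, 5) for s2 in range(0, 180, 5)))
print(best[1], best[2])   # strikes minimizing the joint misfit
```

The point of the sketch is the cost structure: every candidate pair of mechanisms requires one forward computation, which is why the joint search over two sub-events is so much more expensive than single-event CAP, and why a GPU implementation pays off.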
A Bayesian approach to earthquake source studies
NASA Astrophysics Data System (ADS)
Minson, Sarah
Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. 
The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
NASA Astrophysics Data System (ADS)
Fan, Wenyuan; McGuire, Jeffrey J.
2018-05-01
An earthquake rupture process can be kinematically described by its rupture velocity, duration and spatial extent. These key kinematic source parameters provide important constraints on earthquake physics and rupture dynamics; in particular, core questions in earthquake science can be addressed once these properties are well resolved for small earthquakes. However, these parameters are poorly understood for small earthquakes, often limited by the available datasets and methodologies. The IRIS Community Wavefield Experiment in Oklahoma deployed ~350 three-component nodal stations within 40 km2 for a month, offering an unprecedented opportunity to test new methodologies for resolving the finite source properties of small earthquakes at high resolution. In this study, we demonstrate the power of the nodal dataset to resolve the variations in the seismic wavefield over the focal sphere due to the finite source attributes of an M2 earthquake within the array. The dense coverage allows us to tightly constrain the rupture area using the second-moment method, even for such a small earthquake. The M2 earthquake was a strike-slip event that ruptured unilaterally toward the surface at 90 per cent of the local S-wave speed (2.93 km/s). The earthquake lasted ~0.019 s and ruptured Lc ~70 m by Wc ~45 m. With the resolved rupture area, the stress drop of the earthquake is estimated at 7.3 MPa for Mw 2.3. We demonstrate that the maximum and minimum bounds on rupture area are within a factor of two of each other, much lower than typical stress drop uncertainty, despite a suboptimal station distribution. The rupture properties suggest that there is little difference between the M2 Oklahoma earthquake and typical large earthquakes. The new three-component nodal systems have great potential for improving the resolution of studies of earthquake source properties.
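The quoted stress-drop estimate can be sanity-checked with back-of-the-envelope arithmetic. This is a hedged sketch: the study's exact second-moment shape factor is not reproduced; the standard Eshelby circular-crack formula is applied to an equivalent-area source, so only the order of magnitude should be expected to match the quoted 7.3 MPa.

```python
import math

Mw, Lc, Wc = 2.3, 70.0, 45.0            # values quoted in the abstract
M0 = 10 ** (1.5 * (Mw + 6.07))          # seismic moment, N*m
R = math.sqrt(Lc * Wc)                  # equivalent circular radius, m (assumed)
dsigma = (7.0 / 16.0) * M0 / R ** 3     # Eshelby circular-crack stress drop, Pa
print(f"M0 = {M0:.2e} N*m, stress drop = {dsigma / 1e6:.1f} MPa")
```

With these inputs the formula gives a stress drop of order 10 MPa, consistent with the 7.3 MPa reported from the full second-moment analysis.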
Inundation Mapping and Hazard Assessment of Tectonic and Landslide Tsunamis in Southeast Alaska
NASA Astrophysics Data System (ADS)
Suleimani, E.; Nicolsky, D.; Koehler, R. D., III
2014-12-01
The Alaska Earthquake Center conducts tsunami inundation mapping for coastal communities in Alaska, and is currently focused on the southeastern region and communities of Yakutat, Elfin Cove, Gustavus and Hoonah. This activity provides local emergency officials with tsunami hazard assessment, planning, and mitigation tools. At-risk communities are distributed along several segments of the Alaska coastline, each having a unique seismic history and potential tsunami hazard. Thus, a critical component of our project is accurate identification and characterization of potential tectonic and landslide tsunami sources. The primary tectonic element of Southeast Alaska is the Fairweather - Queen Charlotte fault system, which has ruptured in 5 large strike-slip earthquakes in the past 100 years. The 1958 "Lituya Bay" earthquake triggered a large landslide into Lituya Bay that generated a 540-m-high wave. The M7.7 Haida Gwaii earthquake of October 28, 2012 occurred along the same fault, but was associated with dominantly vertical motion, generating a local tsunami. Communities in Southeast Alaska are also vulnerable to hazards related to locally generated waves, due to proximity of communities to landslide-prone fjords and frequent earthquakes. The primary mechanisms for local tsunami generation are failure of steep rock slopes due to relaxation of internal stresses after deglaciation, and failure of thick unconsolidated sediments accumulated on underwater delta fronts at river mouths. We numerically model potential tsunami waves and inundation extent that may result from future hypothetical far- and near-field earthquakes and landslides. We perform simulations for each source scenario using the Alaska Tsunami Model, which is validated through a set of analytical benchmarks and tested against laboratory and field data. 
Results of numerical modeling combined with historical observations are compiled on inundation maps and used for site-specific tsunami hazard assessment by emergency planners.
Field Investigations and a Tsunami Modeling for the 1766 Marmara Sea Earthquake, Turkey
NASA Astrophysics Data System (ADS)
Aykurt Vardar, H.; Altinok, Y.; Alpar, B.; Unlu, S.; Yalciner, A. C.
2016-12-01
Turkey is located in one of the world's most hazardous earthquake zones. The northern branch of the North Anatolian fault beneath the Sea of Marmara, where the population is most concentrated, has been the most active fault branch at least since the late Pliocene. The Sea of Marmara region has been affected by many large tsunamigenic earthquakes; the most destructive are the 549, 553, 557, 740, 989, 1332, 1343, 1509, 1766, 1894, 1912 and 1999 events. To understand and determine the tsunami potential and its possible effects along the coasts of this inland sea, detailed documentary, geophysical and numerical modelling studies of past earthquakes and their associated tsunamis, whose effects are presently unknown, are needed. On the northern coast of the Sea of Marmara, the Kucukcekmece Lagoon has a high potential to trap and preserve tsunami deposits. Within the scope of this study, the lithological content, composition and sources of organic matter in the lagoon's bottom sediments were studied along a 4.63 m-long piston core recovered from the SE margin of the lagoon. The sedimentary composition and possible sources of the organic matter along the core were analysed, and the results were correlated with historical events on the basis of dating results. Finally, a tsunami scenario for the May 22nd, 1766 Marmara Sea earthquake was tested using the widely used tsunami simulation model NAMIDANCE. The results show that the candidate tsunami deposits at depths of 180-200 cm below the lagoon's bottom are related to the 1766 (May) earthquake. This work was supported by the Scientific Research Projects Coordination Unit of Istanbul University (Project 6384) and by the EU project TRANSFER for coring.
Source process and tectonic implication of the January 20, 2007 Odaesan earthquake, South Korea
NASA Astrophysics Data System (ADS)
Abdel-Fattah, Ali K.; Kim, K. Y.; Fnais, M. S.; Al-Amri, A. M.
2014-04-01
The source process of the 20 January 2007, Mw 4.5 Odaesan earthquake in South Korea is investigated in the low- and high-frequency bands, using velocity and acceleration waveform data recorded by the Korea Meteorological Administration Seismographic Network at distances of less than 70 km from the epicenter. Synthetic Green's functions are adopted for the low-frequency band of 0.1-0.3 Hz, using the wave-number integration technique and a one-dimensional velocity model beneath the epicentral area. An iterative grid search across strike, dip, rake, and the focal depth of rupture nucleation was performed to find the best-fit double-couple mechanism. To resolve the nodal-plane ambiguity, the spatiotemporal slip distribution on the fault surface was recovered using a non-negative least-squares algorithm for each set of grid-searched parameters. A focal depth of 10 km was determined through the grid search over depths in the range of 6-14 km. The best-fit double-couple mechanism obtained from the finite-source model indicates a vertical strike-slip faulting mechanism. The NW-trending fault plane gives a comparatively smaller root-mean-square (RMS) error than its auxiliary plane. At low frequencies the recovered slip pattern indicates a simple source process, with the event effectively acting as a point source. Three empirical Green's functions are adopted to investigate the source process in the high-frequency band. A set of slip models was recovered on both nodal planes of the focal mechanism with rupture velocities in the range of 2.0-4.0 km/s. Although there is only a small difference between the RMS errors produced by the two orthogonal nodal planes, the SW-dipping plane gives a smaller RMS error than its auxiliary plane.
The oblique slip pattern recovered around the hypocenter in the high-frequency analysis indicates a complex rupture scenario for such a moderate-sized earthquake, similar to those reported for large earthquakes.
Toward Building a New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China is the one generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to the present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to the locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data; the rest are assigned to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPEs) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonic domains. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplification, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and for business and land-use planning.
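Simulating synthetic catalogs from a tapered Gutenberg-Richter (TGR) law can be done exactly with a min-of-two-variables construction: for a Pareto variable with survivor (Mt/m)^beta and an independent Mt-shifted exponential with mean Mc, the survivor of their minimum is (Mt/m)^beta * exp(-(m - Mt)/Mc), which is the TGR (tapered Pareto) distribution. The parameter values below are illustrative, not the ones fitted for China.

```python
import numpy as np

rng = np.random.default_rng(42)
beta = 2.0 / 3.0                         # moment-domain exponent (b-value ~1.0)
Mw_min, Mw_corner = 4.0, 8.0             # illustrative threshold and corner Mw
Mt = 10 ** (1.5 * (Mw_min + 6.07))       # threshold moment, N*m
Mc = 10 ** (1.5 * (Mw_corner + 6.07))    # corner moment, N*m

n = 100_000
pareto = Mt * rng.random(n) ** (-1.0 / beta)   # pure GR (Pareto) moments
taper = Mt + rng.exponential(Mc, n)            # exponential taper variable
moments = np.minimum(pareto, taper)            # exact TGR sample
mags = (2.0 / 3.0) * np.log10(moments) - 6.07  # back to moment magnitude

print(f"fraction above corner Mw: {(mags > Mw_corner).mean():.5f}")
```

The taper makes events beyond the corner magnitude exponentially rare while leaving the small-magnitude power law untouched, which is exactly the behavior the corner-magnitude constraint from strain rate is meant to enforce.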
NASA Astrophysics Data System (ADS)
Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.
2017-12-01
Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, kinematic rupture models for the same earthquake often differ from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduced the uncertainty of the Green's function into tele-seismic waveform inversion and showed that a stable spatiotemporal distribution of slip-rate can be obtained using an empirical Bayesian scheme. One of the unsolved problems of the inversion arises from the modeling error originating from the uncertainty of the fault-model setting. The Green's function near the nodal plane of the focal mechanism is known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by modeling error originating from the uncertainty of the fault model. We propose a new method that accounts for complexity in the fault geometry by additionally solving for the focal mechanism at each spatial knot. Since a finite-source inversion becomes unstable as the flexibility of the model increases, we estimate a stable spatiotemporal distribution of focal mechanisms within the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. In contrast, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis that assumes faulting on a single flat plane.
These results show that the modeling error caused by simplifying the fault model is non-negligible in the tele-seismic waveform inversion of the 2013 Balochistan, Pakistan earthquake.
NASA Astrophysics Data System (ADS)
Okuwaki, R.; Yagi, Y.
2017-12-01
A seismic source model for the Mw 8.1 2017 Chiapas, Mexico, earthquake was constructed by kinematic waveform inversion using globally observed teleseismic waveforms, suggesting that the earthquake was a normal-faulting event on a steeply dipping plane, with the major slip concentrated around a relatively shallow depth of 28 km. The modeled rupture evolution showed unilateral, downdip propagation northwestward from the hypocenter, and the downdip width of the main rupture was restricted to less than 30 km below the slab interface, suggesting that the downdip extensional stresses due to the slab bending were the primary cause of the earthquake. The rupture front abruptly decelerated at the northwestern end of the main rupture where it intersected the subducting Tehuantepec Fracture Zone, suggesting that the fracture zone may have inhibited further rupture propagation.
Sharp increase in central Oklahoma seismicity 2009-2014 induced by massive wastewater injection
Keranen, Kathleen M.; Abers, Geoffrey A.; Weingarten, Matthew; Bekins, Barbara A.; Ge, Shemin
2014-01-01
Unconventional oil and gas production provides a rapidly growing energy source; however, high-producing states in the United States, such as Oklahoma, face sharply rising numbers of earthquakes. The subsurface pressure data required to unequivocally link earthquakes to injection are rarely accessible. Here we use seismicity and hydrogeological models to show that distant fluid migration from high-rate disposal wells in Oklahoma is likely responsible for the largest swarm. Earthquake hypocenters occur within the disposal formations and the upper basement, at 2-5 km depth. The modeled fluid pressure perturbation propagates throughout the same depth range and tracks earthquakes out to distances of 35 km, with a triggering threshold of ~0.07 MPa. Although thousands of disposal wells may operate aseismically, four of the highest-rate wells likely induced 20% of 2008-2013 central US seismicity.
NASA Astrophysics Data System (ADS)
Xiong, X.; Shan, B.; Zhou, Y. M.; Wei, S. J.; Li, Y. D.; Wang, R. J.; Zheng, Y.
2017-05-01
Myanmar is drawing rapidly increasing attention for its seismic hazard. The Sagaing Fault (SF), an active right-lateral strike-slip fault passing through Myanmar, has long been the source of serious seismic damage in the country. Seismic hazard assessment of this region is therefore of pivotal significance, taking into account the interaction and migration of earthquakes in time and space. We investigated a series of 10 earthquakes with M > 6.5 that have occurred along the SF since 1906. Coulomb failure stress modeling exhibits significant interactions among the earthquakes: after the 1906 earthquake, eight of the nine subsequent earthquakes occurred in zones where stress was enhanced by the preceding earthquakes, verifying that the earthquake-triggering hypothesis is applicable on the SF. Moreover, we identified three stress-enhanced seismic gaps on the central and southern SF, on which the seismic hazard is increased.
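The Coulomb failure stress change used in this kind of analysis reduces to a one-line formula, dCFS = d_tau + mu' * d_sigma_n, where d_tau is the shear stress change resolved in the receiver fault's slip direction, d_sigma_n the normal stress change (tension positive, i.e. unclamping), and mu' the effective friction coefficient. The numbers below are hypothetical; the sign convention is the substance.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """dCFS = d_tau + mu' * d_sigma_n (tension-positive normal stress);
    positive dCFS brings a receiver fault closer to failure."""
    return d_shear + mu_eff * d_normal

# A receiver fault that is both sheared toward failure and unclamped:
dcfs = coulomb_stress_change(d_shear=0.05, d_normal=0.02)   # MPa, illustrative
print(f"dCFS = {dcfs:.3f} MPa -> {'promoted' if dcfs > 0 else 'relaxed'}")
```

Values of a few hundredths of a MPa, like this example, are already comparable to commonly cited triggering thresholds, which is why even distant ruptures can measurably advance or retard failure on neighboring fault segments.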
USGS GNSS Applications to Earthquake Disaster Response and Hazard Mitigation
NASA Astrophysics Data System (ADS)
Hudnut, K. W.; Murray, J. R.; Minson, S. E.
2015-12-01
Rapid characterization of earthquake rupture is important during a disaster because it establishes which fault ruptured and the extent and amount of fault slip. These key parameters, in turn, can augment in situ seismic sensors for identifying disruption to lifelines as well as localized damage along the fault break. Differential GNSS station positioning, along with imagery differencing, are important methods for augmenting seismic sensors. During response to recent earthquakes (1989 Loma Prieta, 1992 Landers, 1994 Northridge, 1999 Hector Mine, 2010 El Mayor - Cucapah, 2012 Brawley Swarm and 2014 South Napa earthquakes), GNSS co-seismic and post-seismic observations proved to be essential for rapid earthquake source characterization. Often, we find that GNSS results indicate key aspects of the earthquake source that would not have been known in the absence of GNSS data. Seismic, geologic, and imagery data alone, without GNSS, would miss important details of the earthquake source. That is, GNSS results provide important additional insight into the earthquake source properties, which in turn help understand the relationship between shaking and damage patterns. GNSS also adds to understanding of the distribution of slip along strike and with depth on a fault, which can help determine possible lifeline damage due to fault offset, as well as the vertical deformation and tilt that are vitally important for gravitationally driven water systems. The GNSS processing work flow that took more than one week 25 years ago now takes less than one second. Formerly, portable receivers needed to be set up at a site, operated for many hours, then data retrieved, processed and modeled by a series of manual steps. The establishment of continuously telemetered, continuously operating high-rate GNSS stations and the robust automation of all aspects of data retrieval and processing, has led to sub-second overall system latency. 
Within the past few years, the final challenges of standardization and adaptation to the existing framework of the ShakeAlert earthquake early warning system have been met, such that real-time GNSS processing and input to ShakeAlert is now routine and in use. Ongoing adaptation and testing of algorithms remain the last step towards fully operational incorporation of GNSS into ShakeAlert by USGS and its partners.
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
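The linearity that makes moment-tensor inversion robust even for sparse data can be shown in a few lines: with fixed point-source Green's functions, the observed waveform samples are linear in the six independent moment-tensor components, d = Gm, so m follows from least squares. The kernel matrix below is random, a stand-in for real excitation kernels, purely to exhibit the algebraic structure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_mt = 40, 6
G = rng.normal(size=(n_data, n_mt))       # toy Green's-function kernels
m_true = np.array([1.0, -1.0, 0.0, 0.3, 0.0, 0.2])  # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + rng.normal(0, 0.01, n_data)        # noisy 'waveform' samples

# Linearized inversion: ordinary least squares for the moment tensor.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))
```

Even one well-recorded three-component station contributes several rows to G, which is why, as the abstract notes, a single high-quality station can sometimes discriminate the faulting type; the breakdown comes when the single-point-source assumption behind G itself is wrong.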
Probabilistic seismic hazard assessment for northern Southeast Asia
NASA Astrophysics Data System (ADS)
Chan, C. H.; Wang, Y.; Kosuwan, S.; Nguyen, M. L.; Shi, X.; Sieh, K.
2016-12-01
We assess seismic hazard for northern Southeast Asia by constructing an earthquake and fault database, conducting a series of ground-shaking scenarios, and proposing regional seismic hazard maps. Our earthquake database contains earthquake parameters from global and local seismic catalogues, including the ISC, ISC-GEM, and the global ANSS Comprehensive Catalogues, as well as those of the Seismological Bureau of the Thai Meteorological Department, Thailand, and the Institute of Geophysics, Vietnam Academy of Science and Technology, Vietnam. To harmonize earthquake parameters from the various catalogue sources, we remove duplicate events and unify magnitudes onto the same scale. Our active fault database includes active fault data from previous studies, e.g. the active fault parameters determined by Wang et al. (2014), the Department of Mineral Resources, Thailand, and the Institute of Geophysics, Vietnam Academy of Science and Technology, Vietnam. Based on parameters derived from analysis of the databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and elapsed time since the last event), we determined the earthquake recurrence models of the seismogenic sources. To evaluate ground shaking behaviour in different tectonic regimes, we conducted a series of tests matching the felt intensities of historical earthquakes to ground motions modelled with ground motion prediction equations (GMPEs). Incorporating the best-fitting GMPEs and site conditions, we assessed the probabilistic seismic hazard including site effects. The highest seismic hazard is in the region close to the Sagaing Fault, which cuts through major cities in central Myanmar. The northern segment of the Sunda megathrust, which could potentially generate an M8-class earthquake, brings significant hazard to the western coast of Myanmar and eastern Bangladesh.
In addition, we find a notable hazard level in northern Vietnam and at the border between Myanmar, Thailand, and Laos, due to a series of strike-slip faults that could potentially cause moderate to large earthquakes. Note that although much of the region has a low probability of damaging shaking, low-probability events have caused great destruction in SE Asia in recent years (e.g. the 2008 Wenchuan and 2015 Sabah earthquakes).
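The Gutenberg-Richter recurrence relationship used above (log10 N = a - b·M) can be fitted to a declustered catalogue in a few lines. The sketch below uses Aki's maximum-likelihood b-value estimator on a synthetic magnitude list; the completeness magnitude and the catalogue itself are illustrative assumptions, not values from the study.

```python
import math
import random

def gr_b_value(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= completeness m_c
    (continuous magnitudes; binned catalogues need Utsu's bin-width correction)."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - m_c)

def gr_a_value(mags, m_c, b):
    """a-value from the total count N(M >= m_c): log10 N = a - b*M."""
    n = sum(1 for x in mags if x >= m_c)
    return math.log10(n) + b * m_c

# Synthetic catalogue drawn from a b = 1 exponential distribution (illustrative).
random.seed(42)
m_c = 3.0
mags = [m_c + random.expovariate(math.log(10.0)) for _ in range(5000)]
b = gr_b_value(mags, m_c)
a = gr_a_value(mags, m_c, b)
print(f"b = {b:.2f}, a = {a:.2f}")
```

The recovered b-value should sit near the b = 1 used to generate the synthetic magnitudes.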
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitarka, A.
In this project we developed GEN_SRF4, a computer program for generating kinematic rupture models compatible with the SRF format, using the Irikura and Miyake (2011) asperity-based earthquake rupture model (IM2011 hereafter). IM2011, also known as Irikura's recipe, has been widely used to model and simulate ground motion from earthquakes in Japan. An essential part of the method is its kinematic rupture generation technique, which is based on a deterministic rupture asperity modeling approach. The simplicity of the source model and the efficiency of IM2011 at reproducing ground motion from earthquakes recorded in Japan make it attractive to developers and users of the Southern California Earthquake Center Broadband Platform (SCEC BB platform). Besides writing the code, the objective of our study was to test the transportability of IM2011 to broadband simulation methods used by the SCEC BB platform. Here we test it using the Graves and Pitarka (2010) method implemented in the platform. We performed broadband (0.1-10 Hz) ground motion simulations for a M6.7 scenario earthquake using rupture models produced with both GEN_SRF4 and the rupture generator of Graves and Pitarka (2016) (GP2016 hereafter). In the simulations we used the same Green's functions and the same approaches for calculating the low-frequency and high-frequency parts of the ground motion, respectively.
Initial rupture of earthquakes in the 1995 Ridgecrest, California sequence
Mori, J.; Kanamori, H.
1996-01-01
Close examination of the P waves from earthquakes ranging in size across several orders of magnitude shows that the shape of the initiation of the velocity waveforms is independent of the magnitude of the earthquake. A model in which earthquakes of all sizes have similar rupture initiation can explain the data. This suggests that it is difficult to estimate the eventual size of an earthquake from the initial portion of the waveform. Previously reported curvature seen in the beginning of some velocity waveforms can be largely explained as the effect of anelastic attenuation; thus there is little evidence for a departure from models of simple rupture initiation that grow dynamically from a small region. The results of this study indicate that any "precursory" radiation at seismic frequencies must emanate from a source region no larger than the equivalent of a M0.5 event (i.e., a characteristic length of ~10 m). The size of the nucleation region for magnitude 0 to 5 earthquakes thus is not resolvable with the standard seismic instrumentation deployed in California. Copyright 1996 by the American Geophysical Union.
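As a rough check on the ~10 m characteristic length quoted above, one can convert a magnitude 0.5 event to seismic moment via the standard Hanks-Kanamori relation and then to a circular-crack radius. The 3 MPa stress drop below is an assumed typical value, not a figure from the paper.

```python
import math

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

def crack_radius(m0, stress_drop):
    """Radius of a circular crack (Eshelby): M0 = (16/7) * delta_sigma * r**3."""
    return (7.0 * m0 / (16.0 * stress_drop)) ** (1.0 / 3.0)

m0 = moment_from_mw(0.5)      # moment of an M 0.5 event
r = crack_radius(m0, 3.0e6)   # assumed 3 MPa stress drop
print(f"M0 = {m0:.2e} N m, r = {r:.1f} m")
```

With these assumptions the radius indeed comes out close to the ~10 m scale cited in the abstract.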
Earthquake processes in the Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, region 1954-1959
NASA Astrophysics Data System (ADS)
Doser, Diane I.
1986-11-01
The 1954 Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, sequence produced the most extensive pattern of surface faults in the intermountain region in historic time. Five earthquakes of M>6.0 occurred during the first 6 months of the sequence, including the December 16, 1954, Fairview Peak (M = 7.1) and Dixie Valley (M = 6.8) earthquakes. Three 5.5≤M≤6.5 earthquakes occurred in the region in 1959, but none exhibited surface faulting. The results of the modeling suggest that the M>6.5 earthquakes of this sequence are complex events best fit by multiple source-time functions. Although the observed surface displacements for the July and August 1954 events showed only dip-slip motion, the fault plane solutions and waveform modeling suggest the earthquakes had significant components of right-lateral strike-slip motion (rakes of -135° to -145°). All of the earthquakes occurred along high-angle faults with dips of 40° to 70°. Seismic moments for individual subevents of the sequence range from 8.0 × 10^17 to 2.5 × 10^19 N m. Stress drops for the subevents, including the Fairview Peak subevents, were between 0.7 and 6.0 MPa.
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.
2015-07-01
This paper describes the current understanding of the interaction between geospheres through a complex set of physical and chemical processes under the influence of ionization. The sources of ionization include the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapons testing and accidents at nuclear power plants and radioactive waste storage sites, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.
NASA Astrophysics Data System (ADS)
Gunawan, I.; Cummins, P. R.; Ghasemi, H.; Suhardjono, S.
2012-12-01
Indonesia is very prone to natural disasters, especially earthquakes, due to its location in a tectonically active region. In September-October 2009 alone, intraslab and crustal earthquakes caused the deaths of thousands of people, severe infrastructure destruction, and considerable economic loss. Thus, both intraslab and crustal earthquakes are important sources of earthquake hazard in Indonesia. Analysis of response spectra for these intraslab and crustal earthquakes is needed to yield more detail about earthquake properties. For both types of earthquakes, we have analysed available Indonesian seismic waveform data to constrain source and path parameters, i.e., low-frequency spectral level, Q, and corner frequency, at reference stations that appear to be little influenced by site response. We have carried out these analyses for the main shocks as well as several aftershocks. We obtain corner frequencies that are reasonably consistent with the constant stress drop hypothesis. Using these results, we consider extracting information about site response from other stations of the Indonesian strong motion network that appear to be strongly affected by it. Such site response data, as well as earthquake source parameters, are important for assessing earthquake hazard in Indonesia.
Constraints on the rupture process of the 17 August 1999 Izmit earthquake
NASA Astrophysics Data System (ADS)
Bouin, M.-P.; Clévédé, E.; Bukchin, B.; Mostinski, A.; Patau, G.
2003-04-01
Kinematic and static models of the 17 August 1999 Izmit earthquake published in the literature differ considerably from one another. In order to extract the characteristic features of this event, we determine integral estimates of the geometry, source duration, and rupture propagation of this event. These estimates are given by the stress glut moments of total degree 2, obtained by inverting long-period surface wave (LPSW) amplitude spectra (Bukchin, 1995). We draw comparisons with the integral estimates deduced from kinematic models obtained by inversion of strong motion data and/or teleseismic body waves (Bouchon et al., 2002; Delouis et al., 2000; Yagi and Kikuchi, 2000; Sekiguchi and Iwata, 2002). While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strongly unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. Using a simple equivalent kinematic model, we reproduce the integral estimates of the rupture process by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the LPSW solution strongly suggests that: (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake; and (2) the rupture velocity decreased on this segment. We discuss how these results help explain the scatter among the source processes published for this earthquake.
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture this variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying nonparametric co-regionalization, proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and therefore enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of the true input correlation models in stochastic modeling after they are deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
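The full nonparametric co-regionalization procedure is beyond a short example, but the underlying step of drawing spatially correlated rupture parameters can be sketched as a Gaussian random field built by Cholesky factorisation of a covariance matrix. The exponential correlation model and its 10 km correlation length below are illustrative assumptions, not those of the study.

```python
import numpy as np

def correlated_field(n, dx, corr_len, seed=0):
    """Draw a 1-D Gaussian random field with exponential spatial correlation
    by Cholesky-factorising the covariance matrix and colouring white noise."""
    x = np.arange(n) * dx
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

# Example: slip perturbations along strike, 2 km spacing, 10 km correlation length.
field = correlated_field(n=100, dx=2.0, corr_len=10.0)
print(field[:5])
```

Neighbouring points of the resulting field are strongly correlated, which is the property the co-regionalization models control when generating rupture scenarios.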
NASA Astrophysics Data System (ADS)
Fujihara, S.; Korenaga, M.; Kawaji, K.; Akiyama, S.
2013-12-01
We compare and evaluate the nature of tsunami generation and seismic wave generation during the 2011 Tohoku-Oki earthquake (hereafter TOH11) in terms of two types of moment rate functions, inferred from finite source imaging of tsunami waveforms and seismic waveforms. Since the 1970s, the nature of "tsunami earthquakes" has been discussed in many studies (e.g. Kanamori, 1972; Kanamori and Kikuchi, 1993; Kikuchi and Kanamori, 1995; Ide et al., 1993; Satake, 1994), mostly based on analysis of seismic waveform data, in terms of the "slow" nature of tsunami earthquakes (e.g., the 1992 Nicaragua earthquake). Although TOH11 is not necessarily understood as a tsunami earthquake, it is one of the historical earthquakes that simultaneously generated large seismic waves and a tsunami. TOH11 is also one of the earthquakes observed by both the seismic and tsunami observation networks around the Japanese islands. Therefore, for the purpose of analyzing the nature of tsunami generation, we utilize tsunami waveform data as much as possible. In our previous studies of TOH11 (Fujihara et al., 2012a; Fujihara et al., 2012b), we inverted tsunami waveforms at GPS wave gauges of NOWPHAS to image the spatio-temporal slip distribution. The "temporal" nature of our tsunami source model is generally consistent with other tsunami source models (e.g., Satake et al., 2013). For seismic waveform inversion based on a 1-D structure, we inverted broadband seismograms at GSN stations using the teleseismic body-wave inversion scheme of Kikuchi and Kanamori (2003). Also, for seismic waveform inversion considering the inhomogeneous internal structure, we inverted strong motion seismograms at K-NET and KiK-net stations based on 3-D Green's functions (Fujihara et al., 2013a; Fujihara et al., 2013b).
The gross "temporal" nature of our seismic source models is generally consistent with other seismic source models (e.g., Yoshida et al., 2011; Ide et al., 2011; Yagi and Fukahata, 2011; Suzuki et al., 2011). The comparison of the two types of moment rate functions, inferred from finite source imaging of tsunami waveforms and seismic waveforms, suggests that there was a time period common to both seismic wave and tsunami generation, followed by a time period unique to tsunami generation. At this point, we think that comparing the absolute values of the moment rates between the tsunami and seismic waveform inversions is not very meaningful, because of the general ambiguity in the rigidity values of each subfault in the fault region (e.g., the rigidity of 30 GPa assumed by Yoshida et al. (2011)). Considering this, we also evaluated normalized moment rate functions; normalization does not change the general features of the two moment rate functions in terms of duration. Furthermore, the results suggest that the tsunami generation process took more time than the seismic wave generation process did. Tsunami can be generated even by "extra" motions resulting from the various abnormal mechanisms that have been suggested. These extra motions may account for tsunami generation on a larger scale than expected from the magnitude inferred from seismic ground motion, and for the longer duration of the tsunami generation process.
Estimation of ground motion for Bhuj (26 January 2001; Mw 7.6) and for future earthquakes in India
Singh, S.K.; Bansal, B.K.; Bhattacharya, S.N.; Pacheco, J.F.; Dattatrayam, R.S.; Ordaz, M.; Suresh, G.; ,; Hough, S.E.
2003-01-01
Only five moderate and large earthquakes (Mw ≥ 5.7) in India (three in the Indian shield region and two in the Himalayan arc region) have given rise to multiple strong ground-motion recordings. Near-source data are available for only two of these events. The Bhuj earthquake (Mw 7.6), which occurred in the shield region, gave rise to useful recordings at distances exceeding 550 km. Because of the scarcity of the data, we use the stochastic method to estimate ground motions. We assume that (1) S waves dominate at R < 100 km and Lg waves at R ≥ 100 km, (2) Q = 508f^0.48 is valid for the Indian shield as well as the Himalayan arc region, (3) the effective duration is given by fc^-1 + 0.05R, where fc is the corner frequency and R is the hypocentral distance in kilometers, and (4) the acceleration spectra are sharply cut off beyond 35 Hz. We use two finite-source stochastic models. One is an approximate model that reduces to the ω²-source model at distances greater than about twice the source dimension. This model has the advantage that the ground motion is controlled by the familiar stress parameter, Δσ. In the other finite-source model, which is more reliable for near-source ground-motion estimation, the high-frequency radiation is controlled by the strength factor, sfact, a quantity that is physically related to the maximum slip rate on the fault. We estimate the Δσ needed to fit the observed Amax and Vmax data of each earthquake (which are mostly in the far field). The corresponding sfact is obtained by requiring that the predicted curves from the two models match each other in the far field up to a distance of about 500 km. The results show: (1) The Δσ that explains the Amax data for shield events may be a function of depth, increasing from ~50 bars at 10 km to ~400 bars at 36 km. The corresponding sfact values range from 1.0 to 2.0. The Δσ values for the two Himalayan arc events are 75 and 150 bars (sfact = 1.0 and 1.4). (2) The Δσ required to explain the Vmax data is roughly half the corresponding value for Amax, while the same sfact explains both sets of data. (3) The available far-field Amax and Vmax data for the Bhuj mainshock are well explained by Δσ = 200 and 100 bars, respectively, or, equivalently, by sfact = 1.4. The predicted Amax and Vmax in the epicentral region of this earthquake are 0.80 to 0.95 g and 40 to 55 cm/sec, respectively.
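The ingredients of the stochastic method quoted above (an ω²-source spectrum, Q = 508 f^0.48, and an effective duration of fc^-1 + 0.05R) can be sketched as follows. The shear-wave speed and the omitted scaling constants are assumptions, so the spectrum here is schematic in shape only, not calibrated amplitude.

```python
import math

BETA = 3.5  # shear-wave speed in km/s (assumed)

def q_factor(f):
    """Frequency-dependent attenuation assumed for the Indian shield: Q = 508 f^0.48."""
    return 508.0 * f ** 0.48

def omega2_spectrum(f, m0, fc, r_km):
    """Schematic omega-squared acceleration spectrum with anelastic attenuation
    and 1/R geometric spreading (scaling constants omitted)."""
    source = (2.0 * math.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)
    atten = math.exp(-math.pi * f * r_km / (q_factor(f) * BETA))
    return source * atten / r_km

def effective_duration(fc, r_km):
    """Effective duration used in the study: fc^-1 + 0.05 R."""
    return 1.0 / fc + 0.05 * r_km

dur = effective_duration(fc=0.1, r_km=100.0)
print(dur)  # 10 s source duration + 5 s path duration
```

The attenuation term makes the spectrum decay with distance at a fixed frequency, which is the behaviour the Q model controls.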
Gallen, Sean F.; Clark, Marin K.; Godt, Jonathan W.; Roback, Kevin; Niemi, Nathan A
2017-01-01
The 25 April 2015 Mw 7.8 Gorkha earthquake produced strong ground motions across an approximately 250 km by 100 km swath in central Nepal. To assist disaster response activities, we modified an existing earthquake-triggered landslide model based on a Newmark sliding block analysis to estimate the extent and intensity of landsliding and landslide dam hazard. Landslide hazard maps were produced using Shuttle Radar Topography Mission (SRTM) digital topography, peak ground acceleration (PGA) information from the U.S. Geological Survey (USGS) ShakeMap program, and assumptions about the regional rock strength based on end-member values from previous studies. The instrumental record of seismicity in Nepal is poor, so PGA estimates were based on empirical ground motion prediction equations (GMPEs) constrained by teleseismic data and felt reports. We demonstrate a non-linear dependence of modeled landsliding on aggregate rock strength, where the number of landslides decreases exponentially with increasing rock strength. Model estimates are less sensitive to PGA at steep slopes (> 60°) than at moderate slopes (30–60°). We compare forward model results to an inventory of landslides triggered by the Gorkha earthquake. We show that moderate rock strength inputs overestimate landsliding in regions beyond the main slip patch, which may in part be related to poorly constrained PGA estimates for this event at far distances from the source area. Directly above the main slip patch, however, the moderate strength model accurately estimates the total number of landslides within the resolution of the model (landslides ≥ 0.0162 km²; observed n = 2214, modeled n = 2987), but the pattern of landsliding differs from observations. This discrepancy is likely due to the unaccounted-for effects of variable material strength and local topographic amplification of strong ground motion, as well as other simplifying assumptions about source characteristics and their relationship to landsliding.
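The Newmark sliding-block analysis underlying the landslide model starts from a critical acceleration at which a block on a slope begins to slide. A minimal sketch, with an assumed factor of safety and slope angle rather than values from the study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def critical_acceleration(fs, slope_deg):
    """Newmark (1965) critical acceleration of a rigid sliding block:
    a_c = (FS - 1) * g * sin(alpha), where FS is the static factor of safety
    and the thrust angle is approximated by the slope angle alpha."""
    return (fs - 1.0) * G * math.sin(math.radians(slope_deg))

# Example: moderate-strength material, FS = 1.5, on a 35 degree slope.
ac = critical_acceleration(1.5, 35.0)
print(f"a_c = {ac:.2f} m/s^2 ({ac / G:.2f} g)")
```

Shaking with PGA above a_c accumulates permanent displacement; a slope at limit equilibrium (FS = 1) has zero critical acceleration and fails immediately.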
Pre-earthquake magnetic pulses
NASA Astrophysics Data System (ADS)
Scoville, J.; Heraud, J.; Freund, F.
2015-08-01
A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, H.H.M.; Chen, C.H.S.
1990-04-16
The report examines the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration, are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site due to an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.
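The 27 time histories per site follow from taking every combination of three typical values for each of the three uncertain parameters. The parameter values below are placeholders for illustration, not those of the report.

```python
from itertools import product

# Three representative values per uncertain parameter (illustrative values only).
stress_parameter = [100, 150, 200]   # bars
cutoff_frequency = [15, 25, 35]      # Hz
duration = [20, 30, 40]              # s

# Full factorial combination: 3 x 3 x 3 = 27 scenarios per site and source.
scenarios = list(product(stress_parameter, cutoff_frequency, duration))
print(len(scenarios))  # 27
```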
NASA Astrophysics Data System (ADS)
Iinuma, Takeshi; Hino, Ryota; Uchida, Naoki; Nakamura, Wataru; Kido, Motoyuki; Osada, Yukihito; Miura, Satoshi
2016-11-01
Large interplate earthquakes are often followed by postseismic slip that is considered to occur in areas surrounding the coseismic ruptures. Such spatial separation is expected from the difference in frictional and material properties in and around the faults. However, even though the 2011 Tohoku Earthquake ruptured a vast area on the plate interface, the estimation of high-resolution slip is usually difficult because of the lack of seafloor geodetic data. Here, using seafloor and terrestrial geodetic data, we investigated the postseismic slip to examine whether it was spatially separated from the coseismic slip, applying a comprehensive finite-element method model to subtract the viscoelastic components from the observed postseismic displacements. The high-resolution co- and postseismic slip distributions clarified the spatial separation, which also agreed with the activities of interplate and repeating earthquakes. These findings suggest that the conventional frictional property model is valid for the source region of gigantic earthquakes.
NASA Astrophysics Data System (ADS)
Mohamed, Gad-Elkareem Abdrabou; Omar, Khaled
2014-06-01
The southern part of the Gulf of Suez is one of the most seismically active areas in Egypt. On Saturday, November 19, 2011, at 07:12:15 (GMT), an earthquake of ML 4.6 occurred southwest of Sharm El-Sheikh, Egypt. The earthquake was felt at Sharm El-Sheikh city; no casualties were reported. The instrumental epicenter is located at 27.69°N and 34.06°E. The seismic moment is 1.47 × 10^22 dyne cm, corresponding to a moment magnitude Mw 4.1. Following a Brune model, the source radius is 101.36 m, with an average dislocation of 0.015 cm and a 0.06 MPa stress drop. The source mechanism from a fault plane solution, computed with the code ISOLA, shows normal faulting; the preferred fault plane has strike 358°, dip 34°, and rake -60°. Twenty-seven small and micro earthquakes (1.5 ⩽ ML ⩽ 4.2) were also recorded by the Egyptian National Seismological Network (ENSN) from the same region. We estimated the source parameters of these earthquakes using displacement spectra. The obtained source parameters include seismic moments of 2.77 × 10^16 to 1.47 × 10^22 dyne cm, stress drops of 0.0005-0.0617 MPa, and relative displacements of 0.0001-0.0152 cm.
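The Brune-model quantities reported above follow from standard relations between corner frequency, source radius, stress drop, and average dislocation. The sketch below applies them to assumed illustrative values (shear-wave speed, rigidity, M0, fc), not to the event's own spectra.

```python
import math

def brune_radius(fc_hz, beta_ms=3500.0):
    """Brune (1970) source radius from corner frequency: r = 2.34*beta/(2*pi*fc)."""
    return 2.34 * beta_ms / (2.0 * math.pi * fc_hz)

def stress_drop(m0_nm, r_m):
    """Static stress drop of a circular crack: delta_sigma = 7*M0/(16*r^3), in Pa."""
    return 7.0 * m0_nm / (16.0 * r_m ** 3)

def average_slip(m0_nm, r_m, mu=3.0e10):
    """Average dislocation from M0 = mu * pi * r^2 * D, in m."""
    return m0_nm / (mu * math.pi * r_m ** 2)

# Illustrative event: M0 = 1e15 N m (about Mw 4.0), corner frequency 2 Hz.
r = brune_radius(2.0)
sd = stress_drop(1.0e15, r)
d = average_slip(1.0e15, r)
print(f"r = {r:.0f} m, stress drop = {sd / 1e6:.2f} MPa, slip = {d * 100:.1f} cm")
```

The same three relations, applied to the observed displacement spectra, yield the moment, stress drop, and dislocation ranges quoted for the ENSN events.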
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of the propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and includes the possibility of solving both the direct and the inverse problem. It becomes possible to apply advanced mathematical results to improve the models and to increase the resolution of the inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution.
The software system we developed is intended to create a «no frost» technology, realizing a steady stream of direct and inverse problem solutions: solving the direct problem, visualizing and comparing the results with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of an operational emergency workstation tool that could be used by emergency duty personnel in real time.
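The SVD-based quasi-solution of an ill-posed inverse problem mentioned above can be sketched with a truncated SVD, which discards the small singular values that amplify noise. The toy system below is an illustration only, not part of the TSS software.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD quasi-solution of an ill-posed linear system A x = b:
    keep only the k largest singular values to suppress noise amplification."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]       # invert retained singular values only
    return Vt.T @ (s_inv * (U.T @ b))

# Nearly rank-deficient toy system: the second singular value is tiny,
# so the truncated (k = 1) solution is stable where the full inverse is not.
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x_k1 = tsvd_solve(A, b, k=1)
print(x_k1)
```

The truncated solution is the minimum-norm vector fitting the data within the retained subspace; the size of the discarded singular values measures the degree of ill-posedness.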
Strong Ground Motion Generation during the 2011 Tohoku-Oki Earthquake
NASA Astrophysics Data System (ADS)
Asano, K.; Iwata, T.
2011-12-01
Strong ground motions during the 2011 Tohoku-Oki earthquake (Mw 9.0) were densely observed by the strong motion observation networks all over Japan. Looking at the acceleration and velocity waveforms observed at strong motion stations in northeast Japan along the source region, the ground motions are characterized by plural wave packets, each with a duration of about twenty seconds. In particular, two wave packets separated by about fifty seconds can be found on the records in the northern part of the damaged area, whereas only one significant wave packet can be recognized on the records in the southern part. The record section shows four isolated wave packets propagating from different locations to the north and south, which hints at a strong motion generation process on the source fault related to heterogeneous rupture at the scale of tens of kilometers. To explain this, we assume that each isolated wave packet is contributed by a corresponding strong motion generation area (SMGA), a source patch whose slip velocity is larger than in the surrounding area (Miyake et al., 2003). That is, the source model of the 2011 Tohoku-Oki earthquake consists of four SMGAs. The SMGA source model has succeeded in reproducing broadband strong ground motions for past subduction-zone events (e.g., Suzuki and Iwata, 2007). The target frequency range is set to 0.1-10 Hz in this study, as this range is significantly related to seismic damage to typical man-made structures. First, we identified the rupture starting points of each SMGA by picking the onsets of the individual packets. The source fault plane is set following the GCMT solution. The first two SMGAs were located approximately 70 km and 30 km west of the hypocenter. The third and fourth SMGAs were located approximately 160 km and 230 km southwest of the hypocenter.
Then, the model parameters (size, rise time, stress drop, rupture velocity, rupture propagation pattern) of these four SMGAs were determined by waveform modeling using the empirical Green's function method (Irikura, 1986). The first and second SMGAs are located close to each other and partially overlap, although the difference in rupture time between them is more than 40 s. These two SMGAs appear to lie within the source region of the past repeating Miyagi-Oki subduction-zone event of 1936. The third and fourth SMGAs appear to be located in the source region of the past Fukushima-Oki events of 1938. Each of those regions had been expected to host the next major earthquake in the long-term evaluation. The obtained source model explains the acceleration, velocity, and displacement time histories in the target frequency range well at most stations. All four SMGAs lie outside the large slip area along the trench east of the hypocenter that was estimated by seismic, geodetic, and tsunami inversion analyses, and this large slip zone near the trench does not contribute much to strong motion. At this point, we conclude that the 2011 Tohoku-Oki earthquake may have been a complex event rupturing multiple preexisting asperities in terms of strong ground motion generation. These results should help validate and improve the applicability of the strong motion prediction recipe for great subduction-zone earthquakes.
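The empirical Green's function summation of Irikura (1986) scales a small recorded event up to the target event through a scaling number N. A minimal sketch of that bookkeeping, assuming equal stress drops unless the ratio C is supplied:

```python
def egf_scaling(m0_large, m0_small, c=1.0):
    """Irikura (1986) empirical Green's function scaling: the target fault is
    divided into N x N subfaults, each rupturing N times in sequence, so that
    M0_large = C * N**3 * M0_small, with C the stress-drop ratio between the
    target event and the small-event Green's function."""
    n = (m0_large / (c * m0_small)) ** (1.0 / 3.0)
    return round(n)

# Example: a target event with 1000x the moment of the small event gives N = 10,
# i.e. a 10 x 10 subfault grid with 10 time-delayed contributions per subfault.
n = egf_scaling(1.0e19, 1.0e16)
print(n)
```

In practice N and C are tuned so that the summed small-event records reproduce the observed waveforms in the target frequency band, which is how the SMGA parameters above are constrained.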
Exploring variations of earthquake moment on patches with heterogeneous strength
NASA Astrophysics Data System (ADS)
Lin, Y. Y.; Lapusta, N.
2016-12-01
Finite-fault inversions show that earthquake slip is typically non-uniform over the ruptured region, likely due to heterogeneity of the earthquake source. Observations also show that events from the same fault area can have the same source duration but different magnitudes, ranging from 0.0 to 2.0 (Lin et al., GJI, 2016). Strong heterogeneity in strength over a patch could provide a potential explanation of such behavior, with the event duration controlled by the size of the patch and the event magnitude determined by how much of the patch area has been ruptured. To explore this possibility, we numerically simulate earthquake sequences on a rate-and-state fault, with a seismogenic patch governed by steady-state velocity-weakening friction surrounded by a steady-state velocity-strengthening region. The seismogenic patch contains strong variations in strength due to variable normal stress. Our long-term simulations of slip in this model indeed generate sequences of earthquakes of various magnitudes. In some seismic events, dynamic rupture cannot overcome areas with higher normal strength, and smaller events result. When the higher-strength areas are loaded by previous slip and rupture, larger events result, as expected. Our current work is directed towards exploring a range of such models, determining the variability in the seismic moment that they can produce, and determining the observable properties of the resulting events.
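The velocity-weakening versus velocity-strengthening distinction in the model can be illustrated with the steady-state rate-and-state friction law. The parameter values below are typical laboratory-scale numbers assumed for illustration, not those of the simulations.

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.014, v0=1.0e-6):
    """Steady-state rate-and-state friction coefficient:
    mu_ss = mu0 + (a - b) * ln(V / V0).  With a - b < 0 friction falls as slip
    rate rises (velocity weakening, the seismogenic patch); a - b > 0 gives
    strengthening (the surrounding creeping region)."""
    return mu0 + (a - b) * math.log(v / v0)

# On the weakening patch, friction drops as slip accelerates from the
# reference rate (1 micron/s) to a fast coseismic rate (1 cm/s):
slow = steady_state_friction(1.0e-6)
fast = steady_state_friction(1.0e-2)
print(slow, fast)
```

That drop in steady-state friction with slip rate is what allows rupture to run away seismically on the patch, while the strengthening surroundings arrest it.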
Peterson, M.D.; Mueller, C.S.
2011-01-01
The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2007-07-10
The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately our knowledge of source scaling at small magnitudes (i.e., mb < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies taken from half-way around the world at inter-plate regions. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods. We begin by developing and improving the two different methods, and then in future years we will apply them both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But there are only a limited number of earthquakes that are recorded locally, by sufficient stations to give good azimuthal coverage, and have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects.
In contrast, coda waves average radiation from all directions so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct wave methods for the best recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time-functions by frequency division. If an earthquake and EGF event pair are able to produce a clear, time-domain source pulse then we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda wave analysis method by calculating spectral ratios between different sized earthquakes. We first compare spectral ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events. The average spectral ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than direct S-waves for 0.2 < f < 15.0 Hz. Also, direct wave analysis requires collocated pairs of earthquakes whereas the event-pairs (Green's function and target events) can be separated by ~25 km for coda amplitudes without any appreciable degradation.
We then apply the coda spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
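The spectral-ratio modeling described above can be sketched by fitting the ratio of two omega-squared (Brune) source spectra to an observed ratio. This is a schematic grid search, not the authors' multi-taper or coda code; the grid values and the convention that the larger (target) event has the lower corner frequency (fc1 < fc2) are assumptions for illustration.

```python
import numpy as np

def brune_ratio(f, m0_ratio, fc1, fc2):
    """Ratio of two omega-squared (Brune) source spectra: the
    low-frequency plateau equals the moment ratio m0_ratio."""
    return m0_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

def fit_ratio(f, robs, fc_grid):
    """Grid search over corner-frequency pairs (fc1 < fc2); the moment
    ratio follows from the mean log offset, misfit is least squares in
    log space.  Returns (m0_ratio, fc1, fc2) of the best model."""
    best = None
    for fc1 in fc_grid:
        for fc2 in fc_grid:
            if fc2 <= fc1:        # target event assumed larger: fc1 < fc2
                continue
            shape = (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)
            m0 = np.exp(np.mean(np.log(robs) - np.log(shape)))
            misfit = np.sum((np.log(robs) - np.log(m0 * shape)) ** 2)
            if best is None or misfit < best[0]:
                best = (misfit, m0, fc1, fc2)
    return best[1:]
```

A departure from self-similarity would appear as a systematic trend of apparent stress (via fc and moment) with event size across many such fitted pairs.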
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Goulet, C.; Silva, F.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2015-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, seismogram ground motion amplitude calculations, and goodness-of-fit measurements. These modules are integrated into a software system that provides user-defined, repeatable calculation of ground motion seismograms using multiple alternative ground motion simulation methods, together with software utilities that can generate plots, charts, and maps. The BBP has been developed over the last five years in a collaborative scientific, engineering, and software development project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The SCEC BBP software released in 2015 can be compiled and run on recent Linux systems with GNU compilers. It includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, updated ground motion simulation methods, and a simplified command line user interface.
Centroid-moment tensor inversions using high-rate GPS waveforms
NASA Astrophysics Data System (ADS)
O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.
2012-10-01
Displacement time-series recorded by Global Positioning System (GPS) receivers are a new type of near-field waveform observation of the seismic source. We have developed an inversion method which enables the recovery of an earthquake's mechanism and centroid coordinates from such data. Our approach is identical to that of the 'classical' Centroid-Moment Tensor (CMT) algorithm, except that we forward model the seismic wavefield using a method that is amenable to the efficient computation of synthetic GPS seismograms and their partial derivatives. We demonstrate the validity of our approach by calculating CMT solutions using 1 Hz GPS data for two recent earthquakes in Japan. These results are in good agreement with independently determined source models of these events. With wider availability of data, we envisage the CMT algorithm providing a tool for the systematic inversion of GPS waveforms, as is already the case for teleseismic data. Furthermore, this general inversion method could equally be applied to other near-field earthquake observations such as those made using accelerometers.
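The linear core of a CMT-style inversion, shared by the GPS-waveform approach described here, is a least-squares fit of the six independent moment-tensor components. A minimal sketch follows; the Green's function matrix G is assumed given, and in a full CMT inversion the centroid coordinates and time are found by iterating or grid-searching this linear step.

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Least-squares solution m for d = G m, where G (n_data x 6)
    holds excitation kernels for the 6 independent moment-tensor
    components and d is the concatenated waveform data vector."""
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m
```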
Geist, Eric L.
2014-01-01
Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
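The conditional intensity that is maximized in such an ETAS fit can be sketched as follows. This is the standard unmarked temporal form (Ogata, 1988); the study's marked version, with tsunami-size marks and tsunamigenic conditions, adds further structure. Parameter values in the usage below are illustrative.

```python
import numpy as np

def etas_intensity(t, times, mags, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity:
    lambda(t) = mu + sum_{t_i < t} K exp(alpha (m_i - m0)) (t - t_i + c)^-p
    with background rate mu and Omori-law triggering from past events."""
    past = times < t
    trig = K * np.exp(alpha * (mags[past] - m0)) * (t - times[past] + c) ** -p
    return mu + trig.sum()
```

Maximum-likelihood estimation sums log lambda over observed event times minus the integral of lambda over the catalog window; the triggered fraction (about 14% here) follows from the fitted branching parameters.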
NASA Astrophysics Data System (ADS)
Takiguchi, M.; Asano, K.; Iwata, T.
2010-12-01
Two M7 class subduction zone earthquakes have occurred in the Ibaraki-ken-oki region, northeast Japan, at 23:23 on July 23, 1982 JST (Mw7.0; 1982MS) and at 01:45 on May 8, 2008 JST (Mw6.8; 2008MS). Teleseismic waveform inversion results indicate that rupture of the same asperity repeated (HERP, 2010). We estimated the source processes of these earthquakes in detail by analyzing the strong motion records and discussed to what extent the source characteristics of the two earthquakes repeated. First, we estimated the source model of 2008MS following the method of Miyake et al. (2003). The best-fit set of the model parameters was determined by a grid search using forward modeling of broad-band ground motions. A single 12.6 km × 12.6 km rectangular Strong Motion Generation Area (SMGA, Miyake et al., 2003) was estimated. The rupture of the SMGA of 2008MS (2008SMGA) started from the hypocenter and propagated mainly to the northeast. Next, we estimated the source model of 1982MS. We compared the waveforms of 1982MS and 2008MS recorded at the same stations and found an initial rupture phase before the main rupture phase on the waveforms of 1982MS. The travel time analysis showed that the main rupture of 1982MS started approximately 33 km west of the hypocenter, about 11 s after the origin time. The main rupture starting point was located inside 2008SMGA, suggesting that the two SMGAs overlapped in part. The seismic moment ratio of 1982MS to 2008MS was approximately 1.6, and we also found that the observed acceleration amplitude spectra of 1982MS were 1.5 times higher than those of 2008MS in the available frequency range. We performed the waveform modeling for 1982MS with a constraint of these ratios. A single rectangular SMGA (1982SMGA) was estimated for the main rupture, which had the same size and the same rupture propagation direction as those of 2008SMGA.
However, the estimated stress drop or average slip of 1982SMGA was 1.5 times larger than that of 2008SMGA.
Fully probabilistic earthquake source inversion on teleseismic scales
NASA Astrophysics Data System (ADS)
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than the more commonly used ℓp norms, which measure misfit sample by sample from the distances between individual time samples.
From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
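The decorrelation misfit D = 1 - CC and its log-normal likelihood can be sketched as follows. As a simplification, CC is taken here as the zero-lag correlation coefficient rather than a lag-optimized cross-correlation, mu and sigma denote the first two moments of ln D, and station covariances are ignored.

```python
import numpy as np

def decorrelation(obs, syn):
    """D = 1 - CC, with CC the zero-lag normalized correlation between
    an observed and a modelled waveform (a simplification of the
    lag-searched CC used in practice)."""
    cc = np.corrcoef(obs, syn)[0, 1]
    return 1.0 - cc

def lognormal_loglike(d, mu, sigma):
    """Log-likelihood of a decorrelation value d under a log-normal
    noise model: ln d ~ Normal(mu, sigma^2)."""
    return (-np.log(d * sigma * np.sqrt(2 * np.pi))
            - (np.log(d) - mu) ** 2 / (2 * sigma ** 2))
```

In the full scheme mu and sigma would be calibrated per station from the reference solutions, as functions of SNR and back-azimuthal station distance.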
Delorey, Andrew; Frankel, Arthur; Liu, Pengcheng; Stephenson, William J.
2014-01-01
We ran finite‐difference earthquake simulations for great subduction zone earthquakes in Cascadia to model the effects of source and path heterogeneity for the purpose of improving strong‐motion predictions. We developed a rupture model for large subduction zone earthquakes based on a k−2 slip spectrum and scale‐dependent rise times by representing the slip distribution as the sum of normal modes of a vibrating membrane. Finite source and path effects were important in determining the distribution of strong motions through the locations of the hypocenter, subevents, and crustal structures like sedimentary basins. Some regions in Cascadia appear to be at greater risk than others during an event due to the geometry of the Cascadia fault zone relative to the coast and populated regions. The southern Oregon coast appears to have increased risk because it is closer to the locked zone of the Cascadia fault than other coastal areas and is also in the path of directivity amplification from any rupture propagating north to south in that part of the subduction zone. The basins in the Puget Sound area are efficiently amplified by both north- and south-propagating ruptures off the coast of western Washington. We find that the median spectral accelerations at 5 s period from the simulations are similar to that of the Zhao et al. (2006) ground‐motion prediction equation, although our simulations predict higher amplitudes near the region of greatest slip and in the sedimentary basins, such as the Seattle basin.
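The representation of slip as a sum of membrane normal modes with a k−2 spectrum can be sketched as follows. This is a schematic with random modal amplitudes and illustrative fault dimensions; details of the published rupture generator (e.g., mode weighting, tapering, and the scale-dependent rise times) may differ.

```python
import numpy as np

def k2_slip(L, W, nx=64, ny=32, nmodes=20, seed=0):
    """Random slip on an L x W fault as a sum of membrane modes
    sin(m pi x / L) sin(n pi y / W), with modal amplitudes drawn
    randomly and scaled by k^-2 where k is the modal wavenumber,
    giving a k-squared slip spectrum.  Shifted non-negative for
    illustration."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, L, nx)[None, :]
    y = np.linspace(0, W, ny)[:, None]
    slip = np.zeros((ny, nx))
    for m in range(1, nmodes + 1):
        for n in range(1, nmodes + 1):
            k = np.hypot(m * np.pi / L, n * np.pi / W)
            amp = rng.standard_normal() / k ** 2
            slip += amp * np.sin(m * np.pi * x / L) * np.sin(n * np.pi * y / W)
    return slip - slip.min()
```

Because every mode vanishes on the boundary, slip tapers naturally toward the fault edges before the shift.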
Investigation of Finite Sources through Time Reversal
NASA Astrophysics Data System (ADS)
Kremers, S.; Brietzke, G.; Igel, H.; Larmat, C.; Fichtner, A.; Johnson, P. A.; Huang, L.
2008-12-01
Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a-priori information (except the earth model and the data). It consists of injecting flipped-in-time records from seismic stations within the model to create an approximate reverse movie of wave propagation from which the location of the source point and other information might be inferred. In this study, the backward propagation is performed numerically using a spectral element code. We investigate the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, location of asperities, rupture velocity etc.). We use synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of relaxing the assumption of no prior source information (e.g., origin time, hypocenter, fault location, etc.) on the results of the time reversal process.
NASA Astrophysics Data System (ADS)
Kozłowska, Maria; Orlecka-Sikora, Beata; Kwiatek, Grzegorz; Boettcher, Margaret S.; Dresen, Georg
2015-01-01
Static stress changes following large earthquakes are known to affect the rate and distribution of aftershocks, yet this process has not been thoroughly investigated for nanoseismicity and picoseismicity at centimeter length scales. Here we utilize a unique data set of M ≥ -3.4 earthquakes following a Mw 2.2 earthquake in Mponeng gold mine, South Africa, that was recorded during a quiet interval in the mine to investigate if rate- and state-based modeling is valid for shallow, mining-induced seismicity. We use Dieterich's (1994) rate- and state-dependent formulation for earthquake productivity, which requires estimation of four parameters: (1) Coulomb stress changes due to the main shock, (2) the reference seismicity rate, (3) frictional resistance parameter, and (4) the duration of aftershock relaxation time. Comparisons of the modeled spatiotemporal patterns of seismicity based on two different source models with the observed distribution show that while the spatial patterns match well, the rate of modeled aftershocks is lower than the observed rate. To test our model, we used three metrics of the goodness-of-fit evaluation. The null hypothesis, of no significant difference between modeled and observed seismicity rates, was only rejected in the depth interval containing the main shock. Results show that mining-induced earthquakes may be followed by a stress relaxation expressed through aftershocks located on the rupture plane and in regions of positive Coulomb stress change. Furthermore, we demonstrate that the main features of the temporal and spatial distributions of very small, mining-induced earthquakes can be successfully determined using rate- and state-based stress modeling.
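Dieterich's (1994) rate formulation used here maps a Coulomb stress step onto a transient seismicity rate through the four parameters listed above. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def dieterich_rate(t, dcfs, r0, a_sigma, t_a):
    """Seismicity rate at time t after a Coulomb stress step dcfs
    (Dieterich, 1994):
    R(t) = r0 / ((exp(-dcfs / a_sigma) - 1) exp(-t / t_a) + 1)
    with r0 the reference seismicity rate, a_sigma the frictional
    resistance parameter (A sigma), and t_a the aftershock relaxation
    time.  R(0) = r0 exp(dcfs / a_sigma); R -> r0 as t >> t_a."""
    return r0 / ((np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)
```

A positive stress change elevates the rate by a factor exp(dcfs / a_sigma) immediately after the main shock; a negative change produces a transient rate deficit (stress shadow) that likewise relaxes back to r0.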
TEM PSHA2015 Reliability Assessment
NASA Astrophysics Data System (ADS)
Lee, Y.; Wang, Y. J.; Chan, C. H.; Ma, K. F.
2016-12-01
The Taiwan Earthquake Model (TEM) developed a new probabilistic seismic hazard analysis (PSHA) for determining the probability of exceedance (PoE) of ground motion over a specified period in Taiwan. To investigate the adequacy of the seismic source parameters adopted in the 2015 PSHA of the TEM (TEM PSHA2015), we conducted several tests of the seismic source models. The observed maximal peak ground acceleration (PGA) of the ML > 4.0 mainshocks in the 23-year data period of 1993-2015 was used to test the PGA predicted by the PSHA from the areal and subduction zone sources under the time-independent Poisson assumption. This comparison excluded the observations from the 1999 Chi-Chi earthquake, as this was the only earthquake associated with an identified active fault in the past 23 years. We used tornado diagrams to analyze the sensitivities of these source parameters to the ground motion values of the PSHA. This study showed that the predicted PGA for a 63% PoE in the 23-year period corresponded to the empirical PGA, and that the predicted numbers of PGA exceedances of a 0.1 g threshold were close to the observed numbers, confirming the applicability of the parameters for the areal and subduction zone sources. We adopted disaggregation analysis from a hazard map to determine the contribution of the individual seismic sources to hazard for six metropolitan cities in Taiwan. The sensitivity tests of the seismogenic structure parameters indicated that the slip rate and maximum magnitude are the dominant factors for the TEM PSHA2015. For the densely populated faults in SW Taiwan, maximum magnitude is more sensitive than slip rate, raising concern about possible multi-segment ruptures with larger magnitudes in this area, which were not yet considered in TEM PSHA2015. The source category disaggregation also suggested that special attention is necessary for subduction zone earthquakes when assessing long-period shaking hazards in Northern Taiwan.
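Under the time-independent Poisson assumption used in these tests, the probability of exceedance over an exposure time follows directly from the annual exceedance rate. A minimal sketch of the conversion; the 63%/23-year figures from the text appear only as a usage check:

```python
import numpy as np

def poisson_poe(rate, t):
    """Probability of at least one exceedance in t years for a Poisson
    process with annual exceedance rate `rate`: P = 1 - exp(-rate t)."""
    return 1.0 - np.exp(-rate * t)

def rate_for_poe(poe, t):
    """Inverse: the annual exceedance rate whose PoE over t years
    equals `poe`."""
    return -np.log(1.0 - poe) / t
```

For example, the 63% PoE in 23 years used above corresponds to an annual rate of about 1/23, i.e., roughly one expected exceedance in the observation window.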
NASA Astrophysics Data System (ADS)
Han, J.; Zhou, S.
2017-12-01
Asia, located at the junction of the Eurasian, Pacific, and Indo-Australian plates, is the continent with the highest seismicity. An earthquake catalogue based on modern seismic network recordings has been established in Asia since around 1970, while the catalogue before 1970 is much less accurate because of the small number of stations. With a history of less than 50 years of modern earthquake catalogue, research possibilities in seismology are quite limited. With the appearance of improved Earth velocity structure models, modified locating methods and high-accuracy Optical Character Recognition techniques, travel time data of earthquakes from 1900 to 1970 can be included in research and more accurate locations can be determined for historical earthquakes. Hence, parameters of these historical earthquakes can be obtained more precisely, and research methods such as the ETAS model can be applied over a much longer time scale. This work focuses on the following three aspects: (1) Relocating more than 300 historical major earthquakes (M≥7.0) in Asia based on the Shide Circulars, International Seismological Summary and EHB Bulletin instrumental records between 1900 and 1970. (2) Calculating the focal mechanisms of more than 50 events from the P-wave first-motion records of the ISS. (3) Inferring focal mechanisms of historical major earthquakes based on geological data, the tectonic stress field, and the relocation results.
How fault geometry controls earthquake magnitude
NASA Astrophysics Data System (ADS)
Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.
2016-12-01
Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.
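Curvature of a megathrust interface can be estimated from a depth cross-section by finite differences. The following is a minimal 2-D (single-profile) sketch of the kind of calculation described, not the authors' along-strike/along-dip implementation on the full interface geometry:

```python
import numpy as np

def profile_curvature(x, z):
    """Curvature of a megathrust depth profile z(x) (consistent units,
    e.g., both km) by finite differences:
    kappa = z'' / (1 + z'^2)^(3/2).
    A planar (flat-dipping) interface has kappa = 0 everywhere; a
    circular arc of radius R has |kappa| = 1/R."""
    dz = np.gradient(z, x)
    d2z = np.gradient(dz, x)
    return d2z / (1.0 + dz ** 2) ** 1.5
```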
Radiated Seismic Energy of Earthquakes in the South-Central Region of the Gulf of California, Mexico
NASA Astrophysics Data System (ADS)
Castro, Raúl R.; Mendoza-Camberos, Antonio; Pérez-Vertti, Arturo
2018-05-01
We estimated the radiated seismic energy (ES) of 65 earthquakes located in the south-central region of the Gulf of California. Most of these events occurred along active transform faults that define the Pacific-North America plate boundary and have magnitudes between M3.3 and M5.9. We corrected the spectral records for attenuation using nonparametric S-wave attenuation functions determined with the whole data set. The path effects were isolated from the seismic source using a spectral inversion. We computed the radiated seismic energy of the earthquakes by integrating the squared velocity source spectrum and estimated their apparent stresses. We found that most events have apparent stress between 3 × 10⁻⁴ and 3 MPa. Model-independent estimates of the ratio between seismic energy and moment (ES/M0) indicate that this ratio is independent of earthquake size. We conclude that in general the apparent stress is low (σa < 3 MPa) in the south-central and southern Gulf of California.
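The integration of the squared velocity source spectrum can be sketched for an S-wave moment-rate spectrum. The prefactor below is the standard whole-space S-wave radiated-energy expression, and the density, shear velocity, and rigidity values are generic crustal assumptions rather than values from the study.

```python
import numpy as np

def radiated_energy_s(freq, mdot_spec, rho=2700.0, beta=3500.0):
    """S-wave radiated energy from a moment-rate amplitude spectrum
    |Mdot(f)| (N m): E_S = (10 pi rho beta^5)^-1 int (2 pi f)^2
    |Mdot(f)|^2 df, trapezoid rule over the available band (a lower
    bound if the band is too narrow)."""
    integrand = (2 * np.pi * freq) ** 2 * mdot_spec ** 2
    area = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(freq))
    return area / (10 * np.pi * rho * beta ** 5)

def apparent_stress(energy, m0, mu=3.0e10):
    """Apparent stress sigma_a = mu E / M0 (Pa), mu = rigidity."""
    return mu * energy / m0
```

For an omega-squared spectrum |Mdot(f)| = M0 / (1 + (f/fc)^2) the integral has the closed form E = pi^2 M0^2 fc^3 / (10 rho beta^5), which makes a convenient numerical check.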
Increasing seismicity in the U. S. midcontinent: Implications for earthquake hazard
Ellsworth, William L.; Llenos, Andrea L.; McGarr, Arthur F.; Michael, Andrew J.; Rubinstein, Justin L.; Mueller, Charles S.; Petersen, Mark D.; Calais, Eric
2015-01-01
Earthquake activity in parts of the central United States has increased dramatically in recent years. The space-time distribution of the increased seismicity, as well as numerous published case studies, indicates that the increase is of anthropogenic origin, principally driven by injection of wastewater coproduced with oil and gas from tight formations. Enhanced oil recovery and long-term production also contribute to seismicity at a few locations. Preliminary hazard models indicate that areas experiencing the highest rate of earthquakes in 2014 have a short-term (one-year) hazard comparable to or higher than the hazard in the source region of tectonic earthquakes in the New Madrid and Charleston seismic zones.
NASA Astrophysics Data System (ADS)
Olsen, K. B.; Geisselmeyer, A.; Stephenson, W. J.; Mai, P. M.
2007-12-01
The Cascadia subduction zone in the Pacific Northwest, USA, generates Great (megathrust) earthquakes with a recurrence period of about 500 years, most recently the M~9 event on January 26, 1700. Since no earthquake of such magnitude has occurred in the Pacific Northwest since the deployment of strong ground motion instruments, a large uncertainty is associated with the ground motions expected from such an event. To decrease this uncertainty, we have carried out the first 3D simulations of megathrust earthquakes (Mw8.5 and Mw9.0) rupturing along the Cascadia subduction zone. The simulations were carried out in a recently developed 3D velocity model of the region of dimensions 1050 km by 550 km, discretized into 2 billion cubic cells of 250 m edge length with a minimum S-wave velocity of 625 m/s. The model includes the subduction slab, accretionary sediments, local sedimentary basins, and the ocean layer. About 6 minutes of wave propagation for each scenario consumed about 24 wall-clock hours using a parallel fourth-order finite-difference method with 1600 processors on the San Diego Supercomputer Center Datastar supercomputer. The source descriptions for the Mw9.0 scenarios were designed by mapping the inversion results for the December 26, 2004 M9+ Sumatra-Andaman Islands earthquake (Ji, 2006) onto a 950 km by 150 km rupture for the Pacific Northwest model. Simulations were carried out for hypocenters located toward the northern and southern ends of the subduction zone. In addition, we simulated two M8.5 events with a source area of 275 km by 150 km located in the northern and central parts of the model area. The sources for the M8.5 events were generated using the pseudo-dynamic model by Guatteri et al. (2004). All sources used spatially variable slip, rise time and rupture velocity.
Three major metropolitan areas are located in the model region, namely Seattle (3 million+ people), Vancouver (2 million+ people), and Portland (2 million+ people), all located above sedimentary basins that amplify the waves incident from the subduction zone. The estimated peak ground velocities (PGVs) for frequencies less than 0.5 Hz vary significantly with the assumed rise time. Using a mean rise time of 32 s, as estimated from source inversion of the 2004 M9+ Sumatra-Andaman event (Ji, 2006), PGVs reached 40 cm/s in Seattle and 10 cm/s in Vancouver and Portland. However, if the mean rise time is decreased to about 14 s, as suggested by the empirical regression by Somerville et al. (1999), PGVs are increased by 2-3 times at these locations. For the Mw8.5 events, PGVs would reach about 10 cm/s in Seattle, and about 5 cm/s in Vancouver and Portland. Combined with extended shaking durations exceeding 1 minute for the Mw8.5 events and 2 minutes for the Mw9 events, these long-period ground motions may inflict significant damage on the built environment, in particular on the high-rises in downtown Seattle. However, the strongest shaking arrives 1-2 minutes after the earthquake nucleates, indicating that an early warning system in place may help mitigate loss of life in case of a megathrust earthquake in the Pacific Northwest. Additional efforts should analyse the simulated displacements on the ocean bottom for tsunami generation potential.
A recent deep earthquake doublet in light of long-term evolution of Nazca subduction
NASA Astrophysics Data System (ADS)
Zahradník, J.; Čížková, H.; Bina, C. R.; Sokos, E.; Janský, J.; Tavera, H.; Carvalho, J.
2017-03-01
Earthquake faulting at ~600 km depth remains puzzling. Here we present a new kinematic interpretation of two Mw7.6 earthquakes of November 24, 2015. In contrast to teleseismic analysis of this doublet, we use regional seismic data providing robust two-point source models, further validated by regional back-projection and rupture-stop analysis. The doublet represents segmented rupture of a ~30-year gap in a narrow, deep fault zone, fully consistent with the stress field derived from neighbouring 1976-2015 earthquakes. Seismic observations are interpreted using a geodynamic model of regional subduction, incorporating realistic rheology and major phase transitions, yielding a model slab that is nearly vertical in the deep-earthquake zone but stagnant below 660 km, consistent with tomographic imaging. Geodynamically modelled stresses match the seismically inferred stress field, where the steeply down-dip orientation of compressive stress axes at ~600 km arises from combined viscous and buoyant forces resisting slab penetration into the lower mantle and deformation associated with slab buckling and stagnation. Observed fault-rupture geometry, demonstrated likelihood of seismic triggering, and high model temperatures in young subducted lithosphere, together favour nanometric crystallisation (and associated grain-boundary sliding) attending high-pressure dehydration as a likely seismogenic mechanism, unless a segment of much older lithosphere is present at depth.
Prediction of the area affected by earthquake-induced landsliding based on seismological parameters
NASA Astrophysics Data System (ADS)
Marc, Odin; Meunier, Patrick; Hovius, Niels
2017-07-01
We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (the landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism to ground shaking and fault rupture length, and assumes a globally constant acceleration threshold for the onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95% of the total landslide area. Without any empirical calibration the model explains 56% of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties in the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the acceleration threshold and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.
Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)
NASA Astrophysics Data System (ADS)
Jordan, T. H.
2010-12-01
The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. 
(a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and they need to convey the epistemic uncertainties in the operational forecasts. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of earthquake forecasts. (c) The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing in a CSEP-type environment against established long-term forecasts and a wide variety of alternative, time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in PSHA. (e) Alert procedures should be standardized to facilitate decisions at different levels of government and among the public, based in part on objective analysis of costs and benefits. (f) In establishing alert procedures, consideration should also be given to the less tangible aspects of the value of information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that can lead to informal predictions and misinformation.
Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts
Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.
2009-01-01
We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded by thirteen 1-Hz GPS receivers, providing a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from the GPS receivers. The spatial heterogeneity of the model resolution in the static-field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.
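The resolution-matrix idea behind this nonuniform gridding can be sketched in a toy damped least-squares setting. The two-patch "fault", the three-station geometry and the damping value below are invented for illustration; they are not the Parkfield inversion.

```python
# Toy illustration of the model resolution matrix R = (G^T G + eps^2 I)^(-1) G^T G
# for damped least squares. diag(R) near 1 means a parameter is well resolved;
# much less than 1 means poorly resolved, suggesting a coarser grid cell there.
# The station geometry G is invented (not the Parkfield data set).

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def resolution_matrix(G, eps):
    GtG = mat_mult(transpose(G), G)
    damped = [[GtG[i][j] + (eps ** 2 if i == j else 0.0) for j in range(2)]
              for i in range(2)]
    return mat_mult(inv2(damped), GtG)

# Two slip patches: one near the stations (strong signal), one deep (weak).
G = [[1.0, 0.1],
     [0.8, 0.2],
     [0.9, 0.05]]
R = resolution_matrix(G, eps=0.3)
print([round(R[i][i], 3) for i in range(2)])  # shallow patch resolved far better
```

In the paper's scheme, a cell whose resolution diagonal is low would be merged with its neighbours until the local grid spacing matches the resolution length.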
NASA Astrophysics Data System (ADS)
Wang, D.; Becker, N. C.; Weinstein, S.; Duputel, Z.; Rivera, L. A.; Hayes, G. P.; Hirshorn, B. F.; Bouchard, R. H.; Mungov, G.
2017-12-01
The Pacific Tsunami Warning Center (PTWC) began forecasting tsunamis in real time using source parameters derived from real-time Centroid Moment Tensor (CMT) solutions in 2009. Both the USGS and PTWC typically obtain W-phase CMT solutions for large earthquakes less than 30 minutes after earthquake origin time. Within seconds, and often before waves reach the nearest deep-ocean bottom pressure sensors (DARTs), PTWC then generates a regional tsunami propagation forecast using its linear shallow-water model. The model is initialized by a sea-surface deformation that mimics the seafloor deformation given by Okada's (1985) dislocation model of a rectangular fault with uniform slip. The fault length and width are empirical functions of the seismic moment. How well did this simple model perform? The DART records provide a very valuable dataset for model validation. We examine tsunami events of the last decade with earthquake magnitudes ranging from 6.5 to 9.0, including some deep events for which tsunamis were not expected. Most of the forecast results were obtained during the events. We also include events from before the implementation of the WCMT method at USGS and PTWC (2006-2009); for these events, WCMTs were computed retrospectively (Duputel et al. 2012). For some events we also re-ran the model with a larger domain, using the same source parameters as during the events, to include far-field DARTs that recorded a tsunami. We conclude that our model results, in terms of maximum wave amplitude, are mostly within a factor of two of the observations at DART stations, with an average error of less than 40% for most events, including the 2010 Maule and the 2011 Tohoku tsunamis. However, the simple uniform-slip fault model is too simplistic for the Tohoku tsunami. We note that model results are sensitive to centroid location and depth, especially if the earthquake is close to land or inland. 
For the 2016 M7.8 New Zealand earthquake, the initial forecast underestimated the tsunami because the initial WCMT centroid was on land (the epicenter was inland, but most of the slip occurred offshore). Later WCMT solutions provided better forecasts. The model also failed to reproduce the observed tsunamis generated by earthquake-triggered landslides. Sea-level observations during the events are crucial in determining whether or not a forecast needs to be adjusted.
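The source-initialization step described above, seismic moment from the W-phase Mw and then fault dimensions from empirical scaling, can be sketched as follows. The moment-magnitude conversion is the standard definition; the length/width coefficients and the rigidity are illustrative assumptions, not the actual PTWC relations.

```python
import math

# Sketch of the forecast initialization: Mw -> M0, then rectangular-fault
# dimensions and uniform slip. The scaling coefficients below are rough,
# Wells-and-Coppersmith-like placeholders, NOT the relations used at PTWC.

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (standard definition)."""
    return 10 ** (1.5 * mw + 9.1)

def fault_dimensions_km(mw):
    """Illustrative length/width scaling; placeholders for the real relations."""
    length = 10 ** (-2.44 + 0.59 * mw)   # km, assumed rupture-length-type law
    width = length / 2.0                  # assumed 2:1 aspect ratio
    return length, width

def uniform_slip_m(mw, mu=4.0e10):
    """Average slip of a uniform-slip rectangular fault: D = M0 / (mu * L * W)."""
    m0 = moment_from_mw(mw)
    length, width = fault_dimensions_km(mw)
    return m0 / (mu * length * 1e3 * width * 1e3)

print(round(moment_from_mw(9.0) / 1e22, 2), "x 1e22 N*m")
```

The resulting slip and fault rectangle would then drive the Okada seafloor deformation that initializes the shallow-water propagation model.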
Modeling of earthquake ground motion in the frequency domain
NASA Astrophysics Data System (ADS)
Thrainsson, Hjortur
In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation model is assessed using data from the SMART-1 array in Taiwan. The interpolation model provides an effective method to estimate ground motion at a site using recordings from stations located up to several kilometers away. Reliable estimates of differential ground motion are restricted to relatively limited ranges of frequencies and inter-station spacings.
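The simulation idea of topic (i) above, a spectrum whose phase angles are the running sum of random phase differences, synthesized into a time history by an inverse transform, can be sketched in a few lines. The amplitude envelope and the phase-difference distribution below are simple placeholders, not the calibrated prediction formulas of the thesis.

```python
import math
import random

# Minimal sketch: build Fourier amplitudes, draw random "phase differences",
# accumulate them into phase angles, and synthesize a real-valued record as
# a sum of cosines (equivalent to an inverse DFT with conjugate symmetry).
# Both the spectral shape and the Gaussian difference model are assumptions.

def simulate_accel(n=256, seed=1):
    rng = random.Random(seed)
    amps = [math.exp(-((k - n // 8) ** 2) / (2.0 * (n // 10) ** 2))
            for k in range(1, n // 2)]      # smooth spectral envelope (assumed)
    phase, phases = 0.0, []
    for _ in amps:                          # phase angle = cumsum of differences
        phase += rng.gauss(-0.5, 0.8)       # assumed difference distribution
        phases.append(phase)
    return [sum(a * math.cos(2 * math.pi * k * i / n + p)
                for k, (a, p) in enumerate(zip(amps, phases), start=1))
            for i in range(n)]

acc = simulate_accel()
print(len(acc))
```

In the thesis the phase-difference distribution is conditioned on the Fourier amplitude, which is what shapes the temporal envelope of the simulated record; here it is independent purely for brevity.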
NASA Astrophysics Data System (ADS)
Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.
2008-12-01
Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as for the calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise, due either to differences in the true average plane-layered structure or to laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh waves; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface-wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. 
Firstly, we will consider the northeastern China/Korean Peninsula region, where the average plane-layered structure is well known and relatively laterally homogeneous. Secondly, we will consider the Middle East, where crustal and upper-mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows, we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of the seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.
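The grid-search-with-time-shifts idea at the heart of CAP can be sketched with a toy problem: for each trial mechanism, every waveform window is allowed to slide in time so that Green's-function errors are absorbed, and the mechanism with the lowest total shifted misfit wins. The "mechanism" here is a single angle and the waveforms are invented 1-D toys; the real method searches strike, dip and rake against Pnl and surface-wave windows.

```python
import math

# Toy Cut-and-Paste-style search. shifted_misfit lets a synthetic window
# slide by a few samples and keeps the best fit, mimicking CAP's per-phase
# time shifts; grid_search then picks the best-fitting trial "mechanism".

def shifted_misfit(obs, syn, max_shift=3):
    """Minimum mean-squared misfit over time shifts of +/- max_shift samples."""
    best = float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(obs[i], syn[i - s]) for i in range(len(obs))
                 if 0 <= i - s < len(syn)]
        best = min(best, sum((o - y) ** 2 for o, y in pairs) / len(pairs))
    return best

def grid_search(obs_windows, synth_for):
    """Return the trial mechanism whose shifted synthetics fit best."""
    return min(range(0, 360, 10),
               key=lambda mech: sum(shifted_misfit(o, s)
                                    for o, s in zip(obs_windows, synth_for(mech))))

def synth_for(mech):
    # toy "Green's functions": two windows whose amplitudes depend on the angle
    rad = math.radians(mech)
    return [[math.sin(rad) * math.exp(-0.10 * t) for t in range(20)],
            [math.cos(rad) * math.exp(-0.05 * t) for t in range(20)]]

obs = synth_for(40)        # data generated with a "true" mechanism of 40 degrees
print(grid_search(obs, synth_for))
```

The recovered angle equals the true one because the toy data are noise-free; with noisy data the shifted misfit surface would be minimized only approximately, which is where the bias mentioned above can enter.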
A Tsunami Model for Chile for (Re) Insurance Purposes
NASA Astrophysics Data System (ADS)
Arango, Cristina; Rara, Vaclav; Puncochar, Petr; Trendafiloski, Goran; Ewing, Chris; Podlaha, Adam; Vatvani, Deepak; van Ormondt, Maarten; Chandler, Adrian
2014-05-01
Catastrophe models help (re)insurers to understand the financial implications of catastrophic events such as earthquakes and tsunamis. In earthquake-prone regions such as Chile, (re)insurers need more sophisticated tools to quantify the risks facing their businesses, including models with the ability to estimate secondary losses. The 2010 (M8.8) Maule (Chile) earthquake highlighted the need for quantifying losses from secondary perils such as tsunamis, which can contribute to the overall event losses but are not often modelled. This paper presents some key modelling aspects of a new earthquake catastrophe model for Chile developed by Impact Forecasting in collaboration with Aon Benfield Research partners, focusing on the tsunami component. The model has the capability to model tsunami as a secondary peril: losses due to earthquake ground shaking and induced tsunamis along the Chilean coast are quantified in a probabilistic manner, and also for historical scenarios. The model is implemented in the IF catastrophe modelling platform, ELEMENTS. The probabilistic modelling of earthquake-induced tsunamis uses a stochastic event set that is consistent with the seismic (ground-shaking) hazard developed for Chile, representing simulations of earthquake occurrence patterns for the region. Criteria for selecting tsunamigenic events from the stochastic event set are proposed which take into consideration earthquake location, depth, and the resulting seabed vertical displacement and tsunami inundation depths at the coast. The source modelling software RuptGen by Babeyko (2007) was used to calculate static seabed vertical displacement resulting from earthquake slip. More than 3,600 events were selected for tsunami simulations. Deep- and shallow-water wave propagation is modelled using the Delft3D modelling suite, state-of-the-art software developed by Deltares. The Delft3D-FLOW module is used in two-dimensional hydrodynamic simulations with non-steady flow. 
Earthquake-induced static seabed vertical displacement is used as an input boundary condition to the model. The model is set up hierarchically with three nested domain levels, with 250 domains in total covering the entire Chilean coast. The spatial grid-cell resolution is equal to the native SRTM resolution of approximately 90 m. In addition to the stochastic events, the 1960 (M9.5) Valdivia and 2010 (M8.8) Maule earthquakes are modelled. The modelled tsunami inundation map for the 2010 Maule event is validated through comparison with real observations. The vulnerability component consists of an extensive damage-curve database, including curves for buildings, contents and business interruption for 21 occupancies, 24 structural types and two secondary modifiers, building height and period of construction. The building damage curves are developed using a load-based method in which the building's capacity to resist tsunami loads is treated as equivalent to its design earthquake load capacity. The contents damage and business interruption curves are developed using a deductive approach, i.e., the HAZUS flood vulnerability and business function restoration models are adapted for detailed occupancies and then assigned to the dominant structural types in Chile. The vulnerability component is validated through overall back-testing of the model against observed aggregated earthquake and tsunami losses for client portfolios for the 2010 Maule earthquake.
Nankai-Tokai subduction hazard for catastrophe risk modeling
NASA Astrophysics Data System (ADS)
Spurr, D. D.
2010-12-01
The historical record of Nankai subduction zone earthquakes includes nine event sequences over the last 1300 years. Typical characteristic behaviour is evident, with segments rupturing either co-seismically or as two large earthquakes less than 3 yrs apart (active phase), followed by periods of low seismicity lasting 90-150 yrs or more. Despite the long historical record, the recurrence behaviour and consequent seismic hazard remain uncertain and controversial. In 2005 the Headquarters for Earthquake Research Promotion (HERP) published models for hundreds of faults as part of an official Japanese seismic hazard map. The HERP models have been widely adopted, in part or in full, both within Japan and by the main international catastrophe risk model companies. The time-dependent recurrence modelling we adopt for the Nankai faults departs considerably from HERP in three main areas: ■ A “Linked System” (LS) source model is used to simulate the strong correlation between segment ruptures evident in the historical record, whereas the HERP recurrence estimates assume the Nankai, Tonankai and Tokai segments rupture independently. The LS component models all historical events with a common rupture recurrence cycle for the three segments. System rupture probabilities are calculated assuming Brownian Passage Time (BPT) behaviour, with parameter uncertainties assessed from the full 1300 yr historical record. ■ An independent “Tokai Only” (TO) rupture source is used specifically to model potential “Tokai only” earthquakes. There are widely diverging views on the possibility of this segment rupturing independently. Although all historical Tokai ruptures appear to have been composite Tonankai-Tokai earthquakes, the available data do not preclude the possibility of future “Tokai only” events. 
The HERP model also includes “Tokai only” earthquakes, but its recurrence parameters are based on historical composite Tonankai-Tokai ruptures and do not appear to recognise the complex tectonic environment in the Tokai area. ■ For the Nankai and Tonankai segments only, HERP assumed Time-Predictable (TP) recurrence behaviour. The resulting calculated 30- and 50-year rupture probabilities are considerably higher than the standard renewal-model estimates used in the adopted model. While perhaps more contentious, the weight of available evidence does not appear to be consistent with TP behaviour. For the adopted modelling, the estimated probabilities of no Nankai segment rupture within the next 30 and 50 years are 56% and 27% respectively. The disparity between the models is highlighted by the much lower estimates obtained by HERP (2.5% and 0.039% respectively, as at 2006). Even for just the Nankai and Tonankai segments (i.e., ignoring Tokai), HERP estimated only a 1.7% probability of no rupture in 50 yrs. These estimates can be contrasted with the fact that in 2056 (50 yrs from 2006), the elapsed time since the start of the last rupture cycle (112 yrs) will still be 5 yrs short of the historical mean recurrence interval since 1360. Net effects on nation-wide catastrophe risk estimates for all earthquake sources depend on modelled exposure distributions but can be as much as a factor of two. The differences are important as they impact multi-billion-dollar international risk transfer programs.
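The renewal-model probabilities discussed above can be sketched with a Brownian Passage Time (inverse Gaussian) distribution. The mean recurrence interval, aperiodicity and elapsed time below are illustrative round numbers, not the fitted parameters of the adopted model.

```python
import math

# BPT (inverse Gaussian) renewal sketch. mu = mean recurrence interval (yrs),
# alpha = aperiodicity. Parameter values are illustrative only.

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """P(recurrence time <= t) for the Brownian Passage Time distribution."""
    if t <= 0.0:
        return 0.0
    u1 = (math.sqrt(t / mu) - math.sqrt(mu / t)) / alpha
    u2 = (math.sqrt(t / mu) + math.sqrt(mu / t)) / alpha
    return normal_cdf(u1) + math.exp(2.0 / alpha ** 2) * normal_cdf(-u2)

def prob_no_rupture(elapsed, horizon, mu, alpha):
    """P(no rupture in the next `horizon` yrs | quiet for `elapsed` yrs)."""
    survive = 1.0 - bpt_cdf(elapsed, mu, alpha)
    return (1.0 - bpt_cdf(elapsed + horizon, mu, alpha)) / survive

# e.g. mean interval 117 yrs, aperiodicity 0.25, 62 yrs elapsed since the
# last rupture cycle began (all assumed numbers)
print(round(prob_no_rupture(62.0, 30.0, 117.0, 0.25), 3))
```

The sensitivity of such conditional probabilities to the aperiodicity and to the assumed recurrence model (BPT vs. time-predictable) is exactly what drives the large disparities quoted above.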
Insight into the rupture process of a rare tsunami earthquake from near-field high-rate GPS
NASA Astrophysics Data System (ADS)
Macpherson, K. A.; Hill, E. M.; Elosegui, P.; Banerjee, P.; Sieh, K. E.
2011-12-01
We investigated the rupture duration and velocity of the October 25, 2010 Mentawai earthquake by examining high-rate GPS displacement data. This Mw=7.8 earthquake appears to have ruptured either an up-dip part of the Sumatran megathrust or a fore-arc splay fault, and produced tsunami run-ups on nearby islands that were out of proportion to its magnitude. It has been described as a "slow tsunami earthquake", characterised by a dearth of high-frequency signal and a long rupture duration in low-strength, near-surface media. The event was recorded by the Sumatran GPS Array (SuGAr), a network of high-rate (1 s) GPS sensors located on the nearby islands of the Sumatran fore-arc. For this study, the 1 s time series from 8 SuGAr stations were selected for analysis due to their proximity to the source and their high-quality recordings of both static displacements and dynamic waveforms induced by surface waves. The stations are located at epicentral distances of between 50 and 210 km, providing a unique opportunity to observe the dynamic source processes of a tsunami earthquake from near-source, high-rate GPS. We estimated the rupture duration and velocity by simulating the rupture using the spectral-element method code SPECFEM and comparing the synthetic time series to the observed surface waves. A slip model from a previous study, derived from the inversion of GPS static offsets and tsunami data, and the CRUST2.0 3D velocity model were used as inputs for the simulations. Rupture duration and velocity were varied over a suite of simulations in order to determine the parameters that produce the best-fitting waveforms.
Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Woo, Gordon
2017-04-01
For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process and that the spatial distribution of epicentres can be represented by a set of polygonal source zones within which seismicity is uniform. Building on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Beginning with the Amatrice earthquake of 24 August 2016, the damaging earthquakes that followed in Central Italy over subsequent months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task; it has a material effect on the seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, various salient European applications are given.
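The kernel-smoothed alternative to polygonal source zones can be sketched as follows: each catalogue epicentre contributes a 2-D Gaussian kernel to the activity-rate map, with a bandwidth that grows with magnitude, in the spirit of the author's magnitude-dependent bandwidth H(m) = c·exp(d·m). The constants c and d and the three-event catalogue are invented for illustration.

```python
import math

# Kernel-smoothed seismicity rate: each epicentre contributes a normalized
# 2-D Gaussian. Bandwidth grows with magnitude so large, rare events spread
# their rate over a wider area. c, d and the catalogue are assumed values.

def bandwidth_km(mag, c=1.0, d=0.8):
    return c * math.exp(d * mag)

def smoothed_rate(x, y, catalogue, duration_yr):
    """Events/yr/km^2 at (x, y) from [(x_i, y_i, mag_i), ...], coords in km."""
    rate = 0.0
    for xi, yi, mi in catalogue:
        h = bandwidth_km(mi)
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        rate += math.exp(-r2 / (2.0 * h * h)) / (2.0 * math.pi * h * h)
    return rate / duration_yr

catalogue = [(0.0, 0.0, 5.0), (30.0, 10.0, 6.0), (5.0, -5.0, 4.5)]
near = smoothed_rate(0.0, 0.0, catalogue, duration_yr=50.0)
far = smoothed_rate(300.0, 300.0, catalogue, duration_yr=50.0)
print(near > far)
```

No zone boundaries are needed: the rate field follows the epicentres themselves, which is the property the abstract emphasizes for poorly delineated seismotectonic zones.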
Optimization of the Number and Location of Tsunami Stations in a Tsunami Warning System
NASA Astrophysics Data System (ADS)
An, C.; Liu, P. L. F.; Pritchard, M. E.
2014-12-01
Optimizing the number and location of tsunami stations in designing a tsunami warning system is an important and practical problem. It is desirable to maximize the capability of the data obtained from the stations to constrain the earthquake source parameters, while minimizing the number of stations. During the 2011 Tohoku tsunami event, 28 coastal gauges and DART buoys in the near field recorded tsunami waves, providing an opportunity to assess the effectiveness of those stations in identifying the earthquake source parameters. Assuming a single-plane fault geometry, inversions of tsunami data from combinations of various numbers (1-28) of stations and locations are conducted, and their effectiveness is evaluated according to the residuals of the inverse method. Results show that the optimal locations of stations depend on the number of stations used. If the stations are optimally located, 2-4 stations are sufficient to constrain the source parameters. Regarding the optimal locations, stations must be spread uniformly in all directions, which is not surprising. It is also found that stations within the source region generally constrain the earthquake source less well than stations farther from the source, owing to the exaggeration of model error in matching large-amplitude waves at near-source stations. Quantitative discussions of these findings will be given in the presentation. Applying a similar analysis to the Manila Trench, based on artificial scenarios of earthquakes and tsunamis, the optimal locations of tsunami stations are obtained, which provides guidance for deploying a tsunami warning system in this region.
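The subset-search experiment described above can be sketched with a toy linear forward model d = G m (rows = stations, columns = two subfault slips): enumerate station combinations, invert each by least squares, and rank subsets by how well the recovered model predicts all stations. Every number below is invented; the real study inverts tsunami waveforms, not scalars.

```python
import itertools

# Toy station-selection experiment. lstsq2 solves the 2-unknown normal
# equations with an explicit 2x2 inverse; the search ranks 3-station
# subsets by the residual of their model against ALL stations.

def lstsq2(G, d):
    a = sum(g[0] * g[0] for g in G); b = sum(g[0] * g[1] for g in G)
    c = sum(g[1] * g[1] for g in G)
    r0 = sum(g[0] * di for g, di in zip(G, d))
    r1 = sum(g[1] * di for g, di in zip(G, d))
    det = a * c - b * b
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)

def residual(G, d, m):
    return sum((di - (g[0] * m[0] + g[1] * m[1])) ** 2
               for g, di in zip(G, d)) ** 0.5

G_all = [(1.0, 0.2), (0.9, 0.3), (0.2, 1.1), (0.3, 0.9), (0.6, 0.6)]
true_m = (2.0, 1.0)                                  # "true" subfault slips
noise = [0.05, -0.04, 0.03, -0.02, 0.05]
d_all = [g[0] * true_m[0] + g[1] * true_m[1] + e for g, e in zip(G_all, noise)]

best = None
for idx in itertools.combinations(range(5), 3):      # all 3-station subsets
    G = [G_all[i] for i in idx]
    m = lstsq2(G, [d_all[i] for i in idx])
    score = residual(G_all, d_all, m)                # judged against all data
    best = min(best, (score, idx)) if best else (score, idx)
print(best[1])
```

Even in this toy, subsets whose rows sample both columns of G (i.e., stations spread over both "directions") recover the slips best, mirroring the azimuthal-coverage finding above.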
Regional spectral analysis of three moderate earthquakes in Northeastern North America
Boatwright, John; Seekins, Linda C.
2011-01-01
We analyze Fourier spectra obtained from the horizontal components of broadband and accelerogram data from the 1997 Cap-Rouge, the 2002 Ausable Forks, and the 2005 Rivière-du-Loup earthquakes, recorded by Canadian and American stations sited on rock at hypocentral distances from 23 to 602 km. We check the recorded spectra closely for anomalies that might result from site resonance or source effects. We use Beresnev and Atkinson’s (1997) near-surface velocity structures and Boore and Joyner’s (1997) quarter-wavelength method to estimate site response at hard- and soft-rock sites. We revise the Street et al. (1975) model for geometrical spreading, adopting a crossover distance of r0 = 50 km instead of 100 km. We obtain an average attenuation of Q = (410±25)f^(0.50±0.03) for S+Lg+surface waves with ray paths in the Appalachian and southeastern Grenville Provinces. We correct the recorded spectra for attenuation and site response to estimate source spectral shape and radiated energy for these three earthquakes and the 1988 M 5.8 Saguenay earthquake. The Brune stress drops range from 130 to 419 bars, and the apparent stresses range from 39 to 63 bars. The corrected source spectral shapes of these earthquakes are somewhat variable for frequencies from 0.2 to 2 Hz, falling slightly below the fitted Brune spectra.
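The spectral corrections described above combine three factors: a Brune omega-squared source spectrum, anelastic attenuation with the paper's mean fit Q(f) = 410 f^0.5, and Street-et-al.-style geometrical spreading with the revised crossover r0 = 50 km. A minimal sketch, in which the shear-wave speed and the corner frequency are assumed values:

```python
import math

# Brune source spectrum, Q(f) attenuation and two-segment geometrical
# spreading. Q(f) and r0 follow the abstract's fitted values; BETA_KM_S
# and the corner frequency passed by the caller are assumptions.

BETA_KM_S = 3.5     # assumed shear-wave speed

def brune_spectrum(f, omega0, fc):
    """Displacement source spectrum: flat below fc, f^-2 fall-off above."""
    return omega0 / (1.0 + (f / fc) ** 2)

def q_of_f(f):
    return 410.0 * f ** 0.5

def geometric_spreading(r_km, r0=50.0):
    """1/r decay out to the crossover r0, 1/sqrt(r*r0) beyond."""
    return 1.0 / r_km if r_km <= r0 else 1.0 / math.sqrt(r_km * r0)

def attenuated_amplitude(f, omega0, fc, r_km):
    path = math.exp(-math.pi * f * r_km / (q_of_f(f) * BETA_KM_S))
    return brune_spectrum(f, omega0, fc) * geometric_spreading(r_km) * path

# the spreading function is continuous at the crossover distance
print(round(geometric_spreading(50.0), 4), round(geometric_spreading(50.0001), 4))
```

Dividing a recorded spectrum by the spreading and path terms, as the paper does, leaves an estimate of the source spectral shape from which the Brune stress drop can be read off the corner frequency.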
NASA Astrophysics Data System (ADS)
Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes
2018-04-01
Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.
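The Bayesian estimation described above can be illustrated with a minimal Metropolis sampler, reduced to a single parameter (uniform slip) with a toy linear forward model and a flat prior. Real inversions of this kind sample location, geometry and slip jointly with full InSAR/GPS data covariances; every number here is invented.

```python
import math
import random

# One-parameter Metropolis sketch: Gaussian likelihood around a toy linear
# forward model, flat prior on slip in [0, 5] m, out-of-bounds proposals
# rejected, first half of the chain discarded as burn-in.

def log_likelihood(slip, greens, data, sigma):
    return -0.5 * sum(((d - g * slip) / sigma) ** 2 for g, d in zip(greens, data))

def metropolis(greens, data, sigma=0.01, n=20000, step=0.1, seed=2):
    rng = random.Random(seed)
    slip, ll = 2.0, log_likelihood(2.0, greens, data, sigma)
    samples = []
    for _ in range(n):
        prop = slip + rng.gauss(0.0, step)
        if 0.0 <= prop <= 5.0:                       # flat prior bounds
            ll_p = log_likelihood(prop, greens, data, sigma)
            if rng.random() < math.exp(min(0.0, ll_p - ll)):
                slip, ll = prop, ll_p
        samples.append(slip)
    return samples[n // 2:]                          # discard burn-in

greens = [0.05, 0.04, 0.08, 0.02]                    # toy Green's coefficients
data = [g * 2.0 for g in greens]                     # noise-free data, slip = 2 m
post = metropolis(greens, data)
mean = sum(post) / len(post)
print(round(mean, 1))
```

The spread of the retained samples plays the role of the marginal distributions in the study: weakly constrained parameters (like the offshore fault end above) would simply show wide, skewed marginals.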
Energy Partition and Variability of Earthquakes
NASA Astrophysics Data System (ADS)
Kanamori, H.
2003-12-01
During an earthquake the potential energy (strain energy + gravitational energy + rotational energy) is released, and the released potential energy (ΔW) is partitioned into radiated energy (ER), fracture energy (EG), and thermal energy (EH). How ΔW is partitioned into these energies controls the behavior of an earthquake. The merit of the slip-weakening concept is that only ER and EG control the dynamics, and EH can be treated separately to discuss the thermal characteristics of an earthquake. In general, if EG/ER is small, the event is "brittle"; if EG/ER is large, the event is "quasi-static" or, in more common terms, a "slow earthquake" or "creep". If EH is very large, the event may well be called a thermal runaway rather than an earthquake. The difference in energy partition has important implications for rupture initiation, evolution, and the excitation of long-period ground motions from very large earthquakes. We review the current state of knowledge on this problem in light of seismological observations and the basic physics of fracture. With seismological methods, we can measure only ER and the lower bound of ΔW, denoted ΔW0; estimation of the other energies involves many assumptions. ER: Although ER can be directly measured from the radiated waves, its determination is difficult because a large fraction of the energy radiated at the source is attenuated during propagation. With the commonly used teleseismic and regional methods, only for events with MW>7 and MW>4, respectively, can we directly measure more than 10% of the total radiated energy. The rest must be estimated after correction for attenuation. Thus, large uncertainties are involved, especially for small earthquakes. ΔW0: To estimate ΔW0, an estimate of the source dimension is required. Again, only for large earthquakes can the source dimension be estimated reliably. With the source dimension, the static stress drop, ΔσS, and ΔW0 can be estimated. 
EG: Seismologically, EG is the energy mechanically dissipated during faulting. In the context of the slip-weakening model, EG can be estimated from ΔW0 and ER. Alternatively, EG can be estimated from laboratory data on the surface energy, the grain size, and the total volume of newly formed fault gouge. This method suggests that, for crustal earthquakes with MW > 7, EG/ER is very small, less than 0.2 even in extreme cases. This is consistent with the EG estimated by seismological methods, and with the fast rupture speeds of most large earthquakes. For shallow subduction-zone earthquakes, EG/ER varies substantially depending on the tectonic environment. EH: Direct estimation of EH is difficult. However, even with modest friction, EH can be very large, enough to melt or even dissociate a significant amount of material near the slip zone for large events with large slip, and the associated thermal effects may have significant consequences for fault dynamics. The energy partition varies significantly for different types of earthquakes, e.g., large earthquakes on mature faults, large earthquakes on faults with low slip rates, subduction-zone earthquakes, and deep-focus earthquakes; this variability manifests itself in differences in the evolution of the seismic slip pattern. The different behaviors will be illustrated using examples from large earthquakes, including the 2001 Kunlun, 1998 Balleny Islands, 1994 Bolivia, 2001 India, 1999 Chi-Chi, and 2002 Denali earthquakes.
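The qualitative classification above can be made concrete with a small numeric sketch. The radiation efficiency, the EG/ER threshold, and the example energies below are illustrative assumptions, not values from the abstract.

```python
# Hypothetical sketch of the energy-partition bookkeeping described above.
# The 0.5 threshold and the example energies are assumptions for illustration.

def radiation_efficiency(e_r, e_g):
    """eta_R = ER / (ER + EG): near 1 for brittle events, small for slow slip."""
    return e_r / (e_r + e_g)

def classify_event(e_r, e_g, brittle_threshold=0.5):
    """Label an event from its EG/ER ratio (threshold is an assumption)."""
    ratio = e_g / e_r
    return "brittle" if ratio < brittle_threshold else "quasi-static / slow"

# Illustrative numbers (joules): a crustal event with EG/ER = 0.2,
# consistent with the upper bound quoted above for MW > 7 crustal earthquakes.
er, eg = 1.0e15, 0.2e15
print(classify_event(er, eg))          # brittle
print(radiation_efficiency(er, eg))
```

Under these assumptions the radiation efficiency is about 0.83, i.e., most of the mechanically available energy is radiated, as expected for a brittle event.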
Exploiting broadband seismograms and the mechanism of deep-focus earthquakes
NASA Astrophysics Data System (ADS)
Jiao, Wenjie
1997-09-01
Modern broadband seismic instrumentation has provided enormous opportunities to retrieve information in almost any frequency band of seismic interest. In this thesis, we have investigated the long-period responses of broadband seismometers and the problem of recovering actual ground motion. For the first time, we recovered the static offset for an earthquake from dynamic seismograms. The very long period near- and intermediate-field waves from the large 1994 Bolivian deep earthquake (depth = 630 km, MW = 8.2) and the large 1997 Argentina deep earthquake (depth = 285 km, MW = 7.1) are successfully recovered from the portable broadband recordings of the BANJO and APVC networks. These waves provide another dynamic window into the seismic source process and may provide unique information to help constrain the source dynamics of deep earthquakes in the future. We have developed a new method to locate global explosion events based on broadband waveform stacking and simulated annealing. This method utilizes the information provided by the full broadband waveforms. Instead of "picking times", the character of the wavelet is used for locating events. The application of this methodology to a Lop Nor nuclear explosion was very successful and suggests a procedure for automatic monitoring. We have discussed the problem of deep earthquakes from the viewpoint of rock mechanics and seismology. The rupture propagation of deep earthquakes requires a slip-weakening process unlike that for shallow events. However, this process is not necessarily the same as the process which triggers the rupture. Partial melting due to stress release is proposed to account for the slip-weakening process in deep earthquake rupture. The energy required for partial melting in this model is on the same order as the maximum energy required for the slip-weakening process in shallow earthquake rupture. 
However, the verification of this model requires experimental work on the thermodynamic properties of rocks under non-hydrostatic stress. The solution of the deep earthquake problem will require an interdisciplinary study of seismology, high-pressure rock mechanics, and mineralogy.
Frankel, Arthur D.; Stephenson, William J.; Carver, David L.; Williams, Robert A.; Odum, Jack K.; Rhea, Susan
2007-01-01
This report presents probabilistic seismic hazard maps for Seattle, Washington, based on over 500 3D simulations of ground motions from scenario earthquakes. These maps include 3D sedimentary basin effects and rupture directivity. Nonlinear site response for soft-soil sites of fill and alluvium was also applied in the maps. The report describes the methodology for incorporating source and site dependent amplification factors into a probabilistic seismic hazard calculation. 3D simulations were conducted for the various earthquake sources that can affect Seattle: Seattle fault zone, Cascadia subduction zone, South Whidbey Island fault, and background shallow and deep earthquakes. The maps presented in this document used essentially the same set of faults and distributed-earthquake sources as in the 2002 national seismic hazard maps. The 3D velocity model utilized in the simulations was validated by modeling the amplitudes and waveforms of observed seismograms from five earthquakes in the region, including the 2001 M6.8 Nisqually earthquake. The probabilistic seismic hazard maps presented here depict 1 Hz response spectral accelerations with 10%, 5%, and 2% probabilities of exceedance in 50 years. The maps are based on determinations of seismic hazard for 7236 sites with a spacing of 280 m. The maps show that the most hazardous locations for this frequency band (around 1 Hz) are soft-soil sites (fill and alluvium) within the Seattle basin and along the inferred trace of the frontal fault of the Seattle fault zone. The next highest hazard is typically found for soft-soil sites in the Duwamish Valley south of the Seattle basin. In general, stiff-soil sites in the Seattle basin exhibit higher hazard than stiff-soil sites outside the basin. Sites with shallow bedrock outside the Seattle basin have the lowest estimated hazard for this frequency band.
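The exceedance probabilities quoted for the maps can be translated into annual rates and return periods under the usual Poisson assumption. A minimal sketch (the Poisson conversion is standard practice; the specific outputs are not from the report):

```python
import math

# Convert "P% probability of exceedance in T years" (as used in the maps
# above) into an equivalent annual rate and return period, assuming a
# Poisson occurrence process: P = 1 - exp(-lambda * T).
def annual_rate(p_exceed, t_years):
    return -math.log(1.0 - p_exceed) / t_years

for p in (0.10, 0.05, 0.02):
    lam = annual_rate(p, 50.0)
    print(f"{p:.0%} in 50 yr -> annual rate {lam:.5f}, return period {1/lam:.0f} yr")
```

For example, the 2%-in-50-years level corresponds to a return period of roughly 2475 years, which is why that probability level is often described as the "2500-year" ground motion.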
López-Venegas, A.M.; ten Brink, Uri S.; Geist, Eric L.
2008-01-01
The October 11, 1918 ML 7.5 earthquake in the Mona Passage between Hispaniola and Puerto Rico generated a local tsunami that claimed approximately 100 lives along the western coast of Puerto Rico. The area affected by this tsunami is now significantly more populated. Newly acquired high-resolution bathymetry and seismic reflection lines in the Mona Passage show a fresh submarine landslide 15 km northwest of Rincón in northwestern Puerto Rico and in the vicinity of the first published earthquake epicenter. The landslide area is approximately 76 km2 and probably displaced a total volume of 10 km3. The landslide's headscarp is at a water depth of 1200 m, with the debris flow extending to a water depth of 4200 m. Submarine telegraph cables were reported cut by a landslide in this area following the earthquake, further suggesting that the landslide was the result of the October 11, 1918 earthquake. On the other hand, the location of the previously suggested source of the 1918 tsunami, a normal fault along the east wall of Mona Rift, does not show recent seafloor rupture. Using the extended, weakly non-linear hydrodynamic equations implemented in the program COULWAVE, we modeled the tsunami as generated by a landslide with a duration of 325 s (corresponding to an average speed of ~27 m/s) and with the observed dimensions and location. Calculated marigrams show a leading depression wave followed by a maximum positive amplitude in agreement with the reported polarity, relative amplitudes, and arrival times. Our results suggest this newly identified landslide, which was likely triggered by the 1918 earthquake, was the primary cause of the October 11, 1918 tsunami and not the earthquake itself. Results from this study should be useful to help discern poorly constrained tsunami sources in other case studies.
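The quoted kinematics can be checked with simple arithmetic: a 325 s slide at an average speed of ~27 m/s implies a runout of roughly 8.8 km, which is plausible for debris descending from 1200 m to 4200 m water depth. The duration and speed are from the abstract; the runout is derived here.

```python
# Consistency check of the landslide kinematics quoted above.
duration_s = 325.0    # modeled slide duration (from the abstract)
avg_speed_ms = 27.0   # average slide speed (from the abstract)
runout_km = duration_s * avg_speed_ms / 1000.0
print(round(runout_km, 1))  # ~8.8 km of runout
```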
NASA Astrophysics Data System (ADS)
Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank
2015-04-01
Earthquakes occurring close to hydrocarbon fields under production are often critically scrutinized as possibly induced or triggered. However, clear and testable rules to discriminate between such events have rarely been developed and tested. This unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are accounted for by integration over probability density functions. The probability that events have been human triggered/induced is derived from modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei-of-strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in the case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and size of the rupture plane is advantageous for the method. If the rupture plane has been estimated, the discrimination between induced and merely triggered events is theoretically possible if the probability functions are convolved with a rupture fault filter. 
We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly "human induced," "not even human triggered," and a third case in between both extremes.
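The full method integrates over probability densities of location and source parameters; the toy calculation below only illustrates the flavor of comparing a stress-change-driven seismicity rate against the natural background rate. It uses only the instantaneous rate jump of the Dieterich rate-and-state model, R/r = exp(ΔCFS/Aσ), and all parameter values are invented for illustration.

```python
import math

# Toy illustration of the discrimination idea: compare the excess seismicity
# rate predicted from a Coulomb stress change (instantaneous Dieterich
# rate-and-state rate jump R/r = exp(dCFS / (A*sigma))) with the natural
# background rate, and report the probability that an event in that period
# was human triggered/induced. All numbers are assumptions.
def triggered_probability(d_cfs_mpa, a_sigma_mpa, background_rate):
    rate_jump = math.exp(d_cfs_mpa / a_sigma_mpa)
    excess_rate = background_rate * (rate_jump - 1.0)  # human-caused excess
    return excess_rate / (excess_rate + background_rate)

# Example: a 0.1 MPa stress increase with A*sigma = 0.05 MPa strongly favors
# a triggered origin even against a nonzero tectonic background.
p = triggered_probability(0.1, 0.05, background_rate=0.01)
print(round(p, 3))
```

A zero stress change returns probability 0 (purely natural), and large stress changes drive the probability toward 1, mirroring the "induced" versus "not even triggered" end-members of the three test cases.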
Two-dimensional seismic velocity models of southern Taiwan from TAIGER transects
NASA Astrophysics Data System (ADS)
McIntosh, K. D.; Kuochen, H.; Van Avendonk, H. J.; Lavier, L. L.; Wu, F. T.; Okaya, D. A.
2013-12-01
We use a broad combination of wide-angle seismic data sets to develop high-resolution, crustal-scale, two-dimensional velocity models across southern Taiwan and the adjacent Huatung Basin. The data were recorded primarily during the TAIGER project and include records of thousands of marine airgun shots, several land explosive sources, and ~90 earthquakes. Both airgun sources and earthquake data were recorded by dense land arrays, and ocean bottom seismographs (OBS) recorded airgun sources east of Taiwan. This combination of data sets enables us to develop a high-resolution upper- to mid-crustal model defined by marine and explosive sources, while also constraining the full crustal structure - with depths approaching 50 km - by using the earthquake and explosive sources. These data and the resulting models are particularly important for understanding the development of arc-continent collision in Taiwan. McIntosh et al. (2013) have shown that highly extended continental crust of the northeastern South China Sea rifted margin is underthrust at the Manila trench southwest of Taiwan but then is structurally underplated to the accretionary prism. This process of basement accretion is confirmed in the southern Central Range of Taiwan, where basement outcrops can be directly linked to high seismic velocities measured in the accretionary prism well south of the continental shelf, even south of Taiwan. These observations indicate that the southern Central Range begins to grow well before there is any direct interaction between the North Luzon arc and the Eurasian continent. Our transects provide information on how the accreted mass behaves as it approaches the continental shelf and on deformation of the arc and forearc as this occurs. We suggest that arc-continent collision in Taiwan actually develops as arc-prism-continent collision.
Hanson, Stanley L.; Perkins, David M.
1995-01-01
The construction of a probabilistic ground-motion hazard map for a region follows a sequence of analyses beginning with the selection of an earthquake catalog and ending with the mapping of calculated probabilistic ground-motion values (Hanson and others, 1992). An integral part of this process is the creation of sources used for the calculation of earthquake recurrence rates and ground motions. These sources consist of areas and lines that are representative of geologic or tectonic features and faults. After the design of the sources, it is necessary to arrange the coordinate points in a particular order compatible with the input format for the SEISRISK-III program (Bender and Perkins, 1987). Source zones are usually modeled as a point-rupture source. Where applicable, linear rupture sources are modeled with articulated lines, representing known faults, or with a field of parallel lines, representing a generalized distribution of hypothetical faults. Based on the distribution of earthquakes throughout the individual source zones (or a collection of several sources), earthquake recurrence rates are computed for each of the sources, and minimum and maximum magnitudes are assigned. From 1978 to 1980, several conferences were held by the USGS to solicit information on regions of the United States for the purpose of creating source zones for the computation of probabilistic ground motions (Thenhaus, 1983). As a result of these regional meetings and previous work in the Pacific Northwest (Perkins and others, 1980), the California continental shelf (Thenhaus and others, 1980), and the Eastern outer continental shelf (Perkins and others, 1979), a consensus set of source zones was agreed upon and subsequently used to produce a national ground motion hazard map for the United States (Algermissen and others, 1982). 
In this report and on the accompanying disk we provide a complete list of source areas and line sources as used for the 1982 and later 1990 seismic hazard maps for the conterminous U.S. and Alaska. These source zones are represented in the input form required for the hazard program SEISRISK-III, and they include the attenuation table and several other input parameter lines normally found at the beginning of an input data set for SEISRISK-III.
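The recurrence-rate step described above is conventionally based on a Gutenberg-Richter relation, log10 N(≥M) = a − bM, fit to the catalog of each source zone. The sketch below shows a minimal version of that fit via least squares on binned cumulative counts; the catalog, bin width, and magnitude limits are invented for illustration and are not from the report.

```python
import math

# Minimal Gutenberg-Richter fit for one hypothetical source zone:
# bin the catalog above m_min, form annual cumulative rates N(>=M),
# and fit log10 N = a - b*M by least squares. All data are invented.
def gutenberg_richter_fit(mags, years, m_min=4.0, dm=0.5):
    bins = {}
    for m in mags:
        if m >= m_min:
            k = int((m - m_min) / dm)
            bins[k] = bins.get(k, 0) + 1
    xs, ys = [], []
    for k in sorted(bins):
        n_cum = sum(v for kk, v in bins.items() if kk >= k)  # N(>= bin edge)
        xs.append(m_min + k * dm)
        ys.append(math.log10(n_cum / years))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my + b * mx
    return a, b  # annual-rate a-value, b-value

mags = [4.1, 4.2, 4.3, 4.6, 4.7, 5.0, 5.1, 5.6, 6.2]  # hypothetical catalog
a, b = gutenberg_richter_fit(mags, years=30.0)
print(round(a, 2), round(b, 2))
```

In a SEISRISK-III workflow, the fitted rates would then be truncated between the assigned minimum and maximum magnitudes for each source.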
NASA Astrophysics Data System (ADS)
Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.
2014-12-01
The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-NET and KiK-net) revealed complex ground-motion patterns attributed to source effects, capturing detailed information about the rupture process. The seismic stations surrounding the Miyagi region (e.g., MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong-motion data performed by Lee et al. (2011). In this model, two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip during large earthquakes may occur due to frictional melting and thermal fluid pressurization effects. Kanamori & Heaton (2002) argued that during faulting of large earthquakes the temperature rises high enough to cause melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on a slip-weakening friction law with two sudden, sequential stress drops. Our model starts like a M7-8 earthquake that barely breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea-floor displacements are in agreement with 1 Hz GPS displacements (GEONET). 
The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea-floor displacement reaches 8-10 meters of uplift close to the trench, which may explain the devastating tsunami that followed the Tohoku earthquake. To investigate the impact of such a large uplift, we ran tsunami simulations with the slip reactivation model using sam(oa)2 (Meister et al., 2012), a state-of-the-art finite-volume framework, to simulate the resulting tsunami waves.
NASA Astrophysics Data System (ADS)
Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.
Investigations on the interdependence of different source parameters are an important task to gain more insight into the mechanics and dynamics of earthquake rupture, to model source processes, and to make predictions for ground motion at the surface. These interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm earthquakes, especially for those of the Vogtland/NW-Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia region, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters, such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc, and rise time τ, and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ≤ ML ≤ 3.7) and there are also near-field observations at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based principal co-ordinate system, which we determine from the covariance matrix of the P- and S-displacement, respectively. Data preparation, determination of the distinct source parameters, as well as statistical interpretation of the results will be presented with examples. The results will be discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.
NASA Astrophysics Data System (ADS)
Hintersberger, Esther; Decker, Kurt; Lomax, Johanna; Lüthgens, Christopher
2018-02-01
Intraplate regions characterized by low rates of seismicity are challenging for seismic hazard assessment, mainly for two reasons. Firstly, evaluation of historic earthquake catalogues may not reveal all active faults that contribute to regional seismic hazard. Secondly, slip rate determination is limited by sparse geomorphic preservation of slowly moving faults. In the Vienna Basin (Austria), moderate historical seismicity (Imax,obs/Mmax,obs = 8/5.2) concentrates along the left-lateral strike-slip Vienna Basin Transfer Fault (VBTF). In contrast, several normal faults branching out from the VBTF show neither historical nor instrumental earthquake records, although geomorphological data indicate Quaternary displacement along those faults. Here we present a palaeoseismological dataset from three trenches crossing one of these splay faults, the Markgrafneusiedl Fault (MF), located about 15 km outside of Vienna, the Austrian capital, in order to evaluate its seismic potential. Comparing the observations of the different trenches, we found evidence for five to six surface-breaking earthquakes during the last 120 kyr, with the youngest event occurring at around 14 ka. The derived surface displacements lead to magnitude estimates ranging between 6.2 ± 0.5 and 6.8 ± 0.4. The data can be interpreted by two possible slip models, with slip model 1 showing more regular recurrence intervals of about 20-25 kyr between the earthquakes with M ≥ 6.5, and slip model 2 indicating that such earthquakes cluster in two time intervals in the last 120 kyr. Direct correlation between trenches favours slip model 2 as the more plausible option. Trench observations also show that structural and sedimentological records of strong earthquakes with small surface offset have only low preservation potential. Therefore, the earthquake frequency for magnitudes between 6 and 6.5 cannot be constrained by the trenching records. 
Vertical slip rates of 0.02-0.05 mm/a derived from the trenches compare well to geomorphically derived slip rates of 0.02-0.09 mm/a. Magnitude estimates from fault dimensions suggest that the largest earthquakes observed in the trenches activated the entire fault surface of the MF, including the basal detachment that links the normal fault with the VBTF. The most important implications of these palaeoseismological results for seismic hazard assessment are as follows. (1) The MF is an active seismic source, capable of rupturing the surface despite the lack of historical earthquakes. (2) The MF is kinematically and geologically equivalent to a number of other splay faults of the VBTF. It is reasonable to assume that these faults are potential sources of large earthquakes as well. The frequency of strong earthquakes near Vienna is therefore expected to be significantly higher than the earthquake frequency reconstructed for the MF alone. (3) Although such events are rare, the potential for earthquake magnitudes equal to or greater than M = 7.0 in the Vienna Basin should be considered in seismic hazard studies.
NASA Astrophysics Data System (ADS)
Jones, Kenneth B., II
2015-04-01
Many attempts have been made to determine an earthquake forecasting method and, in turn, warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic wave model, various hypotheses were formed, but only two seemed to take shape, with the most interesting one requiring a magnetometer of a unique design. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, results have shown wide variability, and problems remain with what exactly is forecastable and with the investigative direction of a true precursor. After a number of custom rock experiments, the two hypotheses were thoroughly tested against the EM wave model. The first hypothesis involved sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio wave generation. The second hypothesis performed best, with highly reproducible data and radio wave generation and detection, and it succeeded numerous times in each laboratory test administered. In addition, internally introduced force on a small scale stressed a number of select rock types to emit radio waves well before catastrophic failure, and failure always went to completion. Comparatively, at a larger scale, highly detailed studies were procured to establish legitimate waveguides from potential hypocenters to epicenters and to map the results accordingly. Field testing in Southern California from 2006 to 2011 and outside the NE Texas town of Timpson in February 2013 was conducted to detect similar, laboratory-generated radio wave sources. At the Southern California field sites, signals were detected in numerous directions with varying amplitudes; therefore, a reactive approach was investigated in hopes of detecting possible aftershocks from large, tectonically related M5.0+ earthquakes. 
At the Timpson, Texas field sites, a proactive detection approach was taken due to the heavy presence of hydraulic fracturing activity for regional hydrocarbon extraction, which appeared to be causing several rare M4.0+ earthquakes. As a result, the detailed Southern California and Timpson, Texas field studies led to the improved design of two newer prototype antennae and the first-ever earthquake epicenter map of this kind. With more antennae and continuous monitoring, more fracture cycles can be established well ahead of the next earthquake. In addition, field data could be monitored over longer periods by the proper authorities, leading to significantly improved earthquake forecasting. The EM precursor determined by this method appears to surpass all prior precursor claims, and the general public may finally receive long-overdue forecasting.
NASA Astrophysics Data System (ADS)
Shen, X. H.; Zhang, X.; Liu, J.; Zhao, S. F.; Yuan, G. P.
2015-04-01
Ionospheric perturbations in plasma parameters have been observed before large earthquakes, but the correlation between different parameters has been less studied in previous research. The present study focuses on the relationship between electron density (Ne) and electron temperature (Te) observed by the DEMETER (Detection of Electro-Magnetic Emissions Transmitted from Earthquake Regions) satellite during local nighttime, for which a positive correlation has been revealed near the equator and a weak correlation at mid- and low latitudes over both hemispheres. Based on this normal background analysis, negative correlation, the rarest case among all Ne-Te points, is studied before and after large earthquakes at mid- and low latitudes. The multiparameter observations exhibited typical synchronous disturbances before the 2010 Chile M8.8 earthquake and the 2007 Pu'er M6.4 earthquake, and Te varied inversely with Ne over the epicentral areas. Moreover, a statistical analysis has been performed by selecting orbits within a distance of 1000 km and ±7 days before and after global earthquakes. Enhanced negative correlation coefficients lower than -0.5 between Ne and Te are found in 42% of the points connected with earthquakes. The correlation median values at different seismic levels show a clear decrease for earthquakes larger than M7. Finally, the electric-field coupling model is discussed; furthermore, a digital simulation has been carried out with SAMI2 (Sami2 is Another Model of the Ionosphere), which illustrates that an external electric field in the ionosphere can strengthen the negative correlation between Ne and Te at lower latitudes relative to the disturbance source, owing to the effects of the geomagnetic field. 
Although seismic activity is not the only source to cause the inverse Ne-Te variations, the present results demonstrate one possibly useful tool in seismo-electromagnetic anomaly differentiation, and a comprehensive analysis with multiple parameters helps to further understand the seismo-ionospheric coupling mechanism.
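The screening described above reduces to computing a Pearson correlation between Ne and Te samples along an orbit segment and flagging segments with r below the −0.5 threshold. A minimal sketch (the synthetic samples are invented; only the threshold comes from the text):

```python
import math

# Sketch of the Ne-Te correlation screening: compute the Pearson correlation
# for one orbit segment and flag it if r < -0.5 (threshold from the study).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic, inversely varying samples (illustrative only, arbitrary units):
ne = [1.0, 2.0, 3.0, 4.0, 5.0]
te = [5.1, 4.0, 3.2, 1.9, 1.0]
r = pearson(ne, te)
print(r < -0.5)  # True: this segment would be flagged as a candidate anomaly
```

In the study, the statistic of interest is then the fraction of flagged segments near epicenters (42%) versus the background.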
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing method is to reduce smoothing distances where seismicity rates are high, causing locally increased seismicity rates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. 
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
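The fixed-versus-adaptive contrast can be sketched in a few lines: with adaptive smoothing, each epicenter's Gaussian bandwidth is its distance to the nth nearest neighboring epicenter, so dense clusters get narrow kernels and isolated events get broad ones. The 2-D Gaussian kernel, the n-value, and the toy coordinates below are illustrative assumptions, not the ASHM implementation.

```python
import math

# Sketch of adaptive kernel smoothing of epicenters: the bandwidth of each
# event is the distance to its n-th nearest neighbor (assumed 2-D Gaussian
# kernels; coordinates in km, all values illustrative).
def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def adaptive_bandwidths(epicenters, n=2, min_bw=1.0):
    bws = []
    for i, p in enumerate(epicenters):
        d = sorted(dist(p, q) for j, q in enumerate(epicenters) if j != i)
        bws.append(max(d[n - 1], min_bw))  # n-th nearest-neighbor distance
    return bws

def smoothed_rate(site, epicenters, bandwidths):
    rate = 0.0
    for p, h in zip(epicenters, bandwidths):
        r2 = dist(site, p) ** 2
        rate += math.exp(-r2 / (2 * h * h)) / (2 * math.pi * h * h)
    return rate

# Dense cluster plus one isolated event:
eqs = [(0, 0), (1, 0), (0, 1), (1, 1), (50, 50)]
bws = adaptive_bandwidths(eqs, n=2)
print(bws[0] < bws[-1])  # True: sparse seismicity gets a larger bandwidth
```

This is exactly the behavior described above: rates concentrate near clustered epicenters and spread out where seismicity is sparse, which drives the local hazard differences relative to fixed-bandwidth models.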
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. 
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
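The fixed- versus adaptive-bandwidth smoothing contrast described above can be sketched as follows. This is a minimal illustration (function and parameter names are hypothetical, not the authors' code), assuming epicenters are already projected to km and each event contributes a unit-mass 2-D Gaussian kernel whose bandwidth is either a fixed distance or, Helmstetter-style, the distance to its n-th nearest neighboring epicenter:

```python
import numpy as np

def smoothed_rate(epicenters, grid, n_neighbor=None, fixed_km=50.0):
    """Gaussian-kernel seismicity rate at grid nodes (events per unit area).

    If n_neighbor is given, each epicenter gets an adaptive bandwidth equal
    to the distance to its n-th nearest neighboring epicenter; otherwise a
    single fixed bandwidth is used.  Coordinates are assumed in km.
    """
    epicenters = np.asarray(epicenters, float)
    grid = np.asarray(grid, float)
    # pairwise epicenter-epicenter distances (for adaptive bandwidths)
    d_ee = np.linalg.norm(epicenters[:, None, :] - epicenters[None, :, :], axis=-1)
    if n_neighbor is not None:
        # sorted row: column 0 is self (distance 0), so column n is the
        # n-th nearest neighbor
        bw = np.sort(d_ee, axis=1)[:, n_neighbor]
        bw = np.maximum(bw, 1.0)  # floor to avoid singular kernels
    else:
        bw = np.full(len(epicenters), fixed_km)
    # epicenter-to-grid-node distances
    d_eg = np.linalg.norm(epicenters[:, None, :] - grid[None, :, :], axis=-1)
    # each event contributes a unit-mass 2-D Gaussian
    k = np.exp(-0.5 * (d_eg / bw[:, None]) ** 2) / (2 * np.pi * bw[:, None] ** 2)
    return k.sum(axis=0)
```

In a dense cluster the adaptive bandwidths collapse toward the nearest-neighbor spacing, concentrating rate near the epicenters, while an isolated event keeps a broad kernel, which is exactly the behavior the abstract describes.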
NASA Astrophysics Data System (ADS)
Munafo, I.; Malagnini, L.; Tinti, E.; Chiaraluce, L.; Di Stefano, R.; Valoroso, L.
2014-12-01
The Alto Tiberina Fault (ATF) is a 60 km long east-dipping low-angle normal fault, located in a sector of the Northern Apennines (Italy) undergoing active extension since the Quaternary. The ATF has been imaged by analyzing the active source seismic reflection profiles, and the instrumentally recorded persistent background seismicity. The present study is an attempt to separate the contributions of source, site, and crustal attenuation, in order to focus on the mechanics of the seismic sources on the ATF, as well as on the synthetic and antithetic structures within the ATF hanging-wall (i.e. Colfiorito fault, Gubbio fault and Umbria Valley fault). In order to compute source spectra, we perform a set of regressions over the seismograms of 2000 small earthquakes (-0.8 < ML < 4) recorded between 2010 and 2014 at 50 permanent seismic stations deployed in the framework of the Alto Tiberina Near Fault Observatory project (TABOO) and equipped with three-component seismometers, three of which are located in shallow boreholes. Because we deal with some very small earthquakes, we maximize the signal-to-noise ratio (SNR) with a technique based on the analysis of peak values of bandpass-filtered time histories, in addition to the same processing performed on Fourier amplitudes. We rely on Random Vibration Theory (RVT) to switch from peak values in the time domain to Fourier spectral amplitudes. The low-frequency spectral plateaus of the source terms are used to compute moment magnitudes (Mw) of all the events, whereas a source spectral ratio technique is used to estimate the corner frequencies (Brune spectral model) of a subset of events chosen based on an analysis of the noise affecting the spectral ratios. The described approach provides accurate spectral parameters for earthquakes in zones of localized seismicity, and may be used to gain insights into the underlying mechanics of faulting and the earthquake processes.
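The two spectral quantities used above can be written down directly; this is a short illustration using the standard Brune (1970) spectral shape and the IASPEI moment-magnitude formula, not the authors' own code:

```python
import numpy as np

def brune_spectrum(f, omega0, fc):
    """Far-field displacement amplitude spectrum of the Brune source model:
    a flat low-frequency plateau omega0 with f^-2 decay above the corner
    frequency fc."""
    return omega0 / (1.0 + (f / fc) ** 2)

def moment_magnitude(m0_nm):
    """Mw from seismic moment in N·m: Mw = (2/3)(log10 M0 - 9.1)."""
    return (2.0 / 3.0) * (np.log10(m0_nm) - 9.1)
```

The low-frequency plateau omega0 is proportional to seismic moment M0 (via distance, radiation pattern, and medium terms), which is why the plateau of a source term yields Mw while the roll-off yields the corner frequency.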
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.
2017-12-01
Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. 
These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications. In this poster, we summarize the key components of the UCVM framework and describe the impact it has had in various computational geoscientific applications.
Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira
2012-01-01
We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~4.9 repeating earthquakes. The M~4.9 earthquake sequence is composed of nine events that have occurred since 1957, which have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~4.9 main shocks and 50 accompanying microearthquakes (M1.5-3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~4.9 main shock is co-located with those of the 1995 and 2001 M~4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edges of the slip areas, and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edge. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~4.9 main shock. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks based on group identification by waveform similarities. Stress drops of microearthquakes are 3-11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. 
These results show that the stress drop is probably determined by the fault properties and does not change much for earthquakes rupturing in the same area. The occurrence of microearthquakes in the interseismic period suggests the intrusion of aseismic slip that loads these patches. We also found that some earthquakes near the centre of the mainshock source area occurred just after the earthquakes at the deeper edge of the mainshock source area. These seismic activities probably indicate episodic aseismic slip migrating from the deeper regions in the mainshock asperity to its centre during interseismic periods. Comparison of the source parameters for the 2001 and 2008 main shocks shows that the seismic moments (1.04 × 10^16 N·m and 1.12 × 10^16 N·m for the 2008 and 2001 earthquakes, respectively) and source sizes (radius = 570 m and 540 m for the 2008 and 2001 earthquakes, respectively) are comparable. Based on careful phase identification and hypocentre relocation by constraining the hypocentres of other small earthquakes to their precisely located centroids, we found that the hypocentres of the 2001 and 2008 M~4.9 events are located in the southeastern part of the mainshock source area. This location does not correspond to either the episodic slip area or the hypocentres of small earthquakes that occurred during the earthquake cycle.
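Stress drops like those quoted above are conventionally derived from seismic moment and source radius via the Eshelby circular-crack formula. As a rough sketch (not the authors' exact procedure, which estimates radius from corner frequencies), plugging in the 2008 event's moment and radius gives a value of the same order as the reported 27 MPa:

```python
def eshelby_stress_drop(m0_nm, radius_m):
    """Static stress drop (Pa) for a circular crack:
    delta_sigma = 7 * M0 / (16 * r^3)."""
    return 7.0 * m0_nm / (16.0 * radius_m ** 3)

# 2008 Kamaishi main shock: M0 = 1.04e16 N·m, r = 570 m
# -> roughly 2.5e7 Pa (~25 MPa), same order as the reported 27 MPa;
# the small difference reflects the corner-frequency model used.
ds_2008 = eshelby_stress_drop(1.04e16, 570.0)
```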
Probabilistic Risk Analysis of Run-up and Inundation in Hawaii due to Distant Tsunamis
NASA Astrophysics Data System (ADS)
Gica, E.; Teng, M. H.; Liu, P. L.
2004-12-01
Risk assessment of natural hazards usually includes two aspects, namely, the probability of the natural hazard occurrence and the degree of damage caused by the natural hazard. Our current study is focused on the first aspect, i.e., the development and evaluation of a methodology that can predict the probability of coastal inundation due to distant tsunamis in the Pacific Basin. The calculation of the probability of tsunami inundation could be a simple statistical problem if a sufficiently long record of field data on inundation were available. Unfortunately, such field data are very limited in the Pacific Basin because field measurement of inundation requires the physical presence of surveyors on site. In some areas, no field measurements were ever conducted in the past. Fortunately, there are more complete and reliable historical data on earthquakes in the Pacific Basin, partly because earthquakes can be measured remotely. There are also numerical simulation models, such as the Cornell COMCOT model, that can predict tsunami generation by an earthquake, propagation in the open ocean, and inundation onto a coastal land. Our objective is to develop a methodology that can link the probability of earthquakes in the Pacific Basin with the inundation probability in a coastal area. The probabilistic methodology applied here involves the following steps: first, the Pacific Rim is divided into blocks of potential earthquake sources based on the past earthquake record and fault information. Then the COMCOT model is used to predict the inundation at a distant coastal area due to a tsunami generated by an earthquake of a particular magnitude in each source block. This simulation generates a response relationship between the coastal inundation and an earthquake of a particular magnitude and location. 
Since the earthquake statistics are known for each block, by summing the probability of all earthquakes in the Pacific Rim, the probability of the inundation in a coastal area can be determined through the response relationship. Although the idea of the statistical methodology applied here is not new, this study is the first to apply it to the probability of inundation caused by earthquake-generated distant tsunamis in the Pacific Basin. As a case study, the methodology is applied to predict the tsunami inundation risk in Hilo Bay in Hawaii. Since relatively more field data on tsunami inundation are available for Hilo Bay, this case study can help to evaluate the applicability of the methodology for predicting tsunami inundation risk in the Pacific Basin. Detailed results will be presented at the AGU meeting.
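The summation step described above can be sketched as follows, under the usual assumption that scenario earthquakes occur as independent Poisson processes (all names are hypothetical; the response values stand in for COMCOT-style simulation results):

```python
import math

def annual_inundation_prob(rates, response, threshold):
    """Annual probability that inundation at a site exceeds `threshold` (m).

    rates[(block, mag)]    : annual rate of earthquakes of magnitude mag
                             in source block (e.g., from earthquake statistics)
    response[(block, mag)] : modeled inundation height at the site for that
                             scenario (the response relationship)

    Scenarios are treated as independent Poisson processes, so the rates of
    all scenarios whose modeled inundation exceeds the threshold simply add.
    """
    total_rate = sum(r for key, r in rates.items()
                     if response.get(key, 0.0) >= threshold)
    return 1.0 - math.exp(-total_rate)
```

For example, with two source blocks and hypothetical rates and response heights, the exceedance probability is just the Poisson probability built from the summed rate of exceeding scenarios.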
New Insights on Tsunami Genesis and Energy Source
NASA Astrophysics Data System (ADS)
Song, Y. T.; Mohtat, A.; Yim, S. C.
2017-12-01
Conventional tsunami theories suggest that earthquakes with significant vertical motions are more likely to generate tsunamis. In tsunami models, the vertical seafloor elevation is directly transferred to the sea-surface as the only initial condition. However, evidence from the 2011 Tohoku earthquake indicates otherwise; the vertical seafloor uplift was only 3-5 meters, too small to account for the resultant tsunami. Surprisingly, the horizontal displacement was undeniably larger than anyone's expectation; about 60 meters at the frontal wedge of the fault plate, the largest slip ever recorded by in-situ instruments. The question is whether the horizontal motion of seafloor slopes had enhanced the tsunami to become as destructive as observed. In this study, we provide proof: (1) Combining various measurements from the 2011 Tohoku event, we show that the earthquake transferred a total energy of 3.1 × 10^15 joules to the ocean, in which the potential energy (PE) due to the vertical seafloor elevation (including seafloor uplift/subsidence plus the contribution from the horizontal displacement) was less than half, while the kinetic energy (KE) due to the horizontal displacement velocity of the continental slope contributed the majority; (2) Using two modern state-of-the-art wave flumes and a three-dimensional tsunami model, we have reproduced the source energy and tsunamis consistent with observations, including the 2004 Sumatra event. Based on the unified source energy formulation, we offer a competing theory to explain why some earthquakes generate destructive tsunamis, while others do not.
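The PE/KE partition discussed above can be illustrated with the standard linear-wave energy integrals; this is a simplified sketch (not the authors' unified formulation), assuming a gridded source region with a sea-surface anomaly eta, water depth h, and a depth-averaged horizontal velocity v imparted by the moving slope:

```python
import numpy as np

RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def source_energy(eta, h, v, cell_area):
    """Potential and kinetic energy (J) transferred to the ocean at the source.

    eta : sea-surface elevation anomaly per cell (m)
    h   : water depth per cell (m)
    v   : depth-averaged horizontal water velocity per cell (m/s)
    """
    eta, h, v = map(np.asarray, (eta, h, v))
    pe = 0.5 * RHO * G * np.sum(eta ** 2) * cell_area   # surface deformation
    ke = 0.5 * RHO * np.sum(h * v ** 2) * cell_area     # water-column motion
    return pe, ke
```

Even a modest depth-averaged velocity over a deep water column carries substantial kinetic energy, which is the intuition behind the horizontal-motion contribution argued for above.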
Intraplate earthquakes and the state of stress in oceanic lithosphere
NASA Technical Reports Server (NTRS)
Bergman, Eric A.
1986-01-01
The dominant sources of stress relieved in oceanic intraplate earthquakes are investigated to examine the usefulness of earthquakes as indicators of stress orientation. The primary data for this investigation are the detailed source studies of 58 of the largest of these events, performed with the body-waveform inversion technique of Nabelek (1984). The relationship between the earthquakes and the intraplate stress fields was investigated by studying the rate of seismic moment release as a function of age, the source mechanisms and tectonic associations of larger events, and the depth-dependence of various source parameters. The results indicate that the earthquake focal mechanisms are empirically reliable indicators of stress, probably reflecting the fact that an earthquake will occur most readily on a fault plane oriented in such a way that the resolved shear stress is maximized while the normal stress across the fault is minimized.
NASA Astrophysics Data System (ADS)
Minson, S. E.; Brooks, B. A.; Murray, J. R.; Iannucci, R. A.
2013-12-01
We explore the efficacy of low-cost community instruments (LCCIs) and crowd-sourcing to produce rapid estimates of earthquake magnitude and rupture characteristics which can be used for earthquake loss reduction such as issuing tsunami warnings and guiding rapid response efforts. Real-time high-rate GPS data are just beginning to be incorporated into earthquake early warning (EEW) systems. These data are showing promising utility, including producing moment magnitude estimates which do not saturate for the largest earthquakes and determining the geometry and slip distribution of the earthquake rupture in real-time. However, building a network of scientific-quality real-time high-rate GPS stations requires substantial infrastructure investment which is not practicable in many parts of the world. To expand the benefits of real-time geodetic monitoring globally, we consider the potential of pseudorange-based GPS locations such as the real-time positioning done onboard cell phones or on LCCIs that could be distributed in the same way accelerometers are distributed as part of the Quake Catcher Network (QCN). While location information from LCCIs often has large uncertainties, their low cost means that large numbers of instruments can be deployed. A monitoring network that includes smartphones could collect data from potentially millions of instruments. These observations could be averaged together to substantially decrease errors associated with estimated earthquake source parameters. While these data will be inferior to data recorded by scientific-grade seismometers and GPS instruments, there are features of community-based data collection (and possibly analysis) that are very attractive. This approach creates a system in which every user can host an instrument or download an application to their smartphone that both provides them with earthquake and tsunami warnings and contributes the data on which the warning system operates. 
This symbiosis helps to encourage people to both become users of the warning system and to contribute data to the system. Further, there is some potential to take advantage of the LCCI hosts' computing and communications resources to do some of the analysis required for the warning system. We will present examples of the type of data which might be observed by pseudorange-based positioning for both actual earthquakes and laboratory tests as well as performance tests of potential earthquake source modeling derived from pseudorange data. A highlight of these performance tests is a case study of the 2011 Mw 9 Tohoku-oki earthquake.
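The averaging argument above can be illustrated with a small Monte Carlo sketch (the ~3 m noise level and other numbers are hypothetical, chosen only to show the 1/sqrt(N) behavior of the standard error of the mean):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def displacement_error(n_devices, noise_m=3.0, true_disp=1.0, trials=500):
    """Standard error of a coseismic displacement estimated by averaging
    n_devices noisy pseudorange-based positions (noise_m = 1-sigma noise
    of a single device, in meters).  Returns the empirical spread of the
    averaged estimate over many simulated events."""
    est = rng.normal(true_disp, noise_m, size=(trials, n_devices)).mean(axis=1)
    return est.std()
```

A single 3-m-noise device cannot resolve a 1 m displacement, but averaging a hundred such devices drops the error to roughly 0.3 m, which is the core of the crowd-sourcing argument.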
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Setiyono, U.; Satake, K.; Fujii, Y.
2017-12-01
We built a pre-computed tsunami inundation database for Pelabuhan Ratu, one of the tsunami-prone areas on the southern coast of Java, Indonesia. The tsunami database can be employed for a rapid estimation of tsunami inundation during an event. The pre-computed tsunami waveforms and inundations are from a total of 340 scenarios ranging from 7.5 to 9.2 in moment magnitude (Mw), including simple fault models of 208 thrust faults and 44 tsunami earthquakes on the plate interface, as well as 44 normal faults and 44 reverse faults in the outer-rise region. Using our tsunami inundation forecasting algorithm (NearTIF), we could rapidly estimate the tsunami inundation in Pelabuhan Ratu for three different hypothetical earthquakes. The first hypothetical earthquake is a megathrust earthquake type (Mw 9.0) offshore Sumatra, about 600 km from Pelabuhan Ratu, which represents a worst-case event in the far-field. The second hypothetical earthquake (Mw 8.5) is based on a slip deficit rate estimated from geodetic measurements and represents a most likely large event near Pelabuhan Ratu. The third hypothetical earthquake is a tsunami earthquake type (Mw 8.1), a kind of event that often occurs south of Java. We compared the tsunami inundation maps produced by the NearTIF algorithm with results of direct forward inundation modeling for the hypothetical earthquakes. The tsunami inundation maps produced by both methods are similar for the three cases. However, the tsunami inundation map from the inundation database can be obtained in much shorter time (1 min) than the one from forward inundation modeling (40 min). These results indicate that the NearTIF algorithm based on a pre-computed inundation database is reliable and useful for tsunami warning purposes. This study also demonstrates that the NearTIF algorithm can work well even though the earthquake source is located outside the area of the fault model database because it uses a time shifting procedure for the best-fit scenario searching.
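A toy version of the time-shifted best-fit scenario search mentioned above might look like the following; the actual NearTIF algorithm is more elaborate, and all names here are hypothetical:

```python
import numpy as np

def best_scenario(observed, database, max_shift):
    """Pick the pre-computed scenario (and time shift) whose synthetic
    waveform best matches an observed tsunami waveform.

    database : dict scenario_id -> synthetic waveform (numpy array, same
               sampling as the observation)
    Returns (scenario_id, shift_samples, rms_misfit).
    """
    best = (None, 0, np.inf)
    for sid, syn in database.items():
        for s in range(-max_shift, max_shift + 1):
            shifted = np.roll(syn, s)           # trial time shift
            rms = np.sqrt(np.mean((observed - shifted) ** 2))
            if rms < best[2]:
                best = (sid, s, rms)
    return best
```

The time shift is what lets a database scenario located away from the true source still match the arrival time at the site; the winning scenario's pre-computed inundation map is then served immediately instead of running a forward model.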
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2008-07-08
The objectives of this study are to improve low-magnitude (concentrating on M2.5-5) regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge at small magnitudes (i.e., mb < ~4.0) is poorly resolved, and source scaling remains a subject of on-going debate in the earthquake seismology community. Recently there have been a number of empirical studies suggesting that the scaling of micro-earthquakes is non-self-similar, yet there are an equal number of compelling studies that would suggest otherwise. It is not clear whether different studies obtain different results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies of inter-plate regions half-way around the world. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods that both make use of empirical Green's function (EGF) earthquakes to remove path effects. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. 
But finding well-recorded earthquakes with 'perfect' EGF events for direct wave analysis is difficult, which limits the number of earthquakes that can be studied. We begin with closely-located, well-correlated earthquakes. We use a multi-taper method to obtain time-domain source-time-functions by frequency division. We only accept an earthquake and EGF pair if they are able to produce a clear, time-domain source pulse. We fit the spectral ratios and perform a grid-search about the preferred parameters to ensure the fits are well constrained. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We analyze three clusters of aftershocks from the well-recorded sequence following the M5 Au Sable Forks, NY, earthquake to obtain some of the first accurate source parameters for small earthquakes in eastern North America. Each cluster contains an M~2 event, and two contain M~3 events, as well as smaller aftershocks. We find that the corner frequencies and stress drops are high (averaging 100 MPa), confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We also demonstrate that a scaling breakdown suggested by earlier work is simply an artifact of their more band-limited data. We calculate radiated energy, and find that the ratio of energy to seismic moment is also high, around 10^-4. We estimate source parameters for the M5 mainshock using similar methods, but our results are more doubtful because we do not have an EGF event that meets our preferred criteria. The stress drop and energy/moment ratio for the mainshock are slightly higher than for the aftershocks. Our improved and simplified coda wave analysis method uses spectral ratios (as for the direct waves) but relies on the averaging nature of the coda waves to use EGF events that do not meet the strict criteria of similarity required for the direct wave analysis. 
We have applied the coda wave spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks, and also to several sequences in Italy with M~6 mainshocks. The Italian earthquakes have higher stress drops than the Hector Mine sequence, but lower than Au Sable Forks. These results show a departure from self-similarity, consistent with previous studies using similar regional datasets. The larger earthquakes have higher stress drops and energy/moment ratios. We perform a preliminary comparison of the two methods using the M5 Au Sable Forks earthquake. Both methods give very consistent results, and we are applying the comparison to further events.
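The spectral-ratio grid search described above can be sketched as follows, assuming both target and EGF events follow Brune-type spectra (a simplified illustration with hypothetical names, not the authors' code):

```python
import numpy as np

def brune_ratio(f, moment_ratio, fc1, fc2):
    """Amplitude spectral ratio of two Brune sources (target / EGF):
    the path and site terms cancel, leaving only source terms."""
    return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

def fit_corner_frequency(f, ratio, moment_ratio, fc2, fc1_grid):
    """Grid search for the target event's corner frequency fc1 that best
    fits an observed spectral ratio (L2 misfit in log amplitude), with the
    moment ratio and EGF corner frequency fc2 held fixed."""
    misfits = [np.sum((np.log(ratio) -
                       np.log(brune_ratio(f, moment_ratio, fc, fc2))) ** 2)
               for fc in fc1_grid]
    return fc1_grid[int(np.argmin(misfits))]
```

In practice the grid search runs over all three parameters jointly and the misfit surface around the minimum is inspected to confirm the fit is well constrained, as the abstract describes for the direct-wave analysis.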
Earthquake scaling laws for rupture geometry and slip heterogeneity
NASA Astrophysics Data System (ADS)
Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro
2016-04-01
We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. 
Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctly non-Gaussian slip distributions. To further characterize the spatial correlations of slip heterogeneity, we analyze the power spectral decay of slip applying the 2-D von Karman auto-correlation function (parameterized by the Hurst exponent, H, and correlation lengths along strike and down dip). The Hurst exponent is scale invariant, H = 0.83 (± 0.12), while the correlation lengths scale with source dimensions (seismic moment), thus implying characteristic physical scales of earthquake ruptures. Our self-consistent scaling relationships allow constraining the generation of slip-heterogeneity scenarios for physics-based ground-motion and tsunami simulations.
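The truncated exponential slip law described above can be sampled directly by inverse-CDF sampling; this is a minimal sketch (names hypothetical) of how one might draw slip values with a prescribed mean and a hard truncation at the maximum slip:

```python
import numpy as np

rng = np.random.default_rng(1)  # seeded for reproducibility

def sample_truncated_exponential(mean_slip, max_slip, size):
    """Draw slip values from an exponential law truncated at max_slip,
    via inverse-CDF sampling.  The rate lam is set from mean_slip; for
    mean_slip << max_slip the truncation correction to the mean is small.

    CDF of the truncated exponential:
        F(x) = (1 - exp(-lam*x)) / (1 - exp(-lam*max_slip)),  0 <= x <= max_slip
    """
    lam = 1.0 / mean_slip
    u = rng.uniform(size=size)
    return -np.log(1 - u * (1 - np.exp(-lam * max_slip))) / lam
```

Such samples, rearranged onto a fault grid with a von Karman spatial correlation structure, give the kind of stochastic slip scenarios the abstract proposes for ground-motion and tsunami simulations.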
NASA Astrophysics Data System (ADS)
Elliott, A. J.; Walker, R. T.; Parsons, B.; Ren, Z.; Ainscoe, E. A.; Abdrakhmatov, K.; Mackenzie, D.; Arrowsmith, R.; Gruetzner, C.
2016-12-01
In regions of the planet with long historical records, known past seismic events can be attributed to specific fault sources through the identification and measurement of single-event scarps in high-resolution imagery and topography. The level of detail captured by modern remote sensing is now sufficient to map and measure complete earthquake ruptures that were originally only sparsely mapped or overlooked entirely. We can thus extend the record of mapped earthquake surface ruptures into the preinstrumental period and capture the wealth of information preserved in the numerous historical earthquake ruptures throughout regions like Central Asia. We investigate two major late 19th and early 20th century earthquakes that are well located macroseismically but whose fault sources had proved enigmatic in the absence of detailed imagery and topography. We use high-resolution topographic models derived from photogrammetry of satellite, low-altitude, and ground-based optical imagery to map and measure the coseismic scarps of the 1889 M8.3 Chilik, Kazakhstan and 1932 M7.6 Changma, China earthquakes. Measurement of the scarps on the combined imagery and topography reveals the extent and slip distribution of coseismic rupture in each of these events, showing both earthquakes involved multiple faults with variable kinematics. We use a 1-m elevation model of the Changma fault derived from Pleiades satellite imagery to map the changing kinematics of the 1932 rupture along strike. For the 1889 Chilik earthquake we use 1.5-m SPOT-6 satellite imagery to produce a regional elevation model of the fault ruptures, from which we identify three distinct, intersecting fault systems that each have >20 km of fresh, single-event scarps. Along sections of each of these faults we construct high resolution (330 points per sq m) elevation models using quadcopter- and helikite-mounted cameras. 
From the detailed topography we measure single-event oblique offsets of 6-10 m, consistent with the large inferred magnitude of the 1889 Chilik event. High resolution, photogrammetric topography offers a low-cost, effective way to thoroughly map rupture traces and measure coseismic displacements for past fault ruptures, extending our record of coseismic displacements into a past rich with formerly sparsely documented ruptures.
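Offset measurements like the 6-10 m values above are commonly made by projecting far-field surface fits across the scarp; the following is a simplified sketch of the vertical component of such a measurement on a single topographic profile (function and parameter names are hypothetical):

```python
import numpy as np

def vertical_offset(x, z, scarp_lo, scarp_hi):
    """Vertical fault offset from a topographic profile across a scarp.

    x, z                : distance along profile and elevation (m)
    scarp_lo, scarp_hi  : along-profile bounds of the scarp zone, which is
                          excluded from the surface fits

    Fits straight lines to the far-field surfaces on either side of the
    scarp zone and returns their vertical separation at the scarp midpoint.
    """
    x, z = np.asarray(x, float), np.asarray(z, float)
    lower, upper = x < scarp_lo, x > scarp_hi
    p_lo = np.polyfit(x[lower], z[lower], 1)   # line through lower surface
    p_hi = np.polyfit(x[upper], z[upper], 1)   # line through upper surface
    mid = 0.5 * (scarp_lo + scarp_hi)
    return abs(np.polyval(p_hi, mid) - np.polyval(p_lo, mid))
```

For oblique offsets the same idea is applied to laterally displaced piercing lines (channels, ridges) in the elevation model, combining horizontal and vertical separations.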
NASA Astrophysics Data System (ADS)
van Wagoner, T. M.; Crosson, R. S.; Creager, K. C.; Medema, G.; Preston, L.; Symons, N. P.; Brocher, T. M.
2002-12-01
The availability of regional earthquake data from the Pacific Northwest Seismograph Network (PNSN), together with active source data from the Seismic Hazards Investigation in Puget Sound (SHIPS) seismic experiments, has allowed us to construct a new high-resolution 3-D, P wave velocity model of the crust to a depth of about 30 km in the central Puget Lowland. In our method, earthquake hypocenters and velocity model are jointly coupled in a fully nonlinear tomographic inversion. Active source data constrain the upper 10-15 km of the model, and earthquakes constrain the deepest portion of the model. A number of sedimentary basins are imaged, including the previously unrecognized Muckleshoot basin, and the previously incompletely defined Possession and Sequim basins. Various features of the shallow crust are imaged in detail and their structural transitions to the mid and lower crust are revealed. These include the Tacoma basin and fault zone, the Seattle basin and fault zone, the Seattle and Port Ludlow velocity highs, the Port Townsend basin, the Kingston Arch, and the Crescent basement, which is arched beneath the Lowland from its surface exposure in the eastern Olympics. Strong lateral velocity gradients, consistent with the existence of previously inferred faults, are observed, bounding the southern Port Townsend basin, the western edge of the Seattle basin beneath Dabob Bay, and portions of the Port Ludlow velocity high and the Tacoma basin. Significant velocity gradients are not observed across the southern Whidbey Island fault, the Lofall fault, or along most of the inferred location of the Hood Canal fault. Using improved earthquake locations resulting from our inversion, we determined focal mechanisms for a number of the best recorded earthquakes in the data set, revealing a complex pattern of deformation dominated by general arc-parallel regional tectonic compression. 
Most earthquakes occur in the basement rocks inferred to be the lower Tertiary Crescent formation. The sedimentary basins and the eastern part of the Olympic subduction complex are largely devoid of earthquakes. Clear association of hypocenters and focal mechanisms with previously mapped or proposed faults is difficult; however, seismicity, structure, and focal mechanisms associated with the Seattle fault zone suggest a possible high-angle mode of deformation with the north side up. We suggest that this deformation may be driven by isostatic readjustment of the Seattle basin.
Van Wagoner, T. M.; Crosson, R.S.; Creager, K.C.; Medema, G.; Preston, L.; Symons, N.P.; Brocher, T.M.
2002-01-01
The availability of regional earthquake data from the Pacific Northwest Seismograph Network (PNSN), together with active source data from the Seismic Hazards Investigation in Puget Sound (SHIPS) seismic experiments, has allowed us to construct a new high-resolution 3-D, P wave velocity model of the crust to a depth of about 30 km in the central Puget Lowland. In our method, earthquake hypocenters and velocity model are jointly coupled in a fully nonlinear tomographic inversion. Active source data constrain the upper 10-15 km of the model, and earthquakes constrain the deepest portion of the model. A number of sedimentary basins are imaged, including the previously unrecognized Muckleshoot basin, and the previously incompletely defined Possession and Sequim basins. Various features of the shallow crust are imaged in detail and their structural transitions to the mid and lower crust are revealed. These include the Tacoma basin and fault zone, the Seattle basin and fault zone, the Seattle and Port Ludlow velocity highs, the Port Townsend basin, the Kingston Arch, and the Crescent basement, which is arched beneath the Lowland from its surface exposure in the eastern Olympics. Strong lateral velocity gradients, consistent with the existence of previously inferred faults, are observed, bounding the southern Port Townsend basin, the western edge of the Seattle basin beneath Dabob Bay, and portions of the Port Ludlow velocity high and the Tacoma basin. Significant velocity gradients are not observed across the southern Whidbey Island fault, the Lofall fault, or along most of the inferred location of the Hood Canal fault. Using improved earthquake locations resulting from our inversion, we determined focal mechanisms for a number of the best recorded earthquakes in the data set, revealing a complex pattern of deformation dominated by general arc-parallel regional tectonic compression. 
Most earthquakes occur in the basement rocks inferred to be the lower Tertiary Crescent formation. The sedimentary basins and the eastern part of the Olympic subduction complex are largely devoid of earthquakes. Clear association of hypocenters and focal mechanisms with previously mapped or proposed faults is difficult; however, seismicity, structure, and focal mechanisms associated with the Seattle fault zone suggest a possible high-angle mode of deformation with the north side up. We suggest that this deformation may be driven by isostatic readjustment of the Seattle basin.
Earthquake source tensor inversion with the gCAP method and 3D Green's functions
NASA Astrophysics Data System (ADS)
Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.
2013-12-01
We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double-couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCAP) scheme in which the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a 1 km³ grid using the 3-D community velocity model CVM-4 (Kohler et al., 2003). A bootstrap technique is adopted to establish the robustness of the gCAP inversion results (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate the source properties of the March 11, 2013, Mw 4.7 earthquake on the San Jacinto fault using recordings of ~45 stations at frequencies up to ~0.2 Hz. Both the best-fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher-frequency data for this and other earthquakes is in progress.
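The DC/CLVD/ISO percentages quoted above come from an eigenvalue decomposition of the source tensor. As a rough illustration of how such a decomposition works, here is a minimal numpy sketch using one common eigenvalue-based convention; percentage definitions vary between studies, so this is not the exact gCAP formulation:

```python
import numpy as np

def decompose_moment_tensor(M):
    """Split a symmetric 3x3 moment tensor into ISO, CLVD and DC fractions.

    ISO comes from the trace; on the deviatoric eigenvalues sorted by
    absolute value, eps = -lam_min/|lam_max| (0 for pure DC, +/-0.5 for
    pure CLVD).  One common convention among several in the literature.
    """
    M = np.asarray(M, dtype=float)
    m_iso = np.trace(M) / 3.0                     # isotropic part
    dev = M - m_iso * np.eye(3)                   # deviatoric part
    lam = np.linalg.eigvalsh(dev)                 # ascending order
    lam = lam[np.argsort(np.abs(lam))[::-1]]      # |lam[0]| >= |lam[1]| >= |lam[2]|
    if np.abs(lam[0]) < 1e-15:                    # purely isotropic source
        return 1.0, 0.0, 0.0
    eps = -lam[2] / abs(lam[0])
    p_iso = m_iso / (abs(m_iso) + abs(lam[0]))
    p_clvd = 2.0 * eps * (1.0 - abs(p_iso))
    p_dc = 1.0 - abs(p_iso) - abs(p_clvd)
    return p_iso, p_clvd, p_dc

# A pure double-couple (strike-slip-like) tensor decomposes to ~100% DC:
p_iso, p_clvd, p_dc = decompose_moment_tensor(np.diag([1.0, 0.0, -1.0]))
```

For a tensor such as diag(1, 0, -1) the ISO and CLVD fractions vanish and the DC fraction is 1, which is the baseline against which small ISO/CLVD contributions like those reported above are measured.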
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal radiated by an earthquake source carries information about the size of the rupture and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle, assuming a Gaussian-like distribution of the parameters, and is computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to investigate the robustness of the method and the propagation of uncertainty from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes that occurred during the 2016-2017 Central Italy sequence, with the goal of investigating how the source parameters scale with magnitude.
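As a toy illustration of the "global exploration, then local characterization" idea described above, the sketch below fits the low-frequency level and corner frequency of a Brune-type spectrum to noisy synthetic data with a coarse grid search (a simple stand-in for the Markov-chain/basin-hopping exploration; all numerical values are hypothetical, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def brune_spectrum(f, omega0, fc):
    """Brune (1970) displacement amplitude spectrum (attenuation omitted)."""
    return omega0 / (1.0 + (f / fc) ** 2)

# Synthetic "observed" spectrum with hypothetical parameters and 5% log-noise
f = np.logspace(-1, 1.5, 200)                 # 0.1 to ~31.6 Hz
true_omega0, true_fc = 1e-6, 2.0
obs = brune_spectrum(f, true_omega0, true_fc) * np.exp(0.05 * rng.standard_normal(f.size))

# Global exploration: coarse grid search on the L2 misfit of the log-spectra
omega0_grid = np.logspace(-8, -4, 81)
fc_grid = np.logspace(-0.5, 1.0, 61)
best, best_cost = (None, None), np.inf
for w in omega0_grid:
    for c in fc_grid:
        cost = np.sum((np.log(obs) - np.log(brune_spectrum(f, w, c))) ** 2)
        if cost < best_cost:
            best, best_cost = (w, c), cost

est_omega0, est_fc = best
```

In the full method, the neighbourhood of this minimum would then be sampled to build the a posteriori pdf and the parameter correlation matrix; here the grid search simply recovers parameters close to the true values.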
NASA Astrophysics Data System (ADS)
Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.
2018-06-01
Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessing the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The shear wave quality factor (Qβ(f)) values estimated for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values were estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S wave quality factor relation, Qβ(f) = (152.9 ± 7)f^(0.82±0.005), is obtained by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress, and radiated energy) are obtained assuming an ω-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter kappa. The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop, and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focused earthquakes with low stress drop. Estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by partial stress drop and a low effective stress model. The presence of subsurface fluids at seismogenic depth influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking every possible precaution regarding the effects of finite bandwidth, attenuation, and site corrections.
The scaling could be improved further by integrating a large dataset of microearthquakes and using a stable and robust approach.
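The ω-2 (Brune) relations linking seismic moment, corner frequency, source radius, and static stress drop can be sketched as follows. The constant 0.37 and the shear-wave speed of 3500 m/s are standard textbook values, not parameters from this study, and the example event is hypothetical:

```python
import math

def brune_source_params(m0, fc, beta=3500.0):
    """Source radius and static stress drop from the Brune (1970) model.

    r = 0.37 * beta / fc (circular crack), stress drop = 7*M0/(16*r^3).
    beta is the shear-wave speed near the source in m/s; 3500 m/s is a
    typical crustal value, not specific to the study region.
    """
    r = 0.37 * beta / fc                          # source radius, m
    stress_drop = 7.0 * m0 / (16.0 * r ** 3)      # static stress drop, Pa
    mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)     # standard moment magnitude
    return r, stress_drop, mw

# Hypothetical Mw ~4 event: M0 = 1.26e15 N*m, corner frequency 2 Hz
r, dsigma, mw = brune_source_params(1.26e15, 2.0)
```

For these illustrative numbers the stress drop comes out around 2 MPa, i.e. on the low side, of the same general character as the low stress drops reported for the region.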
Rapid Large Earthquake and Run-up Characterization in Quasi Real Time
NASA Astrophysics Data System (ADS)
Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.
2017-12-01
Several quasi-real-time tests have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM, and the body-wave FFM have been implemented in real time at the CSN; all these algorithms run automatically, triggered by the W-phase point source inversion. The fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake, and the 2016 Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses, and for each one we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.
OpenQuake, a platform for collaborative seismic hazard and risk assessment
NASA Astrophysics Data System (ADS)
Henshaw, Paul; Burton, Christopher; Butler, Lars; Crowley, Helen; Danciu, Laurentiu; Nastasi, Matteo; Monelli, Damiano; Pagani, Marco; Panzeri, Luigi; Simionato, Michele; Silva, Vitor; Vallarelli, Giuseppe; Weatherill, Graeme; Wyss, Ben
2013-04-01
Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through global projects, open-source IT development and collaborations with more than 10 regions, leading experts are collaboratively developing unique global datasets, best practice, tools and models for global seismic hazard and risk assessment, within the context of the Global Earthquake Model (GEM). Guided by the needs and experiences of governments, companies and international organisations, all contributions are being integrated into OpenQuake: a web-based platform that - together with other resources - will become accessible in 2014. With OpenQuake, stakeholders worldwide will be able to calculate, visualize and investigate earthquake hazard and risk, capture new data and share findings for joint learning. The platform is envisaged as a collaborative hub for earthquake risk assessment, used at global and local scales, around which an active network of users has formed. OpenQuake will comprise both online and offline tools, many of which can also be used independently. One of the first steps in OpenQuake development was the creation of open-source software for advanced seismic hazard and risk calculations at any scale, the OpenQuake Engine. Although in continuous development, a command-line version of the software is already being test-driven and used by hundreds worldwide; from non-profits in Central Asia, seismologists in sub-Saharan Africa and companies in South Asia to the European seismic hazard harmonization programme (SHARE). In addition, several technical trainings were organized with scientists from different regions of the world (sub-Saharan Africa, Central Asia, Asia-Pacific) to introduce the engine and other OpenQuake tools to the community, something that will continue to happen over the coming years. 
Other tools being developed that are of direct interest to the hazard community include: • OpenQuake Modeller: fundamental instruments for the creation of seismogenic input models for seismic hazard assessment, a critical input to the OpenQuake Engine. OpenQuake Modeller will consist of a suite of tools (the Hazard Modeller's Toolkit) for characterizing the seismogenic sources of earthquakes and their models of earthquake recurrence. An earthquake catalogue homogenization tool, for integration, statistical comparison, and user-defined harmonization of multiple earthquake catalogues, is also included in the OpenQuake modelling tools. • A data capture tool for active faults: a tool that allows geologists to draw (new) fault discoveries on a map in an intuitive GIS environment and add details about the fault through the tool. These data, once quality checked, can then be integrated with the global active faults database, which will increase in value with every new fault insertion. Building on many ongoing efforts and the knowledge of scientists worldwide, GEM will for the first time integrate state-of-the-art data, models, results, and open-source tools into a single platform. The platform will continue to increase in value, in particular for use in local contexts, through contributions from and collaborations with scientists and organisations worldwide. This presentation will showcase the OpenQuake Platform, focusing on the IT solutions that have been adopted as well as the added value that the platform will bring to scientists worldwide.
Rapid determination of the energy magnitude Me
NASA Astrophysics Data System (ADS)
di Giacomo, D.; Parolai, S.; Bormann, P.; Saul, J.; Grosser, H.; Wang, R.; Zschau, J.
2009-04-01
The magnitude of an earthquake is one of the most widely used parameters to evaluate its damage potential. However, the many magnitude scales developed over the years have different meanings. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well-defined physical parameter of the seismic source, the radiated seismic energy ES (e.g., Bormann et al., 2002): Me = 2/3(log10 ES - 4.4). Me is more suitable than the moment magnitude Mw for describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size, and hence of the static (tectonic) effect of an earthquake. The calculation of ES requires integration over frequency of the squared P-wave velocity spectrum corrected for the energy loss experienced by the seismic waves along the path from the source to the receivers. To account for the frequency-dependent energy loss, we computed spectral amplitude decay functions for different frequencies by using synthetic Green's functions (Wang, 1999) based on the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions, the correction of the recorded P-wave velocity spectra for the various propagation effects is performed in a rapid and robust way, and ES, and hence Me, can be computed at a single station. We analyse teleseismic broadband P-wave signals in the distance range 20°-98°. We show that our procedure is suitable for implementation in rapid response systems, since it can provide stable Me determinations within 10-15 minutes after the earthquake's origin time.
Indeed, we use time-variable cumulative energy windows starting 4 s after the first P-wave arrival in order to include the earthquake rupture duration, which is calculated according to Bormann and Saul (2008). We tested our procedure on a large dataset composed of about 750 globally distributed earthquakes in the Mw range 5.5-9.3, recorded at the broadband stations managed by the IRIS, GEOFON, and GEOSCOPE global networks, as well as other regional seismic networks. Me and Mw express two different aspects of the seismic source, and a combined use of these two magnitude scales would allow a better assessment of the tsunami and shaking potential of an earthquake. References: Bormann, P., Baumbach, M., Bock, G., Grosser, H., Choy, G. L., and Boatwright, J. (2002). Seismic sources and source parameters, in IASPEI New Manual of Seismological Observatory Practice, P. Bormann (Editor), Vol. 1, GeoForschungsZentrum, Potsdam, Chapter 3, 1-94. Bormann, P., and Saul, J. (2008). The new IASPEI standard broadband magnitude mB. Seism. Res. Lett., 79(5), 699-705. Choy, G. L., and Kirby, S. (2004). Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones. Geophys. J. Int., 159, 991-1012. Kennett, B. L. N., Engdahl, E. R., and Buland, R. (1995). Constraints on seismic velocities in the Earth from traveltimes. Geophys. J. Int., 122, 108-124. Montagner, J.-P., and Kennett, B. L. N. (1996). How to reconcile body-wave and normal-mode reference Earth models? Geophys. J. Int., 125, 229-248. Wang, R. (1999). A simple orthonormalization method for stable and efficient computation of Green's functions. Bull. Seism. Soc. Am., 89(3), 733-741.
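The Me definition quoted above is straightforward to apply once the radiated energy ES is known; a minimal sketch (ES in joules):

```python
import math

def energy_magnitude(es_joules):
    """Energy magnitude from radiated seismic energy ES (Bormann et al., 2002):
    Me = 2/3 * (log10(ES) - 4.4), with ES in joules."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

# Under this definition, ES = 10**19.4 J corresponds to Me = 10 exactly,
# and a moderate event with ES = 1e13 J gives Me ~ 5.7.
me = energy_magnitude(10 ** 19.4)
```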
NASA Astrophysics Data System (ADS)
De Novellis, V.; Carlino, S.; Castaldo, R.; Tramelli, A.; De Luca, C.; Pino, N. A.; Pepe, S.; Convertito, V.; Zinno, I.; De Martino, P.; Bonano, M.; Giudicepietro, F.; Casu, F.; Macedonio, G.; Manunta, M.; Cardaci, C.; Manzo, M.; Di Bucci, D.; Solaro, G.; Zeni, G.; Lanari, R.; Bianco, F.; Tizzani, P.
2018-03-01
The causative source of the first damaging earthquake instrumentally recorded on the Island of Ischia, which occurred on 21 August 2017, has been studied through a multiparametric geophysical approach. To investigate the source geometry and kinematics, we exploit seismological, Global Positioning System, and Sentinel-1 and COSMO-SkyMed differential interferometric synthetic aperture radar coseismic measurements. Our results indicate that the solutions retrieved from the geodetic data modeling and from the seismological data are plausible; in particular, the best-fit solution consists of an E-W striking, south-dipping normal fault with its center located at a depth of 800 m. Moreover, the retrieved causative fault is consistent with the rheological stratification of the crust in this zone. This study allows us to improve knowledge of the volcano-tectonic processes occurring on the island, which is crucial for a better assessment of seismic risk in the area.
Shallow seismicity in volcanic system: what role does the edifice play?
NASA Astrophysics Data System (ADS)
Bean, Chris; Lokmer, Ivan
2017-04-01
Seismicity in the upper two kilometres of volcanic systems is complex and very diverse in nature. Its origins lie in the multi-physics nature of the source processes and in the often extreme heterogeneity of near-surface structure, which introduces strong seismic wave propagation path effects that often 'hide' the source itself. A further complication is that we are often in the seismic near-field, so waveforms can be intrinsically more complex than in far-field earthquake seismology. The traditional explanation for the diverse nature of shallow seismic signals calls on the direct action of fluids in the system; fits to model data are then used to elucidate properties of the plumbing system. Here we show that solutions based on these conceptual models are not unique, and that models based on a diverse range of quasi-brittle failure of low-stiffness near-surface structures are equally valid from a data-fit perspective. These earthquake-like sources also explain aspects of edifice deformation that are as yet poorly quantified.
NASA Astrophysics Data System (ADS)
Holden, C.; Kaneko, Y.; D'Anastasio, E.; Benites, R.; Fry, B.; Hamling, I. J.
2017-11-01
The 2016 Kaikōura (New Zealand) earthquake generated large ground motions and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution using two kinematic modeling techniques based on analysis of local strong-motion and high-rate GPS data. Our kinematic models capture a complex pattern of slowly propagating rupture (Vr < 2 km/s) from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 s after the origin time. Both models indicate rupture reactivation on the Kekerengu fault, with a time separation of 11 s between the start of the original failure and the start of the subsequent one. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.
Coseismic gravitational potential energy changes induced by global earthquakes during 1976 to 2016
NASA Astrophysics Data System (ADS)
Xu, C.; Chao, B. F.
2017-12-01
We compute the coseismic change in gravitational potential energy Eg using spherical-Earth elastic dislocation theory and either a point-source fault model or a finite fault model. The rate of accumulative coseismic Eg loss produced by historical earthquakes from 1976 to 2016 (about 42,000 events in the GCMT catalogue) is estimated to be on the order of -2.1×10^20 J/a, or -6.7 TW (1 TW = 10^12 W), amounting to about 15% of the total terrestrial heat flow. The energy loss is dominated by thrust faulting, especially mega-thrust earthquakes such as the 2004 Sumatra earthquake (Mw 9.0) and the 2011 Tohoku-Oki earthquake (Mw 9.1). Notably, the very deep-focus 1994 Bolivia earthquake (Mw 8.2) and 2013 Okhotsk earthquake (Mw 8.3) produced significant overall coseismic Eg gain according to our calculation. The accumulative coseismic Eg loss occurs mainly in the mantle; the Earth's core also loses coseismic Eg, but by a relatively smaller amount. By contrast, the Earth's crust gains Eg cumulatively because of coseismic deformation. We further investigate the tectonic signature of these coseismic crustal gravitational potential energy changes in complex tectonic zones such as the Taiwan region and the northeastern margin of the Tibetan Plateau.
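The quoted J/a-to-TW conversion can be checked directly (using a Julian year of 365.25 days; the -2.1e20 J/a rate is the value stated above):

```python
SECONDS_PER_YEAR = 365.25 * 86400.0       # ~3.156e7 s (Julian year)

def joules_per_year_to_tw(rate_j_per_a):
    """Convert an energy rate from joules per year to terawatts (1 TW = 1e12 W)."""
    return rate_j_per_a / SECONDS_PER_YEAR / 1e12

# The accumulative loss rate of -2.1e20 J/a is about -6.7 TW
tw = joules_per_year_to_tw(-2.1e20)
```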
Source Mechanism and Near-field Characteristics of the 2011 Tohoku-oki Tsunami
NASA Astrophysics Data System (ADS)
Yamazaki, Y.; Cheung, K.; Lay, T.
2011-12-01
The Tohoku-oki great earthquake ruptured the megathrust fault offshore of Miyagi and Fukushima in northeast Honshu with a moment magnitude of Mw 9.0 on March 11, 2011, and generated strong shaking across the region. The resulting tsunami devastated the northeastern Japan coast and damaged coastal infrastructure across the Pacific. The extensive global seismic networks, dense geodetic instruments, well-positioned buoys and wave gauges, and comprehensive runup records along the northeast Japan coast provide datasets of unprecedented quality and coverage for investigation of the tsunami source mechanism and near-field wave characteristics. Our finite-source model reconstructs the detailed source rupture process by inversion of teleseismic P waves recorded around the globe. The finite-source solution is validated through comparison with the static displacements recorded at the ARIA (JPL-GSI) GPS stations and with models obtained by inversion of high-rate GPS observations. The rupture model has two primary slip regions, near the hypocenter and along the trench; the maximum slip is about 60 m near the trench. Together with the low rupture velocity, the Tohoku-oki event has characteristics in common with tsunami earthquakes, although it ruptured across the entire megathrust. Superposition of the deformation of the subfaults from the planar fault model, according to their rupture initiation and rise times, specifies the seafloor vertical displacement and velocity for tsunami modeling. We reconstruct the 2011 Tohoku-oki tsunami from the time histories of the seafloor deformation using the dispersive long-wave model NEOWAVE (Non-hydrostatic Evolution of Ocean WAVEs). The computed results are compared with data from six GPS gauges and three wave gauges near the source at 120-200 m and 50 m water depth, as well as DART buoys positioned across the Pacific.
The shock-capturing model reproduces near-shore tsunami bores and the runup data gathered by the 2011 Tohoku Earthquake Tsunami Joint Survey Group. Spectral analysis of the computed surface elevation reveals a series of resonance modes and areas prone to tsunami hazards. This case study improves our understanding of near-field tsunami waves and validates the modeling capability to predict their impacts for hazard mitigation and emergency management.
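As a schematic of the spectral analysis step mentioned above, the sketch below recovers resonance periods as peaks in the periodogram of a synthetic surface-elevation record (the record, sampling, and periods are hypothetical stand-ins, not model output from this study):

```python
import numpy as np

# Hypothetical surface-elevation record: two "resonance modes" (periods
# 1200 s and 2700 s) plus noise, sampled every 30 s over 6 hours.
rng = np.random.default_rng(2)
dt = 30.0                                         # sampling interval, s
t = np.arange(0, 6 * 3600.0, dt)
eta = (0.5 * np.sin(2 * np.pi * t / 1200.0)
       + 0.3 * np.sin(2 * np.pi * t / 2700.0)
       + 0.05 * rng.standard_normal(t.size))

# Periodogram: resonance modes appear as spectral peaks
spec = np.abs(np.fft.rfft(eta)) ** 2
freq = np.fft.rfftfreq(t.size, d=dt)
peak_period = 1.0 / freq[np.argmax(spec[1:]) + 1]  # skip the zero-frequency bin
```

The dominant peak recovers the 1200 s mode; in practice, mapping where such peaks are amplified identifies areas prone to tsunami resonance.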
NASA Astrophysics Data System (ADS)
Williams, J. R.; Hawthorne, J.; Rost, S.; Wright, T. J.
2017-12-01
Earthquakes on oceanic transform faults often show unusual behaviour: they tend to occur in swarms, have large numbers of foreshocks, and have high stress drops. We estimate stress drops for approximately 60 M > 4 earthquakes along the Blanco oceanic transform fault, a right-lateral fault separating the Juan de Fuca and Pacific plates offshore of Oregon. We find stress drops with a median of 4.4 ± 19.3 MPa and examine how they vary with earthquake moment. We calculate stress drops using a recently developed method based on inter-station phase coherence, comparing seismic records of co-located earthquakes at a range of stations. At each station, we apply an empirical Green's function (eGf) approach to remove phase path effects and isolate the relative apparent source time functions. The apparent source time functions should vary among stations at periods shorter than a P wave's travel time across the earthquake rupture area. We therefore compute the rupture length of the larger earthquake by identifying the frequency at which the relative apparent source time functions start to vary among stations, leading to low inter-station phase coherence, and we determine a stress drop from the rupture length and moment of the larger earthquake. Our initial stress drop estimates increase with increasing moment, suggesting that earthquakes on the Blanco fault are not self-similar. However, these stress drops may be biased by several factors, including depth phases, trace alignment, and source co-location. We find that the inclusion of depth phases (such as pP) in the analysis time window has a negligible effect on the phase coherence of our relative apparent source time functions. We find that trace alignment must be accurate to within 0.05 s to allow us to identify variations in the apparent source time functions at periods relevant for M > 4 earthquakes.
We check that the alignments are accurate enough by comparing P wave arrival times across groups of earthquakes. Finally, we note that the eGf path-effect removal will be unsuccessful if earthquakes are too far apart. We therefore calculate relative earthquake locations from our estimated differential P wave arrival times, and examine how our stress drop estimates vary with inter-earthquake distance.
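The core idea of inter-station phase coherence can be sketched in a few lines of numpy: average the unit phasors of the stations' spectral phases, frequency by frequency. This is a simplified illustration of the concept, not the full eGf processing chain of the study, and the synthetic "stations" below are hypothetical:

```python
import numpy as np

def phase_coherence(signals):
    """Inter-station phase coherence per frequency bin.

    For each frequency, average the unit phasors of the signals' Fourier
    phases across stations; identical phases give coherence near 1,
    random phases give small values.
    """
    specs = np.fft.rfft(np.asarray(signals), axis=-1)
    phasors = specs / np.maximum(np.abs(specs), 1e-30)   # unit phasors
    return np.abs(phasors.mean(axis=0))

rng = np.random.default_rng(1)
t = np.arange(256)
common = np.sin(2 * np.pi * 13 * t / 256)     # shared source signature (bin 13)
stations = [common + 0.01 * rng.standard_normal(t.size) for _ in range(5)]
coh = phase_coherence(stations)
```

At the frequency carrying the shared signal the coherence is close to 1; in the real method, the frequency where coherence drops marks the corner tied to the rupture dimension.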
NASA Astrophysics Data System (ADS)
Wang, R.; Gu, Y. J.; Schultz, R.; Kim, A.; Chen, Y.
2015-12-01
During the past four years, the number of earthquakes with magnitudes greater than three has substantially increased in the southern section of the Western Canada Sedimentary Basin (WCSB). While some of these events are likely associated with tectonic forces, especially along the foothills of the Canadian Rockies, a significant fraction occurred in previously quiescent regions and has been linked to wastewater disposal or hydraulic fracturing. A proper assessment of the origin and source properties of these 'induced earthquakes' requires careful analysis and modeling of regional broadband data, which have steadily improved during the past 8 years thanks to the recent establishment of regional broadband seismic networks such as CRANE, RAVEN, and TD. Several earthquakes, especially those close to fracking activities (e.g., near the town of Fox Creek, Alberta), are analyzed. Our preliminary full moment tensor inversion results show maximum horizontal compressional orientations (P-axes) along the northeast-southwest direction, which agree with the regional stress directions from borehole breakout data and with the P-axes of historical events. Decomposition of these moment tensors shows evidence of strike-slip mechanisms with near-vertical fault plane solutions, comparable to the focal mechanisms of injection-induced earthquakes in Oklahoma. Minimal isotropic components are observed, while a modest percentage of compensated linear vector dipole (CLVD) components, which have been linked to fluid migration, may be required to match the waveforms. To further evaluate the non-double-couple components, we compare the outcomes of full, deviatoric, and pure double-couple (DC) inversions using multiple frequency ranges and phases. Improved location and depth information from a novel grid search greatly assists the identification and classification of earthquakes in potential connection with fluid injection or extraction.
Overall, a systematic comparison of the source attributes of intermediate-sized earthquakes presents a new window into the nature of potentially induced earthquakes in the WCSB.
The cause of larger local magnitude (Mj) in western Japan
NASA Astrophysics Data System (ADS)
Kawamoto, H.; Furumura, T.
2017-12-01
The local magnitude of the Japan Meteorological Agency (JMA) scale (Mj) in Japan sometimes show a significant discrepancy between Mw. The Mj is calculated using the amplitude of the horizontal component of ground displacement recorded by seismometers with the natural period of T0=5 s using Katsumata et al. (2004). A typical example of such a discrepancy in estimating Mj was an overestimation of the 2000 Western Tottori earthquake (Mj=7.3, Mw=6.7; hereafter referred to as event T). In this study, we examined the discrepancy between Mj and Mw for recent large earthquakes occurring in Japan.We found that the most earthquakes with larger Mj (>Mw) occur in western Japan while the earthquakes in northern Japan show reasonable Mj (=Mw). To understand the cause of such larger Mj for western Japan earthquakes we examined the strong motion record from the K-NET and KiK-net network for the event T and other earthquakes for reference. The observed ground displacement record from the event T shows a distinctive Love wave packet in tangential motion with a dominant period of about T=5 s which propagates long distances without showing strong dispersions. On the other hand, the ground motions from the earthquakes in northeastern Japan do not have such surface wave packet, and attenuation of ground motion is significant. Therefore, the overestimation of the Mj for earthquakes in western Japan may be attributed to efficient generation and propagation properties of Love wave probably relating to the crustal structure of western Japan. To explain this, we then conducted a numerical simulation of seismic wave propagation using 3D sedimentary layer model (JIVSM; Koketsu et al., 2012) and the source model of the event T. The result demonstrated the efficient generation of Love wave from the shallow strike-slip source which propagates long distances in western Japan without significant dispersions. 
On the other hand, surface waves were generated much less efficiently when a sedimentary-layer model of northeastern Japan was used. In that case, the attenuation of the surface waves is very significant because of dispersion and scattering as they propagate through sedimentary basins. Therefore, the overestimation of Mj for earthquakes in western Japan is strongly related to the crustal structure of western Japan, which generates distinctive Love-wave packets that persist over long distances.
Earthquake source properties from instrumented laboratory stick-slip
Kilgore, Brian D.; McGarr, Arthur F.; Beeler, Nicholas M.; Lockner, David A.; Thomas, Marion Y.; Mitchell, Thomas M.; Bhat, Harsha S.
2017-01-01
Stick-slip experiments were performed to determine the influence of the testing apparatus on source properties, to develop methods relating stick-slip to natural earthquakes, and to examine the hypothesis of McGarr [2012] that the product of stiffness, k, and slip duration, Δt, is scale-independent and of the same order as for earthquakes. The experiments use the double-direct shear geometry with Sierra White granite at 2 MPa normal stress and a remote slip rate of 0.2 µm/s. To determine apparatus effects, disc springs were added to the loading column to vary k. Duration, slip, slip rate, and stress drop decrease with increasing k, consistent with a spring-block slider model. However, kΔt is constant neither for the data nor for the model; this results from varying stiffness at fixed scale. In contrast, additional analysis of laboratory stick-slip studies from a range of standard testing apparatuses is consistent with McGarr's hypothesis: kΔt is scale-independent, similar to that of earthquakes, equivalent to the ratio of static stress drop to average slip velocity, and similar to the ratio of the shear modulus to the wavespeed of rock. These properties result from conducting experiments over a range of sample sizes, using rock samples with the same elastic properties as the Earth, and from scale-independent design practices.
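The closing equivalence can be checked for order of magnitude: kΔt (stiffness in stress per unit slip, multiplied by duration) should be comparable both to the ratio of static stress drop to average slip velocity and to the ratio of shear modulus to shear wavespeed. A minimal sketch with illustrative granite-like values, which are assumptions rather than numbers from the study:

```python
# Order-of-magnitude check of the scale-independence claim: k*dt should be
# comparable to (static stress drop) / (average slip velocity) and to mu / c.
# All values below are illustrative assumptions, not data from the experiments.
mu = 30e9          # shear modulus of granite, Pa
c = 3000.0         # shear wavespeed, m/s
stress_drop = 3e6  # static stress drop, Pa
slip_rate = 0.3    # average slip velocity, m/s

print(mu / c)                   # shear modulus / wavespeed, Pa*s/m
print(stress_drop / slip_rate)  # stress drop / slip velocity, Pa*s/m
```

Both ratios come out near 10^7 Pa·s/m, the same order claimed for kΔt across scales.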
NASA Astrophysics Data System (ADS)
Wong, N. Z.; Feng, L.; Hill, E.
2017-12-01
The Sumatran plate boundary has experienced five Mw > 8 great earthquakes, a handful of Mw 7-8 earthquakes, and numerous small to moderate events since the 2004 Mw 9.2 Sumatra-Andaman earthquake. Geodetic studies of the moderate earthquakes have mostly been passed over in favour of the larger events. In this study, we therefore present a catalog of coseismic uniform-slip models for one Mw 7.2 earthquake and 17 Mw 5.9-6.9 events that have mostly gone geodetically unstudied. These events occurred close to continuous stations of the Sumatran GPS Array (SuGAr), allowing the network to record their surface deformation. However, because of their relatively small magnitudes, most of these moderate earthquakes were recorded by only 1-4 GPS stations. With such limited observations per event, we first constrain most of the model parameters (e.g., location, slip, patch size, strike, dip, rake) using various external sources (e.g., the ANSS catalog, gCMT, Slab1.0, and empirical relationships). We then use grid-search forward models to explore a range of some of these parameters (geographic position for all events, and additionally depth for some events). Our results indicate that gCMT centroid locations in the Sumatran subduction zone might be biased towards the west for smaller events, while ANSS epicentres might be biased towards the east. The more accurate locations of these events are potentially useful for understanding the nature of various structures along the megathrust, particularly the persistent rupture barriers.
NASA Astrophysics Data System (ADS)
Yi, Lei; Xu, Caijun; Wen, Yangmao; Zhang, Xu; Jiang, Guoyan
2018-01-01
The 2016 Ecuador earthquake ruptured the Ecuador-Colombia subduction interface, where several historic megathrust earthquakes had occurred. To determine a detailed rupture model, Interferometric Synthetic Aperture Radar (InSAR) images and teleseismic data sets were objectively weighted using a modified Akaike's Bayesian Information Criterion (ABIC) method and jointly inverted for the rupture process of the earthquake. In modeling the rupture process, a constrained waveform-length method was used in which, unlike the traditional subjectively selected waveform length, the lengths of the inverted waveforms are strictly constrained by the rupture velocity and the rise time (the slip duration). The optimal rupture velocity and rise time, estimated by grid search, are 2.0 km/s and 20 s, respectively. The inverted model shows that the event is dominated by thrust movement, with a released moment of 5.75 × 10^20 N·m (Mw 7.77). The slip distribution extends southward along the Ecuador coastline in an elongated stripe at depths between 10 and 25 km. The slip model is composed of two asperities with slip exceeding 4 m. The source time function is approximately 80 s long and is separated into two segments corresponding to the two asperities. The small slip in the updip section of the fault plane resulted in small tsunami waves, consistent with observations near the coast. We suggest that the rupture zone of the 2016 earthquake likely does not overlap with that of the 1942 earthquake.
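The moment-to-magnitude conversion quoted above (5.75 × 10^20 N·m giving Mw 7.77) follows the standard Hanks and Kanamori (1979) relation, Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m. A minimal sketch:

```python
import math

def moment_magnitude(m0: float) -> float:
    """Moment magnitude from seismic moment m0 in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# The released moment of 5.75e20 N*m reported above corresponds to Mw 7.77.
print(round(moment_magnitude(5.75e20), 2))  # 7.77
```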
Wald, D.J.; Earle, P.S.; Allen, T.I.; Jaiswal, K.; Porter, K.; Hearne, M.
2008-01-01
The Prompt Assessment of Global Earthquakes for Response (PAGER) System plays a primary alerting role for global earthquake disasters as part of the U.S. Geological Survey’s (USGS) response protocol. We provide an overview of the PAGER system, both of its current capabilities and our ongoing research and development. PAGER monitors the USGS’s near real-time U.S. and global earthquake origins and automatically identifies events that are of societal importance, well in advance of ground-truth or news accounts. Current PAGER notifications and Web pages estimate the population exposed to each seismic intensity level. In addition to being a useful indicator of potential impact, PAGER’s intensity/exposure display provides a new standard in the dissemination of rapid earthquake information. We are currently developing and testing a more comprehensive alert system that will include casualty estimates. This is motivated by the idea that an estimated range of possible number of deaths will aid in decisions regarding humanitarian response. Underlying the PAGER exposure and loss models are global earthquake ShakeMap shaking estimates, constrained as quickly as possible by finite-fault modeling and observed ground motions and intensities, when available. Loss modeling is being developed comprehensively with a suite of candidate models that range from fully empirical to largely analytical approaches. Which of these models is most appropriate for use in a particular earthquake depends on how much is known about local building stocks and their vulnerabilities. A first-order country-specific global building inventory has been developed, as have corresponding vulnerability functions. For calibrating PAGER loss models, we have systematically generated an Atlas of 5,000 ShakeMaps for significant global earthquakes during the last 36 years. For many of these, auxiliary earthquake source and shaking intensity data are also available. Refinements to the loss models are ongoing. 
Fundamental to such an alert system, we are also developing computational and communications infrastructure for rapid and robust operations and worldwide notifications. PAGER’s methodologies and datasets are being developed in an open environment to support other loss estimation efforts and provide avenues for outside collaboration and critique.
Geophysics: The size and duration of the Sumatra-Andaman earthquake from far-field static offsets
Banerjee, P.; Pollitz, F.F.; Burgmann, R.
2005-01-01
The 26 December 2004 Sumatra earthquake produced static offsets at continuously operating GPS stations at distances of up to 4,500 kilometers from the epicenter. We used these displacements to model the earthquake, taking into account the Earth's shape and depth-varying rigidity. The results imply that the average slip was >5 meters along the full length of the rupture, including the ~650-kilometer-long Andaman segment. Comparison of the source derived from the far-field static offsets with seismically derived estimates suggests that 25 to 35% of the total moment release occurred at periods greater than 1 hour. Taking into consideration the strong dip dependence of moment estimates, the magnitude of the earthquake did not exceed Mw = 9.2.
Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region
NASA Astrophysics Data System (ADS)
Chaudhary, Chhavi; Sharma, Mukat Lal
2017-12-01
Determining the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, in a small catalogue, such large events follow a different distribution function from the smaller and intermediate events. It is therefore especially important to use statistical methods that analyse as closely as possible the range of extreme values, i.e. the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events that exceed a specified threshold value; the Pareto, truncated Pareto, and tapered Pareto distributions are special cases of this family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, truncated Pareto, and tapered Pareto distributions. As a case study we consider the Himalaya, whose orogeny gives rise to large earthquakes and which is one of the most active zones of the world. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. The estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the tapered Pareto distribution describes the seismicity of the seismic source zones better than the other distributions considered in the present study.
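As a sketch of the tail modelling described above: the tapered Pareto (Kagan & Schoenberg, 2001) combines a Pareto power law in seismic moment with an exponential taper at a corner moment, with survival function S(m) = (m_t/m)^β exp((m_t - m)/m_c) for m ≥ m_t. The parameter values below are illustrative assumptions, not the values estimated for the Himalayan zones:

```python
import math

def tapered_pareto_survival(m: float, m_t: float, beta: float, m_c: float) -> float:
    """P(M > m) for the tapered Pareto: Pareto power law above threshold moment m_t,
    index beta, with an exponential taper at corner moment m_c (moments in N*m)."""
    return (m_t / m) ** beta * math.exp((m_t - m) / m_c)

def moment(mw: float) -> float:
    """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

# Illustrative parameters: threshold at Mw 5, beta = 0.66, corner near Mw 8.
m_t, m_c = moment(5.0), moment(8.0)
for mw in (6.0, 7.0, 8.0):
    print(mw, tapered_pareto_survival(moment(mw), m_t, 0.66, m_c))
```

The exponential taper is what makes events far beyond the corner magnitude vanishingly unlikely, distinguishing the tapered Pareto from the plain Pareto in the tail.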
Neo-Deterministic Seismic Hazard Assessment at Watts Bar Nuclear Power Plant Site, Tennessee, USA
NASA Astrophysics Data System (ADS)
Brandmayr, E.; Cameron, C.; Vaccari, F.; Fasan, M.; Romanelli, F.; Magrin, A.; Vlahovic, G.
2017-12-01
Watts Bar Nuclear Power Plant (WBNPP) is located within the Eastern Tennessee Seismic Zone (ETSZ), the second most active natural seismic zone in the US east of the Rocky Mountains. The largest instrumentally recorded earthquakes in the ETSZ are M 4.6, although paleoseismic evidence supports events of M ≥ 6.5. Events are mainly strike-slip and occur on steeply dipping planes at an average depth of 13 km. In this work, we apply neo-deterministic seismic hazard assessment to estimate the potential seismic input at the plant site, which has recently been targeted by the Nuclear Regulatory Commission for a seismic hazard re-evaluation. First, we perform a parametric test on several seismic source characteristics (i.e., distance, depth, strike, dip and rake) using a one-dimensional regional bedrock model to define the most conservative scenario earthquakes. Then, for the selected scenario earthquakes, the estimate of the ground-motion input at WBNPP is refined using a two-dimensional local structural model (based on the plant operator's documentation) with topography, thus accounting for site amplification and different possible rupture processes at the source. WBNPP's safe shutdown earthquake (SSE) design assumes a PGA of 0.18 g and a maximum spectral acceleration (SA, 5% damped) of 0.46 g (at periods between 0.15 and 0.5 s). Our results suggest that, although the PGA is relatively low for most of the considered scenarios, the SSE values can be reached and exceeded for the most conservative scenario earthquakes.
Detecting Tsunami Source Energy and Scales from GNSS & Laboratory Experiments
NASA Astrophysics Data System (ADS)
Song, Y. T.; Yim, S. C.; Mohtat, A.
2016-12-01
Historically, tsunami warnings based on earthquake magnitude have not been very accurate. According to a 2006 U.S. Government Accountability Office report, an unacceptable 75% false-alarm rate has prevailed in the Pacific Ocean (GAO-06-519). One of the main reasons for those inaccurate warnings is that an earthquake's magnitude does not determine the scale or power of the resulting tsunami. For the last 10 years, we have been developing both theories and algorithms to detect tsunami source energy and scales, instead of earthquake magnitudes per se, directly from real-time Global Navigation Satellite System (GNSS) stations along coastlines for early warning [Song 2007; Song et al., 2008; Song et al., 2012; Xu and Song 2013; Titov et al., 2016]. Here we report recent progress on two fronts: 1) examples of using GNSS to detect the tsunami energy scales of the 2004 Sumatra M9.1 earthquake, the 2005 Nias M8.7 earthquake, the 2010 M8.8 Chilean earthquake, the 2011 M9.0 Tohoku-Oki earthquake, and the 2015 M8.3 Illapel earthquake; and 2) new results from recent state-of-the-art wave-maker experiments and comparisons with GNSS data. Related references: Titov, V., Y. T. Song, L. Tang, E. N. Bernard, Y. Bar-Sever, and Y. Wei (2016), Consistent estimates of tsunami energy show promise for improved early warning, Pure Appl. Geophys., doi:10.1007/s00024-016-1312-1. Xu, Z. and Y. T. Song (2013), Combining the all-source Green's functions and the GPS-derived source for fast tsunami prediction - illustrated by the March 2011 Japan tsunami, J. Atmos. Oceanic Tech., jtechD1200201. Song, Y. T., I. Fukumori, C. K. Shum, and Y. Yi (2012), Merging tsunamis of the 2011 Tohoku-Oki earthquake detected over the open ocean, Geophys. Res. Lett., doi:10.1029/2011GL050767. Song, Y. T., L.-L. Fu, V. Zlotnicki, C. Ji, V. Hjorleifsdottir, C. K. Shum, and Y. Yi (2008), The role of horizontal impulses of the faulting continental slope in generating the 26 December 2004 tsunami, Ocean Modelling, doi:10.1016/j.ocemod.2007.10.007. Song, Y. T. (2007), Detecting tsunami genesis and scales directly from coastal GPS stations, Geophys. Res. Lett., 34, L19602, doi:10.1029/2007GL031681.
NASA Astrophysics Data System (ADS)
Lange, Dietrich; Ruiz, Javier; Carrasco, Sebastián; Manríquez, Paula
2018-04-01
On 2016 December 25, an Mw 7.6 earthquake broke a portion of the southern Chilean subduction zone south of Chiloé Island, in the central part of the rupture area of the Mw 9.5 1960 Valdivia earthquake. This region is characterized by repeated earthquakes, in 1960 and in historical times, and by very sparse interseismic activity due to the subduction of a young (~15 Ma), and therefore hot, oceanic plate. We estimate the coseismic slip distribution based on a kinematic finite-fault source model, through joint inversion of teleseismic body waves and strong-motion data. The coseismic slip model yields a total seismic moment of 3.94 × 10^20 N·m released over ~30 s, with the rupture propagating mainly downdip and reaching a peak slip of ~4.2 m. Regional moment tensor inversion of the stronger aftershocks reveals thrust-type faulting at plate-interface depths. The fore- and aftershock seismicity is mostly related to the subduction interface, with sparse seismicity in the overriding crust. The 2016 Chiloé event broke a region of increased locking and most likely ruptured an asperity of the 1960 earthquake. The updip limits of the main event, aftershocks, foreshocks and interseismic activity are spatially similar, located ~15 km offshore and parallel to Chiloé Island's west coast. The peak slip of 4.2 m in the coseismic slip model locally exceeds the 3.38 m slip deficit accumulated since 1960. Therefore, the 2016 Chiloé earthquake possibly released strain that had built up prior to the 1960 Valdivia earthquake.
Laboratory generated M -6 earthquakes
McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.
2014-01-01
We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
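The consistency of these tiny events with earthquake scaling can be illustrated with the standard circular-crack relation of Eshelby (1957), Δσ = (7/16) M0 / r^3, together with the Hanks-Kanamori magnitude. This is a hedged sketch using a mid-range stress drop, not the authors' own calculation:

```python
def seismic_moment(mw: float) -> float:
    """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def circular_crack_radius(m0: float, stress_drop: float) -> float:
    """Source radius (m) from Eshelby's circular crack: stress_drop = (7/16) m0 / r**3."""
    return (7.0 * m0 / (16.0 * stress_drop)) ** (1.0 / 3.0)

# An M -6 event with a 5 MPa stress drop (mid-range of the reported 1-10 MPa)
# implies a source radius of a few millimetres, matching the mm-scale patches.
r = circular_crack_radius(seismic_moment(-6.0), 5e6)
print(f"{r * 1e3:.1f} mm")
```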
Analysis of post-earthquake landslide activity and geo-environmental effects
NASA Astrophysics Data System (ADS)
Tang, Chenxiao; van Westen, Cees; Jetten, Victor
2014-05-01
Large earthquakes can cause huge losses to human society through ground shaking, fault rupture, and the high density of co-seismic landslides that can be triggered in mountainous areas. In areas affected by such large earthquakes, the landslide threat continues after the earthquake, as co-seismic landslides may be reactivated by high-intensity rainfall events. Huge amounts of landslide material remain on the slopes, leading to a high frequency of landslides and debris flows after earthquakes, which threaten lives and create great difficulties for post-seismic reconstruction in the affected regions. Without critical information such as the frequency and magnitude of landslides after a major earthquake, reconstruction planning and hazard mitigation are difficult. The area hit by the Mw 7.9 Wenchuan earthquake of 2008, in Sichuan province, China, shows some typical examples of poor reconstruction planning due to lack of information: huge debris flows destroyed several reconstructed settlements. This research aims to analyze the decay in post-seismic landslide activity in areas hit by a major earthquake, taking the area affected by the 2008 Wenchuan earthquake as the study area. The study will analyze the factors that control post-earthquake landslide activity through quantification of landslide volume changes as well as through numerical simulation of the initiation process, to obtain a better understanding of the potential threat of post-earthquake landslides as a basis for mitigation planning. The research will use high-resolution stereo satellite images, UAVs, and Terrestrial Laser Scanning (TLS) to obtain multi-temporal DEMs to monitor the change of loose sediments and post-seismic landslide activity.
A debris-flow initiation model will be developed that incorporates the volume of source materials, vegetation re-growth, and the intensity-duration of the triggering precipitation, and that evaluates different initiation mechanisms such as erosion and landslide reactivation. This initiation model will be integrated with a run-out model to simulate the dynamic process of post-earthquake debris flows in the study area for a future period and to predict the decay of landslide activity in the future.
NASA Astrophysics Data System (ADS)
Suzuki, K.; Kamiya, S.; Takahashi, N.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) installed DONET (Dense Oceanfloor Network System for Earthquakes and Tsunamis) off the Kii Peninsula, southwest of Japan, to monitor earthquakes and tsunamis. The stations of DONET1, distributed in Kumano-nada, and DONET2, distributed off Muroto, were installed by August 2011 and April 2016, respectively. After the installation of all 51 stations, DONET was transferred to the National Research Institute for Earth Science and Disaster Resilience (NIED), and NIED and JAMSTEC have collaborated in the operation of DONET since April 2016. To investigate the seismicity around the source areas of the 1946 Nankai and 1944 Tonankai earthquakes, we detected earthquakes in the records of the broadband seismometers installed in DONET. Because the DONET stations are located far from the land stations, they allow detection of smaller earthquakes than land stations alone. Monitoring the spatio-temporal change of seismicity is important for understanding the stress state and the seismogenic mechanism. In this study we aim to evaluate the seismicity around the source areas of the Nankai and Tonankai earthquakes using our earthquake catalogue. The frequency-magnitude relationships of earthquakes in the DONET1 and DONET2 areas show a nearly constant slope of about -1 for earthquakes with ML larger than 1.5 and 2.5, respectively, satisfying the Gutenberg-Richter law; the slope for smaller earthquakes approaches 0, reflecting the detection limits. While most of the earthquakes occurred in the aftershock area of the 2004 off the Kii Peninsula earthquakes, very limited activity was detected in the source region of the Nankai and Tonankai earthquakes, except for the large earthquake (MJMA = 6.5) of 1 April 2016 and its aftershocks. We will evaluate the detection limit in more detail and investigate the spatio-temporal change of seismicity as more data accumulate.
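The slope of about -1 quoted above (the b-value) can be estimated from a catalogue with the maximum-likelihood formula of Aki (1965), b = log10(e) / (mean(M) - Mc), for magnitudes at or above the completeness magnitude Mc. A minimal sketch on a synthetic Gutenberg-Richter catalogue (the magnitudes are simulated, not DONET data):

```python
import math
import random

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood b-value from magnitudes at or above m_c."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic catalogue obeying the Gutenberg-Richter law with b = 1: magnitudes
# above completeness are exponentially distributed with rate b * ln(10).
random.seed(0)
mags = [2.5 + random.expovariate(1.0 * math.log(10)) for _ in range(5000)]
print(round(b_value_mle(mags, 2.5), 2))  # close to 1.0
```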
NASA Astrophysics Data System (ADS)
Yolsal-Cevikbilen, Seda; Karaoglu, Özgür; Taymaz, Tuncay; Helvaci, Cahit
2013-04-01
The mechanical behavior of the continental lithosphere in the Aegean region is one of the most interesting open questions in the earth sciences. The region has experienced complex tectonic events that produced strong heterogeneity in the crust (i.e., large thrusts, exhumation shear zones, and extensional detachments), as in few other continental regions. In order to investigate the mechanical causes of the ongoing lithospheric-scale extension in the region, we must consider all of the kinematic and dynamic agents at play: (1) roll-back of the subducting slab and back-arc extension; (2) westward extrusion of the Anatolian micro-plate; (3) block rotations of the Aegean region and western Anatolia; and (4) transtensional transform faults. Furthermore, seismological studies, particularly of earthquake source mechanisms and rupture modeling, play an important role in deciphering the ongoing deformation and seismotectonic characteristics of the region. Recently, many moderate earthquakes occurred in the Gulfs of Gökova, Kuşadası, and Sıǧacık and their surroundings. In the present study, we examined the source mechanisms and rupture histories of those earthquakes with Mw > 5.0 in order to retrieve the geometry of active faulting, source characteristics, kinematic and dynamic source parameters, and the current deformation of the region, using teleseismic body-waveform inversion of long-period P- and SH-waves and broad-band P-waveforms recorded by GDSN and FDSN stations. We also checked the first-motion polarities of P-waveforms recorded at regional and teleseismic stations and applied several uncertainty tests to find the error limits of the minimum-misfit solutions. The inversion results revealed E-W directed normal faulting mechanisms with a small left-lateral strike-slip component in the Gulf of Gökova, and NE-SW oriented right-lateral strike-slip faulting mechanisms in the Gulf of Sıǧacık.
The earthquakes mostly have N-S and NW-SE directed T-axes, consistent with the geology and seismotectonic structures of the region. Further, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated, and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of the tsunami waves, their propagation, and the effects of coastal topography. For the simulation of tsunami generation, we used nonlinear shallow-water models (i.e., TUNAMI-N2, AVI-NAMI and NAMI DANCE) with GEBCO-BODC bathymetry data. Synthetic tsunami wave amplitudes were calculated for several hypothetical scenarios of historical tsunamigenic earthquakes that occurred along the Hellenic subduction zone and the Dodecanese Islands. Illustrative examples depicting the characteristics of tsunami wave propagation and the effects of coastal topography and near-shore amplification are also given. Finally, the potential future tsunami risk along the SW Anatolian coasts associated with destructive earthquakes (M > 7.0) along the Hellenic subduction zone and near the deep Rhodes-Dalaman Trough is clearly demonstrated.
Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake
NASA Astrophysics Data System (ADS)
Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano
2016-04-01
On May 11, 2011, a moderate seismic event (Mw = 5.2) struck the city of Lorca (southeast Spain), causing nine casualties, a large number of injured people, and damage to buildings. The largest PGA value ever recorded in Spain (360 cm/s^2) was observed at the accelerometric station located in Lorca (LOR), and it was explained as an effect of source directivity rather than of local site conditions. In recent years, different source models retrieved from inversions of geodetic or seismological data, or a combination of the two, have been published. To investigate the variability that equivalent source models of a moderate earthquake can introduce into the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration, with four different source models from the literature as input. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different sources, the ground-motion variability in terms of pseudo-spectral velocity (at 1 s) can reach one order of magnitude for near-source receivers or for sites affected by forward directivity. Finally, we computed the strong motion at frequencies higher than 1 Hz using empirical Green's functions and the source-model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms fit satisfactorily the signals recorded at the LOR station as well as at the other stations close to the source.
NASA Astrophysics Data System (ADS)
Miura, S.; Ohta, Y.; Ohzono, M.; Kita, S.; Iinuma, T.; Demachi, T.; Tachibana, K.; Nakayama, T.; Hirahara, S.; Suzuki, S.; Sato, T.; Uchida, N.; Hasegawa, A.; Umino, N.
2011-12-01
We propose a source fault model of the large M7.1 intraslab earthquake, deduced from a dense GPS network. The coseismic displacements obtained from the GPS data analysis clearly show the spatial pattern specific to intraslab earthquakes, not only in the horizontal components but also in the vertical ones. A rectangular fault with uniform slip was estimated by a non-linear inversion approach. The results indicate that this simple rectangular fault model can explain the overall features of the observations. The released moment is equivalent to Mw 7.17. The hypocenter depth of the main shock estimated by the Japan Meteorological Agency is slightly deeper than the neutral plane between the down-dip compression (DC) and down-dip extension (DE) stress zones of the double-planed seismic zone. This suggests that the neutral plane was deepened by the huge slip of the 2011 M9.0 Tohoku earthquake and that the rupture of the M7.1 thrust earthquake initiated at that depth, although more investigation is required to confirm this idea. The estimated fault plane makes an angle of ~60 degrees with the surface of the subducting Pacific plate. This is consistent with the hypothesis that intraslab earthquakes are reactivations of preexisting hydrated weak zones formed during the bending of oceanic plates around outer-rise regions.
Estimation of Peak Ground Acceleration (PGA) for Peninsular Malaysia using geospatial approach
NASA Astrophysics Data System (ADS)
Nouri Manafizad, Amir; Pradhan, Biswajeet; Abdullahi, Saleh
2016-06-01
Among the various types of natural disasters, earthquakes are considered one of the most destructive, imposing great human fatalities and economic losses. Visualization of earthquake events and estimation of peak ground motions provide a strong tool for scientists and authorities to predict and mitigate the after-effects of earthquakes; it is also useful for businesses such as insurance companies in evaluating investment risk. Although Peninsular Malaysia is situated on the stable part of the Sunda plate, it is seismically influenced by the very active earthquake sources of the Sumatran fault and subduction zones. This study modelled the seismic zones and estimated the maximum credible earthquake (MCE) based on classified data for the period 1900 to 2014. A deterministic approach was implemented for the analysis, with attenuation equations used for the two zones. The results show that the PGA produced by the subduction zone ranges from 2 to 64 gal, while that from the fault zone varies from 1 to 191 gal. In addition, the PGA generated by the fault zone is more critical than that of the subduction zone for the selected seismic model.
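The PGA values above are given in gal (1 gal = 1 cm/s^2), whereas engineering studies more often quote fractions of g (1 g = 980.665 gal). A small conversion sketch:

```python
G_IN_GAL = 980.665  # standard gravity expressed in gal (cm/s^2)

def gal_to_g(pga_gal: float) -> float:
    """Convert peak ground acceleration from gal to a fraction of g."""
    return pga_gal / G_IN_GAL

# The fault-zone maximum of 191 gal is about 0.19 g; the subduction-zone
# maximum of 64 gal is about 0.07 g.
print(round(gal_to_g(191), 3), round(gal_to_g(64), 3))
```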
Understanding intraplate earthquakes in Sweden: the where and why
NASA Astrophysics Data System (ADS)
Lund, Björn; Tryggvason, Ari; Chan, NeXun; Högdahl, Karin; Buhcheva, Darina; Bödvarsson, Reynir
2016-04-01
The Swedish National Seismic Network (SNSN) underwent a rapid expansion and modernization between 2000 and 2010. The number of stations increased from 6 to 65, all broadband or semi-broadband with higher than standard sensitivity, and all transmitting data in real time. This has led to a significant increase in the number of detected earthquakes, with the magnitude of completeness being approximately ML 0.5 within the network. During the last 15 years some 7,300 earthquakes have been detected and located, compared to approximately 1,800 earthquakes in the Swedish catalog from 1375 to 1999. We have used the recent earthquake catalog and various anthropogenic sources (e.g., mine blasts, quarry blasts and infrastructure construction blasts) to derive low-resolution 3D P- and S-wave velocity models for all of Sweden. Including the blasts provides a more even geographical distribution of sources as well as good constraints on the locations. The resolution of the derived velocity models is on the order of 20 km in the well-resolved areas. A fairly robust feature of the derived models is a difference in Vp/Vs ratio between the Paleoproterozoic rocks belonging to the TIB (Transscandinavian Igneous Belt) and the Svecofennian rocks east and north of this region (a Vp/Vs ratio of about 1.72 prevails in the former, compared to a value below 1.70 in the latter) at depths down to 15 km. All earthquakes occurring since 2000 have been relocated in the 3D velocity model. The results show very clear differences in how earthquakes occur in different parts of Sweden. In the north, north of approximately 64 degrees latitude, most earthquakes occur on or in the vicinity of the Holocene postglacial faults. From 64N to approximately 60N, earthquake activity is concentrated along the northeast coastline, with some relation to the offset in the bedrock from the onshore area to the offshore Bay of Bothnia.
In southern Sweden earthquake activity is more widely distributed, with a concentration in a band across Lake Vänern, following the boundary between the TIB and the Sveconorwegian orogenic belt. We identify a number of earthquake lineaments in the country and relate these to very different geological units and boundaries, from old Paleoproterozoic features to the youngest postglacial faults. We show how earthquake depths vary in the different seismically active regions, and identify events occurring down to 40 km depth in the crust. Focal mechanisms show that in much of Sweden strike-slip faulting dominates at seismogenic depths. There are however systematic variations within the country. Inverting the mechanisms for the stress field indicates that the maximum horizontal stress direction is NW-SE, in agreement with ridge-push, in much of the country. We will discuss other possible driving mechanisms, such as the ongoing postglacial rebound.
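The magnitude of completeness quoted above (approximately ML 0.5) is typically read off the frequency-magnitude distribution of the catalog. A minimal sketch of the common maximum-curvature estimator, using a synthetic Gutenberg-Richter catalog (the b-value, seed, and catalog are illustrative assumptions, not SNSN data):

```python
import numpy as np

def completeness_maxc(mags, bin_width=0.1):
    """Magnitude of completeness via the maximum-curvature method:
    the centre of the most populated magnitude bin."""
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)] + bin_width / 2

# Synthetic Gutenberg-Richter catalog, complete above ML 0.5 (b ~ 1.0)
rng = np.random.default_rng(42)
mags = 0.5 + rng.exponential(scale=1 / 2.3, size=5000)
print(round(completeness_maxc(mags), 1))
```

Because the exponential magnitude density is strictly decreasing above the completeness threshold, the most populated bin sits at the threshold itself, recovering a value near ML 0.5.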
Tilt precursors before earthquakes on the San Andreas fault, California
Johnston, M.J.S.; Mortensen, C.E.
1974-01-01
An array of 14 biaxial shallow-borehole tiltmeters (with 10⁻⁷ radian sensitivity) has been installed along 85 kilometers of the San Andreas fault during the past year. Earthquake-related changes in tilt have been simultaneously observed on up to four independent instruments. At earthquake distances greater than 10 earthquake source dimensions, there are few clear indications of tilt change. For the four instruments with the longest records (>10 months), 26 earthquakes have occurred since July 1973 with at least one instrument closer than 10 source dimensions and 8 earthquakes with more than one instrument within that distance. Precursors in tilt direction have been observed before more than 10 earthquakes or groups of earthquakes, and no similar effect has yet been seen without the occurrence of an earthquake.
NASA Astrophysics Data System (ADS)
Kun, C.
2015-12-01
Studies have shown that ground motion parameters estimated from attenuation relationships are often greater than the observed values, mainly because the multiple ruptures of a large earthquake reduce the pulse height of the source time function. In the absence of real-time station data after an earthquake, this paper attempts to impose constraints from the source in order to improve the accuracy of ShakeMaps. The causative fault of the Yushu Ms 7.1 earthquake is nearly vertical (dip 83°), and its source process was distinctly dispersed in time and space. The mainshock can therefore be divided into several sub-events based on the source process. The magnitude of each sub-event is derived from the area under the corresponding pulse of the source time function, and its location from the source process. We used a ShakeMap method that accounts for site effects to generate a ShakeMap for each sub-event. Finally, the mainshock ShakeMap was obtained by superposing the sub-event ShakeMaps in space. A ShakeMap for the mainshock with a single magnitude was also derived from the surface rupture of the causative fault mapped in field surveys. We compared the ShakeMaps from both methods with the investigated intensities. The comparisons show that the decomposition method reflects the shaking more accurately in the near field; in the far field, where the shaking is controlled by the weakened influence of the source, the estimated intensity VI area was smaller than the actually investigated one. The far-field intensity may be related to the increased shaking duration of the two events. In general, the decomposition method based on the source process, which considers a ShakeMap for each sub-event, is feasible for disaster emergency response, decision making, and rapid disaster assessment after an earthquake.
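The superposition step above can be sketched as combining the per-sub-event shaking grids cell by cell. Taking the cell-wise maximum is an assumption (the abstract does not state the exact combination rule), and the grids and values below are hypothetical:

```python
import numpy as np

def combine_subevent_maps(maps):
    """Combine per-sub-event shaking grids into a mainshock map by
    taking the cell-wise maximum over all sub-events."""
    return np.maximum.reduce(maps)

# Three hypothetical PGA grids (in g) for three sub-events
sub1 = np.array([[0.10, 0.05], [0.02, 0.01]])
sub2 = np.array([[0.04, 0.12], [0.03, 0.02]])
sub3 = np.array([[0.01, 0.02], [0.08, 0.06]])
mainshock = combine_subevent_maps([sub1, sub2, sub3])
print(mainshock)  # cell-wise maxima over the three grids
```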
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr; Türker, Tuğba, E-mail: tturker@ktu.edu.tr
The aim of this study is to determine the earthquake hazard for different seismic sources of Ağrı and its vicinity using the exponential distribution method. A homogeneous earthquake catalog with 456 events has been compiled for the instrumental period, 1900-2015, from several sources: Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Centre (ISC), and IRIS. Ağrı and its vicinity are divided into 7 seismic source regions on the basis of the epicenter distribution of instrumental-period earthquakes, focal mechanism solutions, and existing tectonic structures. Average magnitude values are calculated for the specified magnitude ranges in each of the 7 source regions. For each region, the largest difference between the observed and expected cumulative probabilities over the magnitude classes is determined. The recurrence periods and annual numbers of occurrence of earthquakes in Ağrı and its vicinity are then estimated. As a result, occurrence probabilities are determined for earthquakes of magnitude 3.2 and above in the 7 source regions; the largest expected magnitudes are greater than 6.7 in Region 1, 4.7 in Region 2, 5.2 in Region 3, 6.2 in Region 4, 5.7 in Region 5, 7.2 in Region 6, and 6.2 in Region 7. The highest observed magnitude among the 7 source regions of Ağrı and its vicinity is 7, in Region 6. 
For Region 6, the estimated future occurrence times for the determined magnitudes are, respectively, 158 years for magnitude 7.2, 70 years for 6.7, 31 years for 6.2, 13 years for 5.7, and 6 years for 5.2.
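Recurrence periods like those quoted for Region 6 follow directly from an exponential (Gutenberg-Richter) magnitude distribution: the annual rate of events with magnitude >= m is 10^(a - b*m), and the mean recurrence period is its reciprocal. A minimal sketch with hypothetical a- and b-values (not the paper's fitted parameters):

```python
def annual_rate(a, b, m):
    """Annual rate of events with magnitude >= m from the
    Gutenberg-Richter relation log10 N = a - b*m."""
    return 10 ** (a - b * m)

def recurrence_period(a, b, m):
    """Mean recurrence period in years for magnitude >= m."""
    return 1.0 / annual_rate(a, b, m)

# Hypothetical a- and b-values for one source region
a, b = 3.5, 1.0
print(round(recurrence_period(a, b, 6.7), 1))  # → 1584.9 years
```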
NASA Astrophysics Data System (ADS)
Bayrak, Yusuf; Türker, Tuǧba
2016-04-01
The aim of this study is to determine the earthquake hazard for different seismic sources of Ağrı and its vicinity using the exponential distribution method. A homogeneous earthquake catalog with 456 events has been compiled for the instrumental period, 1900-2015, from several sources: Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Centre (ISC), and IRIS. Ağrı and its vicinity are divided into 7 seismic source regions on the basis of the epicenter distribution of instrumental-period earthquakes, focal mechanism solutions, and existing tectonic structures. Average magnitude values are calculated for the specified magnitude ranges in each of the 7 source regions. For each region, the largest difference between the observed and expected cumulative probabilities over the magnitude classes is determined. The recurrence periods and annual numbers of occurrence of earthquakes in Ağrı and its vicinity are then estimated. As a result, occurrence probabilities are determined for earthquakes of magnitude 3.2 and above in the 7 source regions; the largest expected magnitudes are greater than 6.7 in Region 1, 4.7 in Region 2, 5.2 in Region 3, 6.2 in Region 4, 5.7 in Region 5, 7.2 in Region 6, and 6.2 in Region 7. The highest observed magnitude among the 7 source regions of Ağrı and its vicinity is 7, in Region 6. 
For Region 6, the estimated future occurrence times for the determined magnitudes are, respectively, 158 years for magnitude 7.2, 70 years for 6.7, 31 years for 6.2, 13 years for 5.7, and 6 years for 5.2.
Issues on the Japanese Earthquake Hazard Evaluation
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Fukushima, Y.; Sagiya, T.
2013-12-01
The 2011 Great East Japan Earthquake forced Japan to change its policy on countermeasures against earthquake disasters, including earthquake hazard evaluation. Before March 11, Japanese earthquake hazard evaluation was based on the history of earthquakes that repeatedly occur and on the characteristic earthquake model. The source region of an earthquake was identified and its occurrence history revealed; the conditional probability of the next event was then estimated using a renewal model. After the 2011 megathrust earthquake, however, the Japanese authorities changed the policy so that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. Following this policy, three important reports were issued during the past two years. First, during 2011 and 2012 the Central Disaster Management Council (CDMC) issued a new estimate of the damage from a hypothetical Mw 9 earthquake along the Nankai trough. The model predicts, at maximum, a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most areas of southwest Japan. Next, the Earthquake Research Council revised the long-term hazard evaluation of earthquakes along the Nankai trough in May 2013, discarding the characteristic earthquake model and putting much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, given the diversity of earthquake phenomena and the current state of knowledge, it is hard to predict the occurrence of large earthquakes along the Nankai trough with present techniques. These reports created a sensation throughout the country, and local governments are struggling to prepare countermeasures. The reports comment near the end on the large uncertainty in their evaluations, but are these messages transmitted properly to the public? 
Earthquake scientists, including the authors, are involved in the discussion of these issues as committee members. However, we wonder whether the basis of these reports is scientifically appropriate. For example, there is no established method to evaluate the maximum size of an earthquake in a specific area when no record of such an event is known, yet the committee made an estimate for the Nankai trough by extrapolating available knowledge. Japanese policy makers further requested the probability of occurrence of such an event, which the committee had to decline to provide because of the lack of knowledge. This example shows that Japanese earthquake scientists are sometimes involved in important decision-making and urged to go beyond the limits of earthquake science. We consider that this difficult situation has formed through the history of Japanese earthquake science and the 'myth of the flawlessness of science' held by the government and society, who often ask for a simple answer. Open discussion with people from other fields, such as the social sciences and humanities, and with the public would be an effective way to help the public understand the complexity of the problems and to encourage appropriate countermeasures.
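The pre-2011 renewal-model procedure mentioned above computes the conditional probability of an earthquake in the next time window, given the elapsed quiescent time. A minimal sketch using a lognormal recurrence-time distribution; the distribution choice and all parameter values are illustrative assumptions (the actual Japanese evaluations use calibrated models such as the Brownian passage time distribution):

```python
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution."""
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def conditional_probability(t_elapsed, dt, mean_T, aperiodicity):
    """P(event in next dt years | no event for t_elapsed years)
    under a lognormal renewal model with the given mean recurrence
    time and coefficient of variation (aperiodicity)."""
    sigma = math.sqrt(math.log(1 + aperiodicity ** 2))
    mu = math.log(mean_T) - 0.5 * sigma ** 2   # so the mean equals mean_T
    F = lambda t: lognorm_cdf(t, mu, sigma)
    return (F(t_elapsed + dt) - F(t_elapsed)) / (1 - F(t_elapsed))

# Hypothetical fault: 100-yr mean recurrence, 70 yr elapsed, 30-yr window
p = conditional_probability(70, 30, 100, 0.3)
print(round(p, 3))
```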
Seismogenic structures of the 2006 ML4.0 Dangan Island earthquake offshore Hong Kong
NASA Astrophysics Data System (ADS)
Xia, Shaohong; Cao, Jinghe; Sun, Jinlong; Lv, Jinshui; Xu, Huilong; Zhang, Xiang; Wan, Kuiyuan; Fan, Chaoyan; Zhou, Pengxiang
2018-02-01
The northern margin of the South China Sea, a typical extensional continental margin, exhibits relatively strong intraplate seismicity. Compared with the active zones of Nanao Island, Yangjiang, and Heyuan, seismicity in the Pearl River Estuary is relatively low. However, an ML 4.0 earthquake occurred in 2006 near Dangan Island (DI) offshore Hong Kong, adjacent to the source of the historical M5.8 earthquake of 1874. To reveal the seismogenic mechanism of intraplate earthquakes near DI, we systematically analyzed the structural characteristics of the source area of the 2006 DI earthquake using integrated 24-channel seismic profiles, onshore-offshore wide-angle seismic tomography, and natural earthquake parameters. We ascertained the locations of the NW- and NE-trending faults in the DI sea area and found that the NE-trending DI fault mainly dips southeast at a high angle and cuts through the crust with an obvious low-velocity anomaly, while the NW-trending fault dips southwest at a similarly high angle. The 2006 DI earthquake was adjacent to the intersection of the NE- and NW-trending faults, which suggests that the intersection of two faults with different strikes can provide a favorable condition for the generation and triggering of intraplate earthquakes. The crustal velocity model showed a high-velocity anomaly west of DI, but south of DI a distinct body with a low-velocity anomaly in the upper crust and a high-velocity anomaly in the lower crust. Both the 1874 and 2006 DI earthquakes occurred along the edge of this body. Two vertical cross-sections nearly perpendicular to the strikes of the intersecting faults revealed good spatial correlations between the 2006 DI earthquake and the low-to-high velocity transition within the body. 
This result indicates that the transitional zone might be a structurally weak body that can store strain energy and release it in brittle failure, making this an earthquake-prone area.
Transparent Global Seismic Hazard and Risk Assessment
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Pinho, Rui; Crowley, Helen
2013-04-01
Vulnerability to earthquakes is increasing, yet advanced reliable risk assessment tools and data are inaccessible to most, despite being a critical basis for managing risk. Also, there are few, if any, global standards that allow us to compare risk between various locations. The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange, and leverages the knowledge of leading experts for the benefit of society. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through global projects, open-source IT development and collaborations with more than 10 regions, leading experts are collaboratively developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. Guided by the needs and experiences of governments, companies and citizens at large, they work in continuous interaction with the wider community. A continuously expanding public-private partnership constitutes the GEM Foundation, which drives the collaborative GEM effort. An integrated and holistic approach to risk is key to GEM's risk assessment platform, OpenQuake, that integrates all above-mentioned contributions and will become available towards the end of 2014. Stakeholders worldwide will be able to calculate, visualise and investigate earthquake risk, capture new data and to share their findings for joint learning. Homogenized information on hazard can be combined with data on exposure (buildings, population) and data on their vulnerability, for loss assessment around the globe. 
Furthermore, for a truly integrated view of seismic risk, users can add social vulnerability and resilience indices to maps and estimate the costs and benefits of different risk management measures. The following global data, models and methodologies will be available in the platform; some will be released to the public earlier, such as the ISC-GEM global instrumental catalogue (released January 2013).
Datasets:
• Global Earthquake History Catalogue [1000-1903]
• Global Instrumental Catalogue [1900-2009]
• Global Geodetic Strain Rate Model
• Global Active Fault Database
• Tectonic Regionalisation
• Buildings and Population Database
• Earthquake Consequences Database
• Physical Vulnerability Database
• Socio-Economic Vulnerability and Resilience Indicators
Models:
• Seismic Source Models
• Ground Motion (Attenuation) Models
• Physical Exposure Models
• Physical Vulnerability Models
• Composite Index Models (social vulnerability, resilience, indirect loss)
The models developed under the GEM framework will be combined to produce estimates of hazard and risk at a global scale. Furthermore, building on many ongoing efforts and the knowledge of scientists worldwide, GEM will integrate state-of-the-art data, models, results and open-source tools into a single platform that is to serve as a "clearinghouse" on seismic risk. The platform will continue to increase in value, in particular for use in local contexts, through contributions and collaborations with scientists and organisations worldwide.
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events, and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We adopt 2-D spatially correlated Dc distributions following Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity: fault maturity is related to the variability of Dc on a microscopic scale, with large variations of Dc representing immature faults and small variations representing mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress; indeed, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, leading to favorable or unfavorable conditions for rupture growth. To capture these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations of rupture area, average slip and combined area of asperities versus moment magnitude. 
Finally, the simulated ground motions will be validated by comparison of simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.
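The scaling relations above tie rupture area and average slip to moment magnitude through the seismic moment M0 = mu * A * D. A minimal sketch with the standard Hanks-Kanamori conversion; the rupture dimensions and slip below are hypothetical Landers-like values, not the simulation results:

```python
import math

def moment_magnitude(area_km2, avg_slip_m, mu=3.0e10):
    """Mw from rupture area and average slip via M0 = mu*A*D
    (mu in Pa, M0 in N*m; Hanks-Kanamori: Mw = 2/3*(log10 M0 - 9.1))."""
    m0 = mu * area_km2 * 1e6 * avg_slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Hypothetical Landers-like rupture: ~70 km x 15 km, ~3 m average slip
print(round(moment_magnitude(70 * 15, 3.0), 2))  # → 7.25
```

With these illustrative numbers the result lands close to the Mw 7.3 target event, which is the consistency check such scaling relations provide.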
Analysis of the tsunami generated by the MW 7.8 1906 San Francisco earthquake
Geist, E.L.; Zoback, M.L.
1999-01-01
We examine possible sources of a small tsunami produced by the 1906 San Francisco earthquake, recorded at a single tide gauge station situated at the opening to San Francisco Bay. Coseismic vertical displacement fields were calculated using elastic dislocation theory for geodetically constrained horizontal slip along a variety of offshore fault geometries. Propagation of the ensuing tsunami was calculated using a shallow-water hydrodynamic model that takes into account the effects of bottom friction. The observed amplitude and negative pulse of the first arrival are shown to be inconsistent with small vertical displacements (~4-6 cm) arising from pure horizontal slip along a continuous right bend in the San Andreas fault offshore. The primary source region of the tsunami was most likely a recently recognized 3 km right step in the San Andreas fault that is also the probable epicentral region for the 1906 earthquake. Tsunami models that include the 3 km right step with pure horizontal slip match the arrival time of the tsunami, but underestimate the amplitude of the negative first-arrival pulse. Both the amplitude and time of the first arrival are adequately matched by using a rupture geometry similar to that defined for the 1995 MW (moment magnitude) 6.9 Kobe earthquake: i.e., fault segments dipping toward each other within the stepover region (83° dip, intersecting at 10 km depth) and a small component of slip in the dip direction (rake = -172°). Analysis of the tsunami provides confirming evidence that the 1906 San Francisco earthquake initiated at a right step in a right-lateral fault and propagated bilaterally, suggesting a rupture initiation mechanism similar to that for the 1995 Kobe earthquake.
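The arrival-time matching described above rests on the shallow-water (long-wave) approximation, in which the tsunami phase speed depends only on water depth, c = sqrt(g*h). A minimal sketch; the distance and depth below are hypothetical placeholders, not the study's actual bathymetry or source-to-gauge path:

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Long-wave (shallow-water) phase speed c = sqrt(g*h) in m/s."""
    return math.sqrt(g * depth_m)

def travel_time_min(distance_km, depth_m):
    """Travel time in minutes across water of roughly uniform depth."""
    return distance_km * 1000 / tsunami_speed(depth_m) / 60

# Hypothetical values: ~20 km from an offshore source to a tide gauge
# across ~50 m deep shelf water
print(round(travel_time_min(20, 50), 1))
```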
The 2013, Mw 7.7 Balochistan earthquake, energetic strike-slip reactivation of a thrust fault
NASA Astrophysics Data System (ADS)
Avouac, Jean-Philippe; Ayoub, Francois; Wei, Shengji; Ampuero, Jean-Paul; Meng, Lingsen; Leprince, Sebastien; Jolivet, Romain; Duputel, Zacharie; Helmberger, Don
2014-04-01
We analyse the Mw 7.7 Balochistan earthquake of 09/24/2013 based on ground surface deformation measured from sub-pixel correlation of Landsat-8 images, combined with back-projection and finite source modeling of teleseismic waveforms. The earthquake nucleated south of the Chaman strike-slip fault and propagated southwestward along the Hoshab fault at the front of the Kech Band. The rupture was mostly unilateral, propagated at 3 km/s on average and produced a 200 km surface fault trace with purely strike-slip displacement peaking at 10 m and averaging around 6 m. The finite source model shows that slip was maximum near the surface. Although the Hoshab fault dips 45° to the north, in accordance with its origin as a thrust fault within the Makran accretionary prism, slip was nearly purely strike-slip during this earthquake. Large seismic slip on such a non-optimally oriented fault was possibly enhanced by the influence of the free surface on dynamic stresses or by particular properties of the fault zone allowing for strong dynamic weakening. Strike-slip faulting on a thrust fault within the eastern Makran is interpreted as a result of eastward extrusion of the accretionary prism as it bulges out over the Indian plate. Portions of the Makran megathrust, some thrust faults in the Kirthar range and strike-slip faults within the Chaman fault system have been brought closer to failure by this earthquake. Aftershocks cluster within the Chaman fault system north of the epicenter, opposite to the direction of rupture propagation. By contrast, few aftershocks were detected in the area of maximum moment release. In this example, aftershocks cannot be used to infer earthquake characteristics.
P and S wave Coda Calibration in Central Asia and South Korea
NASA Astrophysics Data System (ADS)
Kim, D.; Mayeda, K.; Gok, R.; Barno, J.; Roman-Nieves, J. I.
2017-12-01
Empirically derived coda source spectra provide unbiased, absolute moment magnitude (Mw) estimates for events that are normally too small for accurate long-period waveform modeling. In this study, we obtain coda-derived source spectra using data from Central Asia (Kyrgyzstan networks KN and KR, and Tajikistan, TJ) and South Korea (Korea Meteorological Administration, KMA), using a recently developed coda calibration module of the Seismic WaveForm Tool (SWFT). Seismic activity during the recording period includes the recent Gyeongju earthquake of Mw 5.3 and its aftershocks, two nuclear explosions from 2009 and 2013 in North Korea, and a small number of construction- and mining-related explosions. For calibration, we calculated synthetic coda envelopes for both P and S waves based on a simple analytic expression that fits the observed narrowband-filtered envelopes, using the method outlined in Mayeda et al. (2003). To provide an absolute scale for the resulting source spectra, path and site corrections were applied using independent spectral constraints (e.g., Mw and stress drop) from three Kyrgyzstan events and from the largest events of the Gyeongju sequence, for Central Asia and South Korea respectively. In spite of major tectonic differences, stable source spectra were obtained in both regions. We validated the resulting spectra by comparing the ratio of raw envelopes with source spectra from calibrated envelopes. Spectral shapes of earthquakes and explosions show different patterns in both regions. We also find that (1) the source spectra derived from the S coda are more robust at low frequencies than those from the P coda; (2) unlike for earthquakes, the source spectra of explosions show a large disagreement between P and S waves; and (3) the 2016 Gyeongju sequence is similar to the 2011 Virginia earthquake sequence in the eastern U.S.
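The spectral constraints mentioned above (Mw and stress drop) are usually expressed through an omega-square source model: the displacement spectrum is flat at the moment level below a corner frequency, and the stress drop follows from the moment and corner frequency via a source radius. A minimal sketch using the Brune (1970) model; the moment, corner frequency, shear velocity, and the 0.37 constant are illustrative textbook values, not the study's calibration:

```python
import math

def brune_spectrum(f, m0, fc):
    """Brune omega-square source displacement spectrum (N*m)."""
    return m0 / (1.0 + (f / fc) ** 2)

def stress_drop(m0, fc, beta=3500.0):
    """Brune stress drop in Pa from moment (N*m) and corner frequency (Hz),
    via source radius r = 0.37*beta/fc (constants vary in the literature)."""
    r = 0.37 * beta / fc
    return 7.0 * m0 / (16.0 * r ** 3)

# Hypothetical Mw ~5.3-like event: M0 ~ 1.1e17 N*m, fc ~ 1 Hz
m0, fc = 1.1e17, 1.0
print(f"{stress_drop(m0, fc) / 1e6:.1f} MPa")  # → 22.2 MPa
```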
NASA Astrophysics Data System (ADS)
Gümüş, Ayla; Yalım, Hüseyin Ali
2018-02-01
Radon emanation occurs in all rocks and soils containing uranium. Anomalies in radon concentration before earthquakes are observed at fault lines, geothermal sources, uranium deposits, and sites of volcanic activity. The aim of this study is to investigate the relationship between radon anomalies in water sources and the radial distances of those sources from the earthquake center. For this purpose, radon concentrations at 9 different deep water sources near the Akşehir fault line were determined from samples taken at monthly intervals over two years, and the relationship between the radon anomalies and the radial distance of each source from the earthquake center was obtained.
USGS National Seismic Hazard Maps
Frankel, A.D.; Mueller, C.S.; Barnhard, T.P.; Leyendecker, E.V.; Wesson, R.L.; Harmsen, S.C.; Klein, F.W.; Perkins, D.M.; Dickman, N.C.; Hanson, S.L.; Hopper, M.G.
2000-01-01
The U.S. Geological Survey (USGS) recently completed new probabilistic seismic hazard maps for the United States, including Alaska and Hawaii. These hazard maps form the basis of the probabilistic component of the design maps used in the 1997 edition of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, prepared by the Building Seismic Safety Council and published by FEMA. The hazard maps depict peak horizontal ground acceleration and spectral response at 0.2, 0.3, and 1.0 sec periods, with 10%, 5%, and 2% probabilities of exceedance in 50 years, corresponding to return times of about 500, 1000, and 2500 years, respectively. In this paper we outline the methodology used to construct the hazard maps. There are three basic components to the maps. First, we use spatially smoothed historic seismicity as one portion of the hazard calculation. In this model, we apply the general observation that moderate and large earthquakes tend to occur near areas of previous small or moderate events, with some notable exceptions. Second, we consider large background source zones based on broad geologic criteria to quantify hazard in areas with little or no historic seismicity, but with the potential for generating large events. Third, we include the hazard from specific fault sources. We use about 450 faults in the western United States (WUS) and derive recurrence times from either geologic slip rates or the dating of pre-historic earthquakes from trenching of faults or other paleoseismic methods. Recurrence estimates for large earthquakes in New Madrid and Charleston, South Carolina, were taken from recent paleoliquefaction studies. We used logic trees to incorporate different seismicity models, fault recurrence models, Cascadia great earthquake scenarios, and ground-motion attenuation relations. 
We present disaggregation plots showing the contribution to hazard at four cities from potential earthquakes with various magnitudes and distances.
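The correspondence between exceedance probability and return time quoted above follows from the Poisson occurrence assumption, p = 1 - exp(-t/T), inverted as T = -t / ln(1 - p). A minimal sketch reproducing the map levels:

```python
import math

def return_period(p_exceed, t_years):
    """Return period implied by exceedance probability p in t years,
    assuming Poissonian occurrence: T = -t / ln(1 - p)."""
    return -t_years / math.log(1.0 - p_exceed)

for p in (0.10, 0.05, 0.02):
    print(f"{p:.0%} in 50 yr -> {return_period(p, 50):.0f} yr")
# reproduces the ~500-, 1000-, and 2500-year figures in the text
# (475, 975, and 2475 yr, conventionally rounded)
```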
A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta
NASA Astrophysics Data System (ADS)
Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.
2015-12-01
Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated for observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
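A trilinear geometrical-spreading function of the kind described above is piecewise linear in log amplitude vs. log distance, continuous at two hinge distances (the flattened middle segment accommodates the Moho-bounce effect). A minimal sketch; the hinge distances and slopes below are hypothetical placeholders, not the study's calibrated values:

```python
import numpy as np

def trilinear_spreading(r, r1=70.0, r2=140.0, b1=-1.3, b2=0.2, b3=-0.5):
    """Trilinear geometric-spreading term Z(r) in log10 amplitude units,
    continuous at the hinge distances r1 and r2 (km)."""
    r = np.asarray(r, dtype=float)
    return np.where(
        r <= r1, b1 * np.log10(r),
        np.where(
            r <= r2,
            b1 * np.log10(r1) + b2 * np.log10(r / r1),
            b1 * np.log10(r1) + b2 * np.log10(r2 / r1) + b3 * np.log10(r / r2),
        ),
    )

# Decay relative to 1 km: steep, then flat (Moho bounce), then moderate
print(trilinear_spreading([10, 70, 100, 200]))
```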
Detecting metastable olivine wedge beneath Japan Sea with deep earthquake coda wave interferometry
NASA Astrophysics Data System (ADS)
Shen, Z.; Zhan, Z.
2017-12-01
It has been hypothesized for decades that the lower-pressure olivine phase may kinetically persist in the interior of a slab into the transition zone, forming a low-velocity "metastable olivine wedge" (MOW). The MOW, if it exists, would play a critical role in generating deep earthquakes and in parachuting subducted slabs with its buoyancy. However, seismic evidence for the MOW is still controversial, and it has been suggested that the MOW can only be detected using broadband waveforms, given wavefront-healing effects on travel times. On the other hand, broadband waveforms are often complicated by shallow heterogeneities. Here we propose a new method using source-side interferometry of deep earthquake coda to detect the MOW. In this method, deep earthquakes are turned into virtual sensors with the reciprocity theorem, and the transient strain from one earthquake to the other is estimated by cross-correlating the coda of the deep earthquake pair at the same stations. This approach effectively isolates near-source structure from complicated shallow structure and hence provides finer resolution of deep slab structure. We apply this method to the Japan subduction zone with Hi-net data, and our preliminary result does not support a large MOW model (100 km thick at 410 km depth) as suggested by several previous studies. Metastable olivine at small scales, or distributed in an incoherent manner in deep slabs, may still be possible.
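The core operation of the interferometric method above is cross-correlating the coda of an earthquake pair recorded at the same stations, with the correlation peak giving the differential travel time between the two sources. A minimal sketch on synthetic data (the window length, noise, and 5-sample delay are illustrative, not Hi-net processing parameters):

```python
import numpy as np

def coda_correlation(coda_a, coda_b):
    """Normalized cross-correlation of two coda windows recorded at the
    same station; the peak lag approximates the differential travel time
    (in samples) between the two sources."""
    a = (coda_a - coda_a.mean()) / coda_a.std()
    b = (coda_b - coda_b.mean()) / coda_b.std()
    cc = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a))
    return lags[np.argmax(cc)], cc.max()

# Synthetic test: the second coda equals the first delayed by 5 samples
rng = np.random.default_rng(1)
w = rng.standard_normal(500)
shifted = np.roll(w, 5)
lag, peak = coda_correlation(shifted, w)
print(lag)  # → 5
```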
NASA Astrophysics Data System (ADS)
Kagawa, T.; Petukhin, A.; Koketsu, K.; Miyake, H.; Murotani, S.; Tsurugi, M.
2010-12-01
A three-dimensional velocity structure model of southwest Japan is provided to simulate long-period ground motions due to hypothetical subduction earthquakes. The model is constructed from numerous physical explorations conducted in land and offshore areas and from observational studies of natural earthquakes. All available information is incorporated to represent the crustal and sedimentary structure. Figure 1 shows an example cross section with P-wave velocities. The model has been revised through numerous simulations of small to moderate earthquakes so as to achieve good agreement with observed arrival times, amplitudes, and waveforms, including surface waves. Figure 2 shows a comparison between observed (dashed line) and simulated (solid line) waveforms. Low-velocity layers have been added above the seismological basement to reproduce the observed records, and their thickness has been adjusted through iterative analysis. The final result agrees well with the results of other physical explorations, e.g., gravity anomalies. We are planning long-period (about 2 to 10 s or longer) simulations of ground motion due to the hypothetical Nankai earthquake with the 3-D velocity structure model. As a first step, we will simulate the observed ground motions of the most recent event, which occurred in 1946, to check the source model and the newly developed velocity structure model. This project is partly supported by the Integrated Research Project for Long-Period Ground Motion Hazard Maps of the Ministry of Education, Culture, Sports, Science and Technology (MEXT). The ground motion data used in this study were provided by the National Research Institute for Earth Science and Disaster Prevention (NIED). Figure 1: An example cross section with P-wave velocities. Figure 2: Observed (dashed line) and simulated (solid line) waveforms due to a small earthquake.
High Attenuation Rate for Shallow, Small Earthquakes in Japan
NASA Astrophysics Data System (ADS)
Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe
2017-09-01
We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the above differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, the apparent attenuation rate for a moderate or large earthquake is essentially the same as that of body waves propagating in a homogeneous medium, due to the dominance of surface waves.
Coda Q Attenuation and Source Parameters Analysis in North East India Using Local Earthquakes
NASA Astrophysics Data System (ADS)
Mohapatra, A. K.; Mohanty, W. K.
2010-12-01
In the present study, the quality factor of coda waves (Qc) and the source parameters have been estimated for Northeastern India using digital data from ten local earthquakes recorded between April 2001 and November 2002. Earthquakes with magnitudes ranging from 3.8 to 4.9 have been taken into account. The time-domain coda-decay method of a single back-scattering model is used to calculate frequency-dependent values of coda Q (Qc), whereas the source parameters, namely seismic moment (Mo), stress drop, source radius (r), radiant energy (Wo), and strain drop, are estimated from the displacement amplitude spectra of body waves using Brune's model. Qc is estimated at six central frequencies: 1.5 Hz, 3.0 Hz, 6.0 Hz, 9.0 Hz, 12.0 Hz, and 18.0 Hz. In the present work, the Qc values of local earthquakes are estimated to understand the attenuation characteristics, source parameters, and tectonic activity of the region. Based on the criterion of homogeneity in geological characteristics and the constraints imposed by the distribution of available events, the study region has been classified into three zones: the Tibetan Plateau Zone (TPZ), the Bengal Alluvium and Arakan-Yuma Zone (BAZ), and the Shillong Plateau Zone (SPZ). Qc follows the power law Qc = Q0 (f/f0)^n, where Q0 is the quality factor at the reference frequency f0 (1 Hz) and n is the frequency parameter, which varies from region to region. The mean values of Qc reveal a dependence on frequency, varying from 292.9 at 1.5 Hz to 4880.1 at 18 Hz.
The average frequency-dependent relationship obtained for Northeastern India is Qc = 198 f^1.035, while this relationship varies from region to region: Tibetan Plateau Zone (TPZ), Qc = 226 f^1.11; Bengal Alluvium and Arakan-Yuma Zone (BAZ), Qc = 301 f^0.87; Shillong Plateau Zone (SPZ), Qc = 126 f^0.85. This indicates that Northeastern India is seismically active; comparing the zones, the Shillong Plateau Zone (Qc = 126 f^0.85) is the most active, the Bengal Alluvium and Arakan-Yuma Zone is the least active, and the Tibetan Plateau Zone is intermediate. This study may be useful for seismic hazard assessment. The estimated seismic moments (Mo) range from 5.98×10^20 to 3.88×10^23 dyne-cm. The source radii (r) are confined between 152 and 1750 m, the stress drop ranges between 0.0003×10^3 and 1.04×10^3 bar, the average radiant energy is 82.57×10^18 ergs, and the strain drop ranges from 0.00602×10^-9 to 2.48×10^-9. The estimated stress-drop values for NE India are scattered for the larger seismic moments, whereas they behave more systematically for the smaller seismic moments. The estimated source parameters are in agreement with previous work in this type of tectonic setting. Key words: coda wave, seismic source parameters, lapse time, single back-scattering model, Brune's model, stress drop, Northeast India.
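The power law Qc = Q0 (f/f0)^n can be fit by linear regression in log-log space. In the sketch below, only the two endpoint Qc values (292.9 at 1.5 Hz and 4880.1 at 18 Hz) come from the abstract; the interior values are hypothetical interpolations, so the recovered Q0 and n only roughly approach the reported regional values.

```python
import numpy as np

# Mean Qc at the six central frequencies; the interior four values are
# hypothetical (only the 1.5 Hz and 18 Hz means are quoted in the abstract)
f  = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])                # Hz
qc = np.array([292.9, 450.0, 900.0, 1500.0, 2200.0, 4880.1])

# Least-squares fit of log10 Qc = log10 Q0 + n * log10 f  (f0 = 1 Hz)
n_exp, logq0 = np.polyfit(np.log10(f), np.log10(qc), 1)
q0 = 10.0 ** logq0
print(q0, n_exp)   # with these placeholder values: Q0 ~ 150, n ~ 1.1
```

A separate fit per zone (TPZ, BAZ, SPZ) would yield the regional relationships quoted above.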
Tsunami Source Identification on the 1867 Tsunami Event Based on the Impact Intensity
NASA Astrophysics Data System (ADS)
Wu, T. R.
2014-12-01
The 1867 Keelung tsunami event has drawn significant attention in Taiwan, not only because its location was very close to three nuclear power plants that lie only about 20 km from Taipei, but also because of the ambiguity of its tsunami sources. This event is unique in many respects. First, it was documented in many accounts, in several languages, with similar descriptions. Second, its tsunami deposit was discovered recently. Based on the literature, an earthquake, a 7-meter tsunami height, volcanic smoke, and oceanic smoke were observed. Previous studies concluded that this tsunami was generated by an earthquake of magnitude around Mw 7.0 along the Shanchiao Fault. However, numerical results showed that even a Mw 8.0 earthquake is not able to generate a 7-meter tsunami. Considering the steep bathymetry and intense volcanic activity along the Keelung coast, one reasonable hypothesis is that tsunami sources of different types existed, such as a submarine landslide or a volcanic eruption. To test this scenario, last year we proposed the Tsunami Reverse Tracing Method (TRTM) to find the possible locations of the tsunami sources. This method helped us rule out impossible far-field tsunami sources; however, the near-field sources remained unclear. This year, we further developed a new method named Impact Intensity Analysis (IIA). In the IIA method, the study area is divided into a sequence of tsunami sources, and numerical simulations for each source are conducted with the COMCOT (Cornell Multi-grid Coupled Tsunami Model) tsunami model. The resulting wave height from each source at the study site is then collected and plotted. This method successfully identified the impact from the near-field potential sources. The IIA result (Fig. 1) shows that the 1867 tsunami event was a multi-source event.
A mild tsunami was triggered by a Mw 7.0 earthquake and then followed by submarine landslide or volcanic events. A near-field submarine landslide and a landslide at Mien-Hwa Canyon are the most plausible scenarios. As for the volcano scenarios, a volcanic eruption located about 10 km from Keelung with 2.5×10^8 m^3 of disturbed water volume might be a candidate. Detailed scenario results will be presented in the full paper.
Earthquake location in island arcs
Engdahl, E.R.; Dewey, J.W.; Fujita, K.
1982-01-01
A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. 
The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high-velocity lithospheric slab. In application, JHD has the practical advantage that it does not require the specification of a theoretical velocity model for the slab. Considering earthquakes within a 260 km long by 60 km wide section of the Aleutian main thrust zone, our results suggest that the theoretical velocity structure of the slab is presently not sufficiently well known that accurate locations can be obtained independently of locally recorded data. Using a locally recorded earthquake as a calibration event, JHD gave excellent results over the entire section of the main thrust zone here studied, without showing a strong effect that might be attributed to spatially varying source-station anomalies. We also calibrated the ray-tracing method using locally recorded data and obtained results generally similar to those obtained by JHD. © 1982.
Hydrothermal response to a volcano-tectonic earthquake swarm, Lassen, California
Ingebritsen, Steven E.; Shelly, David R.; Hsieh, Paul A.; Clor, Laura; P.H. Seward,; Evans, William C.
2015-01-01
The increasing capability of seismic, geodetic, and hydrothermal observation networks allows recognition of volcanic unrest that could previously have gone undetected, creating an imperative to diagnose and interpret unrest episodes. A November 2014 earthquake swarm near Lassen Volcanic National Park, California, which included the largest earthquake in the area in more than 60 years, was accompanied by a rarely observed outburst of hydrothermal fluids. Although the earthquake swarm likely reflects upward migration of endogenous H2O-CO2 fluids in the source region, there is no evidence that such fluids emerged at the surface. Instead, shaking from the modest sized (moment magnitude 3.85) but proximal earthquake caused near-vent permeability increases that triggered increased outflow of hydrothermal fluids already present and equilibrated in a local hydrothermal aquifer. Long-term, multiparametric monitoring at Lassen and other well-instrumented volcanoes enhances interpretation of unrest and can provide a basis for detailed physical modeling.
Response of high-rise and base-isolated buildings to a hypothetical Mw 7.0 blind thrust earthquake
Heaton, T.H.; Hall, J.F.; Wald, D.J.; Halling, M.W.
1995-01-01
High-rise flexible-frame buildings are commonly considered to be resistant to shaking from the largest earthquakes. In addition, base isolation has become increasingly popular for critical buildings that should still function after an earthquake. How will these two types of buildings perform if a large earthquake occurs beneath a metropolitan area? To answer this question, we simulated the near-source ground motions of a Mw 7.0 thrust earthquake and then mathematically modeled the response of a 20-story steel-frame building and a 3-story base-isolated building. The synthesized ground motions were characterized by large displacement pulses (up to 2 meters) and large ground velocities. These ground motions caused large deformation and possible collapse of the frame building, and they required exceptional measures in the design of the base-isolated building if it was to remain functional.
Satake, K.; Wang, K.; Atwater, B.F.
2003-01-01
The 1700 Cascadia earthquake attained moment magnitude 9 according to new estimates based on the effects of its tsunami in Japan, computed coseismic seafloor deformation for hypothetical ruptures in Cascadia, and tsunami modeling in the Pacific Ocean. Reports of damage and flooding show that the 1700 Cascadia tsunami reached heights of 1-5 m at seven shoreline sites in Japan. Three sets of estimated heights express uncertainty about the location and depth of reported flooding, the landward decline in tsunami heights from shorelines, and post-1700 land-level changes. We compare each set with tsunami heights computed from six Cascadia sources. Each source is a vertical seafloor displacement calculated with a three-dimensional elastic dislocation model; for three sources the rupture extends the 1100 km length of the subduction zone and differs in width and shallow dip, while for the other sources, ruptures of ordinary width extend 360-670 km. To compute tsunami waveforms, we use a linear long-wave approximation with a finite difference method, and we employ modern bathymetry with nearshore grid spacing as small as 0.4 km. The various combinations of Japanese tsunami heights and Cascadia sources give seismic moments of 1-9 × 10^22 N m, equivalent to moment magnitude 8.7-9.2. This range excludes several unquantified uncertainties. The most likely earthquake, of moment magnitude 9.0, has 19 m of coseismic slip on an offshore, full-slip zone 1100 km long, with linearly decreasing slip on a downdip partial-slip zone. The shorter rupture models require up to 40 m of offshore slip and predict land-level changes inconsistent with coastal paleoseismological evidence. Copyright 2003 by the American Geophysical Union.
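The conversion between the quoted seismic moments and moment magnitudes follows the standard moment-magnitude definition, Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m. A quick check reproduces the upper bound exactly; the lower bound lands near 8.6-8.7 depending on rounding and the exact moment used.

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment (M0 in N·m):
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(moment_magnitude(1e22), 1))   # 8.6
print(round(moment_magnitude(9e22), 1))   # 9.2
```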
NASA Astrophysics Data System (ADS)
Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.
2014-12-01
Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained with different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from waveform data in multiple period bands using a common inversion method and discussed its period-dependent source characteristics. In this study, to resolve the spatiotemporal rupture behavior of this event in more detail, we introduce a new fault-surface model with a finer sub-fault size and estimate the source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault-surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 × 16 km^2. The estimated source models in the multiple period bands show the following source image: (1) a first deep rupture off Miyagi at 0-60 s propagating down-dip, mostly radiating relatively short-period (10-25 s) seismic waves; (2) a shallow rupture off Miyagi at 45-90 s propagating up-dip with long duration, radiating long-period (50-100 s) seismic waves; (3) a second deep rupture off Miyagi at 60-105 s propagating down-dip, radiating longer-period seismic waves than the first deep rupture;
(4) a deep rupture off Fukushima at 90-135 s. The difference in the dominant period of seismic-wave radiation between the two deep ruptures off Miyagi may result from small-scale heterogeneities on the fault being removed by the first rupture. This difference can also be interpreted with the concept of multi-scale dynamic rupture (Ide & Aochi, 2005).
Near Field Modeling for the Maule Tsunami from DART, GPS and Finite Fault Solutions (Invited)
NASA Astrophysics Data System (ADS)
Arcas, D.; Chamberlin, C.; Lagos, M.; Ramirez-Herrera, M.; Tang, L.; Wei, Y.
2010-12-01
The earthquake and tsunami of February 27, 2010, in central Chile have rekindled interest in developing techniques to predict the impact of near-field tsunamis along the Chilean coastline. Following the earthquake, several initiatives were proposed to increase the density of seismic, pressure, and motion sensors along the South American trench, in order to provide field data that could be used to estimate tsunami impact on the coast. However, the precise use of those data in producing a quantitative assessment of coastal tsunami damage has not been clarified. The present work uses seismic data, Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems, and GPS measurements obtained during the Maule earthquake to initialize a number of tsunami inundation models along the rupture area, by expressing different versions of the seismic crustal deformation in terms of NOAA's tsunami unit source functions. Translation of all available real-time data into a feasible tsunami source is essential for near-field tsunami impact prediction, in which an impact assessment must be generated under very stringent time constraints. Inundation results from each source are then contrasted with field and tide-gauge data by comparing arrival time, maximum wave height, maximum inundation, and tsunami decay rate, using field data collected by the authors.
Ground motion in the presence of complex topography: Earthquake and ambient noise sources
Hartzell, Stephen; Meremonte, Mark; Ramírez-Guzmán, Leonardo; McNamara, Daniel
2014-01-01
To study the influence of topography on ground motion, eight seismic recorders were deployed for a period of one year over Poverty Ridge on the east side of the San Francisco Bay Area, California. This location is desirable because of its proximity to local earthquake sources and the significant topographic relief of the array (439 m). Topographic amplification is evaluated as a function of frequency using a variety of methods, including reference‐site‐based spectral ratios and single‐station horizontal‐to‐vertical spectral ratios using both shear waves from earthquakes and ambient noise. Field observations are compared with the predicted ground motion from an accurate digital model of the topography and a 3D local velocity model. Amplification factors from the theoretical calculations are consistent with observations. The fundamental resonance of the ridge is prominently observed in the spectra of data and synthetics; however, higher‐frequency peaks are also seen primarily for sources in line with the major axis of the ridge, perhaps indicating higher resonant modes. Excitations of lateral ribs off of the main ridge are also seen at frequencies consistent with their dimensions. The favored directions of resonance are shown to be transverse to the major axes of the topographic features.
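A single-station horizontal-to-vertical spectral ratio of the kind used above can be sketched as follows. The boxcar smoothing and the root-mean-square merging of the two horizontal components are common choices for this type of analysis, not necessarily those of the study.

```python
import numpy as np

def hv_ratio(h1, h2, v, dt, smooth=5):
    """Single-station horizontal-to-vertical spectral ratio.

    h1, h2: horizontal components; v: vertical component; dt: sample
    interval (s). Returns frequencies and the smoothed H/V amplitude ratio.
    """
    n = len(v)
    freqs = np.fft.rfftfreq(n, dt)
    # Merge the horizontals as the RMS of their amplitude spectra
    h = np.sqrt((np.abs(np.fft.rfft(h1)) ** 2
                 + np.abs(np.fft.rfft(h2)) ** 2) / 2.0)
    vs = np.abs(np.fft.rfft(v))
    kernel = np.ones(smooth) / smooth      # simple boxcar smoothing
    h = np.convolve(h, kernel, mode="same")
    vs = np.convolve(vs, kernel, mode="same")
    return freqs, h / np.maximum(vs, 1e-12)
```

Peaks of the ratio as a function of frequency would then be compared with the resonant frequencies expected from the ridge dimensions.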
NASA Astrophysics Data System (ADS)
Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian
2017-11-01
Faults are among the dangerous earthquake sources that can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. Following the research conducted by the Team for Revision of the Seismic Hazard Maps of Indonesia in 2010 and 2016, the Lasem, Demak, and Semarang faults are the three earthquake sources closest to Semarang. The ground motion from these three sources should be taken into account in structural design and evaluation. Most tall buildings in Semarang, at least 40 meters high, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of a sensitivity analysis with emphasis on predicting the deformation and inter-story drift of existing tall buildings in the city under fault earthquakes. The analysis was performed by conducting dynamic structural analyses of eight tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw; they were used because recorded time histories for these three fault sources are inadequate. The sensitivity of a building to an earthquake can be predicted by comparing the surface response spectra calculated using the seismic code with the surface response spectra calculated from the acceleration time histories of a specific earthquake event. If the former exceed the latter, the structure will be stable enough to resist the earthquake force.
Characterising large scenario earthquakes and their influence on NDSHA maps
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.
2016-04-01
The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground-motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground-shaking parameters at bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps account in a quite straightforward manner for the largest observed or credible earthquake sources identified in the region. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones, and seismogenic nodes.
The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion is therefore a factor of two, intrinsic in MCS and other discrete scales. A simple test supports this hypothesis: an increase of 0.5 in the magnitude, i.e. one degree in epicentral MCS intensity, of all sources used in the national-scale seismic zoning produces a doubling of the maximum ground motion. The analysis of uncertainty in ground-motion maps, due to random catalogue errors in magnitude and location, shows a non-uniform distribution of ground-shaking uncertainty. The available information from catalogues of past events, which is incomplete and may well not be representative of future earthquakes, can be substantially complemented using independent indicators of the seismogenic potential of a given area, such as active-faulting data and seismogenic nodes.
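The envelope definition of the NDSHA hazard map, the maximum of a ground-shaking parameter over all scenario sources at each site, reduces to a per-site maximum over the simulated grids. The values below are random placeholders standing in for peak parameters extracted from synthetic seismograms.

```python
import numpy as np

# Hypothetical peak ground-motion values: one row per scenario source,
# one column per map site (placeholders for synthetic-seismogram peaks)
rng = np.random.default_rng(1)
n_sources, n_sites = 5, 100
pgv_per_source = rng.lognormal(mean=0.0, sigma=1.0, size=(n_sources, n_sites))

# NDSHA-style map value: the envelope (maximum) over all scenario
# sources at each site
hazard_map = pgv_per_source.max(axis=0)
```

Repeating this with perturbed source magnitudes (e.g. +0.5 units) would show how catalogue uncertainty propagates into the envelope map.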
Relating stress models of magma emplacement to volcano-tectonic earthquakes
NASA Astrophysics Data System (ADS)
Vargas-Bracamontes, D.; Neuberg, J.
2007-12-01
Among the various types of seismic signals linked to volcanic processes, volcano-tectonic earthquakes are probably the earliest precursors of volcanic eruptions. Understanding their relationship with magma emplacement can provide insight into the mechanisms of magma transport at depth and assist in the ultimate goal of forecasting eruptions. Volcano-tectonic events have been observed to occur on faults that experience increases in Coulomb stress as a result of magma intrusions. To simulate the stress changes associated with magmatic injections, we test different models of volcanic sources in an elastic half-space. For each source model, we examine several aspects that influence the stress conditions of the magmatic system, such as the regional tectonic setting, the effect of varying the elastic parameters of the medium, the evolution of the magma with time, and the volume and rheology of the ascending magma.
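The Coulomb stress change invoked above is conventionally written ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the slip direction and Δσn is the normal stress change (positive for unclamping). Below is a minimal sketch with hypothetical stress values; the effective friction μ′ = 0.4 is a common assumption, not a value from the study.

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n.

    d_tau: shear stress change in the slip direction (positive promotes slip);
    d_sigma_n: normal stress change (positive = unclamping);
    mu_eff: effective friction coefficient (commonly taken as 0.2-0.8).
    """
    return d_tau + mu_eff * d_sigma_n

# A receiver fault brought closer to failure by an intrusion
# (stress changes in MPa, hypothetical)
print(round(coulomb_stress_change(0.15, 0.10), 3))  # 0.19 MPa, promotes failure
```

A positive ΔCFS on a mapped fault would be consistent with the observed triggering of volcano-tectonic events by the modelled intrusion.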
High Resolution Tsunami Modelling for the Evaluation of Potential Risk Areas in Setubal
NASA Astrophysics Data System (ADS)
Ribeiro, João.; Silva, Adélio; Leitão, Paulo
2010-05-01
Modeling has a relevant role in today's natural-hazard mitigation planning, as it can cover a wide range of natural phenomena, including events such as tsunamis. In order to support urban planning or prepare emergency response plans, it is of major importance to be able to properly evaluate the vulnerability associated with different areas and/or facilities. High-resolution models can provide relevant information about the most probable inundation areas which, complemented with other data such as building types and the location of priority facilities, may effectively contribute to better identifying the most vulnerable zones, defining rescue and escape routes, and adapting emergency plans to the constraints associated with this type of event. In the framework of the FP6 SCHEMA project these concepts are being applied to different test sites, and a detailed evaluation of the vulnerability of buildings and people to a tsunami event is being carried out. One of the selected sites is located in Portugal, on the Atlantic coast: the Setúbal area, about 40 km south of Lisbon. Within this site two specific locations are being evaluated: the city of Setúbal (on the right margin of the Sado estuary) and the Tróia peninsula (on the left margin of the Sado estuary). Setúbal is a medium-sized town with about 114,000 inhabitants, while Tróia is a touristic resort located in a shallow area with a high seasonal occupation; the river Sado is one of the main sources of income for the city. Setúbal was one of the Portuguese towns seriously damaged by the 1755 earthquake. That earthquake, also known as the Great Lisbon Earthquake, took place on 1 November 1755, the Catholic holiday of All Saints, around 09:30 AM.
The earthquake was followed by a tsunami and fires that caused huge destruction in Lisbon and Setúbal. In the framework of the present study, a detailed evaluation of the site's vulnerability to a tsunami event was performed based on wave heights, building types, and the characteristics of access routes. The wave heights and most probable inundation areas were obtained from simulations of three potential earthquake sources with different levels of impact (extreme, moderate, and weak) in the Setúbal area. For the extreme event, the selected source corresponds to an interpretation of the origin of the 1755 earthquake proposed by Baptista et al. (2003), which suggests that the 1755 tsunami had two sources: one located on the Marques de Pombal thrust fault (MPTF) and a second located on the Guadalquivir Bank. The other two sources are based on a study by Omira et al. (2009) on the design of a sea-level tsunami detection network for the Gulf of Cadiz, which analyses different areas of seismic activity in the south of Portugal and proposes possible earthquake sources and their characteristics. The tsunami propagation simulations were performed using the MOHID modelling system, an open-source three-dimensional water modelling system developed by Hidromod and MARETEC (Marine and Environmental Technology Research Center, Technical University of Lisbon). As a result of the study, detailed inundation maps associated with the different events and with different tide levels were produced. By combining these maps with the available information on city infrastructure (building types, road and street characteristics, priority buildings, etc.), high-resolution vulnerability maps, escape routes, and emergency route maps were also produced.
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
IAEA Specific Safety Guide No. 9 (SSG-9) describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX to SSG-9 on seismic hazard for nuclear facilities, showing the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacement arose. In this study, we introduce the concept of these evaluation methods for fault displacement. We then show some tentative analysis results for the deterministic method, as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) that can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on the characterized source model. This research was part of the 2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
Overview of seismic potential in the central and eastern United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweig, E.S.
1995-12-31
The seismic potential of any region can be framed in terms of the locations of source zones, the frequency of earthquake occurrence for each source, and the maximum size of earthquake that can be expected from each source. As delineated by modern and historical seismicity, the most important seismic source zones affecting the eastern United States include the New Madrid and Wabash Valley seismic zones of the central U.S., the southern Appalachians and the Charleston, South Carolina, areas in the southeast, and the northern Appalachians and Adirondacks in the northeast. The most prominent of these, in terms of current seismicity and historical seismic moment release, is the New Madrid seismic zone, which produced three earthquakes of moment magnitude ≥ 8 in 1811 and 1812. The frequency of earthquake recurrence can be examined using the instrumental record, the historical record, and the geological record. Each record covers a unique time period and has a different scale of temporal resolution and completeness. The Wabash Valley is an example where the long-term geological record indicates a greater potential than the instrumental and historical records. This points to the need to examine all of the evidence in a region in order to obtain credible estimates of earthquake hazards. Although earthquake hazards may be dominated by mid-magnitude 6 earthquakes within the mapped seismic source zones, the 1994 Northridge, California, earthquake is just the most recent example of the danger of assuming that future events will occur only on faults known to have had past events, and of how destructive such an earthquake can be.
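Recurrence frequencies of the kind discussed above are commonly summarized with the Gutenberg-Richter relation, log10 N(M) = a − bM, where N is the annual rate of events at or above magnitude M. A minimal sketch, with the a- and b-values chosen purely for illustration:

```python
def gr_annual_rate(m, a, b):
    """Annual rate of events with magnitude >= m, from log10(N) = a - b*m."""
    return 10.0 ** (a - b * m)

def recurrence_interval_years(m, a, b):
    """Mean recurrence interval (years) for events of magnitude >= m."""
    return 1.0 / gr_annual_rate(m, a, b)

# Hypothetical regional a- and b-values, for illustration only
a, b = 3.0, 1.0
# M >= 6: rate = 10^(3 - 6) = 0.001 per year, i.e. a 1000-year recurrence
print(round(recurrence_interval_years(6.0, a, b)))  # 1000
```

Instrumental, historical, and geological records constrain such parameters over very different time windows, which is why all three must be compared.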
Geist, E.; Yoshioka, S.
1996-01-01
The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional finite-element models. The propagation of the resulting tsunami is modeled both near the coastline, using the two-dimensional (x-t) Peregrine equations, which include the effects of dispersion, and near the source, using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis, but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated from anomalous 'tsunami earthquakes' that rupture within the accretionary wedge, in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.
Revisiting the 1761 Transatlantic Tsunami
NASA Astrophysics Data System (ADS)
Baptista, Maria Ana; Wronna, Martin; Miranda, Jorge Miguel
2016-04-01
The tsunami catalogs of the Atlantic include two transatlantic tsunamis in the 18th century: the well-known 1st November 1755 event and the 31st March 1761 event. The 31st March 1761 earthquake struck Portugal, Spain, and Morocco. The earthquake occurred around noontime in Lisbon, alarming the inhabitants and throwing down ruins left by the past 1st November 1755 earthquake. According to several sources, the earthquake was followed by a tsunami observed as far away as Cornwall (United Kingdom), Cork (Ireland), and Barbados (Caribbean). The analysis of macroseismic information and its compatibility with tsunami travel time information led to a source area close to the Ampere Seamount, with an estimated epicenter circa 34.5°N 13°W. The estimated magnitude of the earthquake was 8.5. In this study, we revisit the tsunami observations, and we include a report from Cadiz not used before. We use the results of the compilation of multi-beam bathymetric data, which covers the area between 34°N - 38°N and 12.5°W - 5.5°W, and use the recent tectonic map published for the Southwest Iberian Margin to select among possible source scenarios. Finally, we use a non-linear shallow water model with an explicit leap-frog finite-difference scheme to solve the shallow water equations in spherical or Cartesian coordinates, compute tsunami waveforms and tsunami inundation, and check the results against the historical descriptions to infer the source of the event. This study received funding from project ASTARTE - Assessment Strategy and Risk Reduction for Tsunamis in Europe, a collaborative project, Grant 603839, FP7-ENV2013 6.4-3.
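The explicit leap-frog finite-difference approach mentioned above can be illustrated in one dimension for the linear long-wave limit of the shallow water equations (η_t + h u_x = 0, u_t + g η_x = 0) on a staggered grid. This is a generic textbook sketch, not the study's model; grid size, depth, and the Gaussian initial wave are arbitrary choices:

```python
import math

def shallow_water_1d(nx=200, dx=1000.0, depth=4000.0, nt=300):
    """1-D linear long-wave solver, explicit staggered leap-frog scheme:
    free-surface elevation eta at cell centers, velocity u at cell faces.
    Closed (reflective) boundaries: u is held at zero at both ends."""
    g = 9.81
    c = math.sqrt(g * depth)      # long-wave phase speed
    dt = 0.5 * dx / c             # CFL-stable time step (Courant = 0.5)
    # Gaussian initial elevation centered mid-domain
    eta = [math.exp(-((i - nx // 2) * dx / 2e4) ** 2) for i in range(nx)]
    u = [0.0] * (nx + 1)
    for _ in range(nt):
        # momentum: du/dt = -g * d(eta)/dx  (interior faces only)
        for i in range(1, nx):
            u[i] -= dt * g * (eta[i] - eta[i - 1]) / dx
        # continuity: d(eta)/dt = -depth * du/dx
        for i in range(nx):
            eta[i] -= dt * depth * (u[i + 1] - u[i]) / dx
    return eta

eta = shallow_water_1d()
```

With zero flux through the boundaries, the scheme conserves total water volume, a useful sanity check before moving to the spherical, nonlinear, inundating case.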
Ground Motion Characteristics of Induced Earthquakes in Central North America
NASA Astrophysics Data System (ADS)
Atkinson, G. M.; Assatourians, K.; Novakovic, M.
2017-12-01
The ground motion characteristics of induced earthquakes in central North America are investigated based on empirical analysis of a compiled database of 4,000,000 digital ground-motion records from events in induced-seismicity regions (especially Oklahoma). Ground-motion amplitudes are characterized non-parametrically by computing median amplitudes and their variability in magnitude-distance bins. We also use inversion techniques to solve for regional source, attenuation, and site response effects. Ground motion models are used to interpret the observations and to compare the source and attenuation attributes of induced earthquakes with those of their natural counterparts. A significant conclusion is that the stress parameter that controls the strength of high-frequency radiation is similar for induced earthquakes (depth h ~ 5 km) and shallow (h ~ 5 km) natural earthquakes. By contrast, deeper natural earthquakes (h ~ 10 km) have stronger high-frequency ground motions. At distances close to the epicenter, a greater focal depth (which increases distance from the hypocenter) counterbalances the effects of a larger stress parameter, resulting in motions of similar strength close to the epicenter, regardless of event depth. The felt effects of induced versus natural earthquakes are also investigated using USGS "Did You Feel It?" reports; 400,000 reports from natural events and 100,000 reports from induced events are considered. The felt reports confirm the trends that we expect based on ground-motion modeling, considering the offsetting effects of the stress parameter versus focal depth in controlling the strength of motions near the epicenter. Specifically, felt intensity for a given magnitude is similar near the epicenter, on average, for all event types and depths. At distances more than 10 km from the epicenter, deeper events are felt more strongly than shallow events.
These ground-motion attributes imply that the induced-seismicity hazard is most critical for facilities in close proximity (<10 km) to oil and gas operations.
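The non-parametric characterization described above, median amplitudes and their variability in magnitude-distance bins, can be sketched as follows. The bin widths and the sample records are hypothetical, for illustration only:

```python
import math
import statistics
from collections import defaultdict

def median_amplitudes(records, dm=0.5, dlogr=0.2):
    """Group (magnitude, distance, amplitude) records into magnitude /
    log10-distance bins; report each bin's median amplitude and the
    spread (population std. dev.) of log10 amplitude."""
    bins = defaultdict(list)
    for mag, r_km, amp in records:
        key = (round(mag / dm) * dm,
               round(math.log10(r_km) / dlogr) * dlogr)
        bins[key].append(math.log10(amp))
    out = {}
    for key, logs in bins.items():
        sigma = statistics.pstdev(logs) if len(logs) > 1 else 0.0
        out[key] = (10 ** statistics.median(logs), sigma)
    return out

# Hypothetical (magnitude, hypocentral distance in km, PGA in g) records
recs = [(3.5, 10.0, 0.01), (3.4, 11.0, 0.02), (4.1, 50.0, 0.005)]
binned = median_amplitudes(recs)
```

Working with the median of log amplitudes makes the bin statistics robust to outliers and matches the lognormal form usually assumed for ground-motion variability.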
NASA Astrophysics Data System (ADS)
Ross, S.; Jones, L. M.; Wilson, R. I.; Bahng, B.; Barberopoulou, A.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Johnson, L. A.; Hansen, R. A.; Kirby, S. H.; Knight, E.; Knight, W. R.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Nicolsky, D.; Oglesby, D. D.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Suleimani, E. N.; Thio, H. K.; Titov, V. V.; Wein, A. M.; Whitmore, P.; Wood, N. J.
2012-12-01
The U.S. Geological Survey's Science Application for Risk Reduction (SAFRR) project, in collaboration with the California Geological Survey, the California Emergency Management Agency, the National Oceanic and Atmospheric Administration, and other agencies and institutions, is developing a Tsunami Scenario to describe in detail the impacts of a tsunami generated by a hypothetical, but realistic, M9 earthquake near the Alaska Peninsula. The overarching objective of SAFRR and its predecessor, the Multi-Hazards Demonstration Project, is to help communities reduce losses from natural disasters. As requested by emergency managers and other community partners, a primary approach has been comprehensive, scientifically credible scenarios that start with a model of a geologic event and extend through estimates of damage, casualties, and societal consequences. The first product was the ShakeOut scenario, addressing a hypothetical earthquake on the southern San Andreas fault, which spawned the successful Great California ShakeOut, an annual event and the nation's largest emergency preparedness exercise. That was followed by the ARkStorm scenario, which addresses California winter storms that surpass hurricanes in their destructive potential. The Tsunami Scenario's goals include developing advanced models of currents and inundation for the event; spurring research related to Alaskan earthquake sources; engaging port and harbor decision makers; understanding the economic impacts to the local, regional, and national economy in both the short and long term; understanding the ecological, environmental, and societal impacts of coastal inundation; and creating enhanced communication products for decision-making before, during, and after a tsunami event. The state of California, through CGS and Cal EMA, is using the Tsunami Scenario as an opportunity to evaluate policies regarding tsunami impact.
The scenario will serve as a long-lasting resource to teach preparedness and inform decision makers. The SAFRR Tsunami Scenario is organized by a coordinating committee with several working groups, including Earthquake Source, Paleotsunami/Geology Field Work, Tsunami Modeling, Engineering and Physical Impacts, Ecological Impacts, Emergency Management and Education, Social Vulnerability, Economic and Business Impacts, and Policy. In addition, the tsunami scenario process is being assessed and evaluated by researchers from the Natural Hazards Center at the University of Colorado at Boulder. The source event, defined by the USGS' Tsunami Source Working Group, is an earthquake similar to the 2011 Tohoku event, but set in the Semidi subduction sector, between Kodiak Island and the Shumagin Islands off the Pacific coast of the Alaska Peninsula. The Semidi sector is probably late in its earthquake cycle and comparisons of the geology and tectonic settings between Tohoku and the Semidi sector suggest that this location is appropriate. Tsunami modeling and inundation results have been generated for many areas along the California coast and elsewhere, including current velocity modeling for the ports of Los Angeles, Long Beach, and San Diego, and Ventura Harbor. Work on impacts to Alaska and Hawaii will follow. Note: Costas Synolakis (USC) is also an author of this abstract.
Coseismic deformation observed with radar interferometry: Great earthquakes and atmospheric noise
NASA Astrophysics Data System (ADS)
Scott, Chelsea Phipps
Spatially dense maps of coseismic deformation derived from Interferometric Synthetic Aperture Radar (InSAR) datasets result in valuable constraints on earthquake processes. The recent increase in the quantity of observations of coseismic deformation facilitates the examination of signals in many tectonic environments associated with earthquakes of varying magnitude. Efforts to place robust constraints on the evolution of the crustal stress field following great earthquakes often rely on knowledge of the earthquake location, the fault geometry, and the distribution of slip along the fault plane. Well-characterized uncertainties and biases strengthen the quality of inferred earthquake source parameters, particularly when the associated ground displacement signals are near the detection limit. Well-preserved geomorphic records of earthquakes offer additional insight into the mechanical behavior of the shallow crust and the kinematics of plate boundary systems. Together, geodetic and geologic observations of crustal deformation offer insight into the processes that drive seismic cycle deformation over a range of timescales. In this thesis, I examine several challenges associated with the inversion of earthquake source parameters from SAR data. Variations in atmospheric humidity, temperature, and pressure at the timing of SAR acquisitions result in spatially correlated phase delays that are challenging to distinguish from signals of real ground deformation. I characterize the impact of atmospheric noise on inferred earthquake source parameters following elevation-dependent atmospheric corrections. I analyze the spatial and temporal variations in the statistics of atmospheric noise from both reanalysis weather models and InSAR data itself. Using statistics that reflect the spatial heterogeneity of atmospheric characteristics, I examine parameter errors for several synthetic cases of fault slip on a basin-bounding normal fault. 
I show a decrease in uncertainty in fault geometry and kinematics following the application of atmospheric corrections to an event spanned by real InSAR data, the 1992 M5.6 Little Skull Mountain, Nevada, earthquake. Finally, I discuss how the derived workflow could be applied to other tectonic problems, such as solving for interseismic strain accumulation rates in a subduction zone environment. I also study the evolution of the crustal stress field in the South American plate following two recent great earthquakes along the Nazca-South America subduction zone. I show that the 2010 Mw 8.8 Maule, Chile, earthquake very likely triggered several moderate magnitude earthquakes in the Andean volcanic arc and backarc. This suggests that great earthquakes modulate the crustal stress field outside of the immediate aftershock zone and that far-field faults may pose a heightened hazard following large subduction earthquakes. The 2014 Mw 8.1 Pisagua, Chile, earthquake reopened ancient surface cracks that have been preserved in the hyperarid forearc setting of northern Chile for thousands of earthquake cycles. The orientation of cracks reopened in this event reflects the static and likely dynamic stresses generated by the recent earthquake. Coseismic cracks serve as a reliable marker of permanent earthquake deformation and plate boundary behavior persistent over the million-year timescale. This work on great earthquakes suggests that InSAR observations can play a crucial role in furthering our understanding of the crustal mechanics that drive seismic cycle processes in subduction zones.
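The elevation-dependent atmospheric correction mentioned in the thesis is, in its simplest first-order form, a least-squares fit of interferometric phase against topographic elevation, with the fitted trend subtracted from the scene. The sketch below illustrates only that first-order step, on hypothetical values; real workflows mask deforming pixels before fitting and use more sophisticated stratified-delay models:

```python
def remove_elevation_trend(phase, elevation):
    """Fit phase = k*z + c by ordinary least squares over the scene and
    subtract the fitted elevation-dependent trend. Returns the corrected
    phase values and the fitted slope k (phase per unit elevation)."""
    n = len(phase)
    mz = sum(elevation) / n
    mp = sum(phase) / n
    num = sum((z - mz) * (p - mp) for z, p in zip(elevation, phase))
    den = sum((z - mz) ** 2 for z in elevation)
    k = num / den
    c = mp - k * mz
    corrected = [p - (k * z + c) for p, z in zip(phase, elevation)]
    return corrected, k

# Hypothetical pixels: a pure elevation-correlated delay, no deformation
elev = [0.0, 500.0, 1000.0, 1500.0]
phase = [0.1 + 0.002 * z for z in elev]
corrected, k = remove_elevation_trend(phase, elev)
```

On this synthetic input the recovered slope equals the imposed 0.002 and the corrected phase is zero everywhere, which is the behavior one checks before applying the correction to real interferograms.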
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doser, D.I.
1993-04-01
Source parameters determined from body-waveform modeling of large (M ≥ 5.5) historic earthquakes occurring between 1915 and 1956 along the San Jacinto and Imperial fault zones of southern California and the Cerro Prieto, Tres Hermanas, and San Miguel fault zones of Baja California have been combined with information from post-1960s events to study regional variations in source parameters. The results suggest that large earthquakes along the relatively young San Miguel and Tres Hermanas fault zones have complex rupture histories, small source dimensions (< 25 km), high stress drops (60 bar average), and a high incidence of foreshock activity. This may be a reflection of the rough, highly segmented nature of the young faults. In contrast, Imperial-Cerro Prieto events of similar magnitude have low stress drops (16 bar average) and longer rupture lengths (42 km average), reflecting rupture along older, smoother fault planes. Events along the San Jacinto fault zone appear to lie in between these two groups. These results suggest a relationship between the structural and seismological properties of strike-slip faults that should be considered during seismic risk studies.
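The stress drops quoted above connect seismic moment to source dimension. For a circular crack the standard Eshelby relation is Δσ = 7 M0 / (16 R³); a small sketch with illustrative values (not the paper's data) shows why compact ruptures imply high stress drop:

```python
import math

def stress_drop_bars(m0_dyne_cm, radius_km):
    """Static stress drop for a circular crack (Eshelby):
    delta_sigma = 7*M0 / (16*R^3). M0 in dyne-cm, R in km, result in bars
    (1 bar = 1e6 dyne/cm^2)."""
    r_cm = radius_km * 1e5
    return 7.0 * m0_dyne_cm / (16.0 * r_cm ** 3) / 1e6

def moment_magnitude(m0_dyne_cm):
    """Mw from M0 in dyne-cm (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_dyne_cm) - 16.05)

# Illustrative event: M0 = 1e25 dyne-cm (Mw ~ 6) with a 5 km source radius
# gives a 35-bar stress drop; doubling the radius lowers it eightfold.
print(round(stress_drop_bars(1e25, 5.0), 1), round(moment_magnitude(1e25), 2))
```

The cubic dependence on source dimension is why the small-dimension San Miguel and Tres Hermanas ruptures carry high stress drops while the longer Imperial-Cerro Prieto ruptures of similar magnitude carry low ones.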
Pre-earthquake Magnetic Pulses
NASA Astrophysics Data System (ADS)
Scoville, J.; Heraud, J. A.; Freund, F. T.
2015-12-01
A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically, and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, suggesting that the pulses could be the result of geophysical semiconductor processes.
Observations and modeling of the elastogravity signals preceding direct seismic waves.
Vallée, Martin; Ampuero, Jean Paul; Juhel, Kévin; Bernard, Pascal; Montagner, Jean-Paul; Barsuglia, Matteo
2017-12-01
After an earthquake, the earliest deformation signals are not expected to be carried by the fastest (P) elastic waves but by the speed-of-light changes of the gravitational field. However, these perturbations are weak and, so far, their detection has not been accurate enough to fully understand their origins and to use them for a highly valuable rapid estimate of the earthquake magnitude. We show that gravity perturbations are particularly well observed with broadband seismometers at distances between 1000 and 2000 kilometers from the source of the 2011 moment magnitude 9.1 Tohoku earthquake. We can accurately model them with a new formalism, taking into account both the gravity changes and the gravity-induced motion. These prompt elastogravity signals open the window to minute-timescale magnitude determination for great earthquakes. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
DSOD Procedures for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Howard, J. K.; Fraser, W. A.
2005-12-01
DSOD, which has jurisdiction over more than 1200 dams in California, routinely evaluates their dynamic stability using seismic shaking input ranging from simple pseudostatic coefficients to spectrally matched earthquake time histories. Our seismic hazard assessments assume maximum earthquake scenarios for the nearest active and conditionally active seismic sources. Multiple earthquake scenarios may be evaluated depending on the sensitivity of the design analysis (e.g., to certain spectral amplitudes or to duration of shaking). Active sources are defined as those with evidence of movement within the last 35,000 years. Conditionally active sources are those with a reasonable expectation of activity, which are treated as active until demonstrated otherwise. The Division's Geology Branch develops seismic hazard estimates using spectral attenuation formulas applicable to California. The formulas were selected, in part, to achieve a site response model similar to that of the 2000 IBC for rock, soft rock, and stiff soil sites. The level of dynamic loading used in the stability analysis (50th, 67th, or 84th percentile ground shaking estimates) is determined using a matrix that considers the consequence of dam failure and the fault slip rate. We account for near-source directivity amplification along such faults by adjusting target response spectra and developing appropriate design earthquakes for analysis of structures sensitive to long-period motion. Based on in-house studies, the orientation of the dam analysis section relative to the fault-normal direction is considered for strike-slip earthquakes, but directivity amplification is assumed in any orientation for dip-slip earthquakes. We do not have probabilistic standards, but we evaluate the probability of our ground shaking estimates using hazard curves constructed from the USGS Interactive De-Aggregation website. Typically, return periods for our design loads exceed 1000 years. Excessive return periods may warrant a lower design load.
Minimum shaking levels are provided for sites far from active faulting. Our procedures and standards are presented at the DSOD website http://damsafety.water.ca.gov/. We review our methods and tools periodically under the guidance of our Consulting Board for Earthquake Analysis (and expect to make changes pending NGA completion), mindful that frequent procedural changes can interrupt design evaluations.