Sample records for earthquake rate model

  1. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted, from geologic and geodetic data, to accommodate high strain rates, but they have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data.
The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the
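The rate comparison described in this record can be sketched with a truncated Gutenberg-Richter rate function; the a- and b-values below are illustrative stand-ins, not the CDMG/USGS model parameters.

```python
# Minimal sketch: compare annual earthquake rates implied by two
# Gutenberg-Richter models over a magnitude band (illustrative values only).

def gr_cumulative_rate(a, b, m):
    """Annual rate of events with magnitude >= m: N(m) = 10**(a - b*m)."""
    return 10.0 ** (a - b * m)

def gr_band_rate(a, b, m_lo, m_hi):
    """Annual rate of events with m_lo <= M < m_hi."""
    return gr_cumulative_rate(a, b, m_lo) - gr_cumulative_rate(a, b, m_hi)

# Hypothetical fits: a historic-catalog model and a geologic source model
# chosen so that both predict about one M >= 6 event per year statewide.
catalog = {"a": 6.0, "b": 1.0}
source_model = {"a": 6.3, "b": 1.05}

n6_catalog = gr_cumulative_rate(catalog["a"], catalog["b"], 6.0)
n6_model = gr_cumulative_rate(source_model["a"], source_model["b"], 6.0)

# Even with matching M >= 6 totals, the M 6-7 band rates can differ,
# which is the kind of discrepancy the abstract describes.
band_catalog = gr_band_rate(catalog["a"], catalog["b"], 6.0, 7.0)
band_model = gr_band_rate(source_model["a"], source_model["b"], 6.0, 7.0)
discrepancy = band_model / band_catalog
```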

  2. Prospective Evaluation of the Global Earthquake Activity Rate Model (GEAR1) Earthquake Forecast: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schorlemmer, Danijel; Beutin, Thomas

    2017-04-01

    The Global Earthquake Activity Rate Model (GEAR1) is a hybrid seismicity model, constructed from a loglinear combination of smoothed seismicity from the Global Centroid Moment Tensor (CMT) earthquake catalog and geodetic strain rates (Global Strain Rate Map, version 2.1). For the 2005-2012 retrospective evaluation period, GEAR1 outperformed both parent forecasts: the strain rate forecast and the smoothed seismicity forecast. Since 1 October 2015, GEAR1 has been prospectively evaluated by the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center. Here, we present initial one-year test results for GEAR1, GSRM and GSRM2.1, as well as localized evaluation of GEAR1 performance. The models were evaluated on the consistency in number (N-test), spatial distribution (S-test) and magnitude distribution (M-test) of forecasted and observed earthquakes, as well as overall data consistency (CL- and L-tests). Performance at target earthquake locations was compared between models using the classical paired T-test and its non-parametric equivalent, the W-test, to determine whether one model could be rejected in favor of another at the 0.05 significance level. For the evaluation period from 1 October 2015 to 1 October 2016, the GEAR1, GSRM and GSRM2.1 forecasts pass all CSEP likelihood tests. Comparative test results show statistically significant improvement of GEAR1 performance over both strain rate-based forecasts, both of which can be rejected in favor of GEAR1. Using point process residual analysis, we investigate the spatial distribution of differences in GEAR1, GSRM and GSRM2.1 model performance to identify regions where the GEAR1 model should be adjusted, which could not be inferred from CSEP test results alone. Furthermore, we investigate whether the optimal combination of smoothed seismicity and strain rates remains stable over space and time.
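The log-linear combination behind a hybrid forecast and the CSEP number (N-) test can be sketched as follows; the cell rates, weight w, and renormalization convention are illustrative assumptions, not the published GEAR1 parameterization.

```python
import math

def loglinear_hybrid(seis, strain, w):
    """Cell-wise log-linear mix h_i ~ seis_i**w * strain_i**(1 - w),
    renormalized so the total forecast rate matches the seismicity parent."""
    raw = [s ** w * t ** (1.0 - w) for s, t in zip(seis, strain)]
    scale = sum(seis) / sum(raw)
    return [r * scale for r in raw]

def poisson_n_test(forecast_total, observed_n):
    """CSEP N-test quantiles: delta1 = P(X >= n), delta2 = P(X <= n)
    for X ~ Poisson(forecast_total); small values flag inconsistency."""
    def cdf(n, lam):
        return sum(math.exp(-lam) * lam ** k / math.factorial(k)
                   for k in range(n + 1))
    delta1 = 1.0 - (cdf(observed_n - 1, forecast_total) if observed_n > 0 else 0.0)
    delta2 = cdf(observed_n, forecast_total)
    return delta1, delta2

# Hypothetical per-cell annual rates from the two parent forecasts.
smoothed_seismicity = [0.4, 0.1, 0.05, 0.45]
strain_rate = [0.2, 0.3, 0.1, 0.4]
hybrid = loglinear_hybrid(smoothed_seismicity, strain_rate, w=0.6)
delta1, delta2 = poisson_n_test(sum(hybrid), observed_n=1)
```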

  3. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009, mainly in central Oklahoma, and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate and aftershock triggering. We examine changes in the model parameters over time, focusing particularly on the background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than by other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma for 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
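The temporal core of the ETAS model used here can be sketched as a conditional intensity function; the toy catalog and parameter values below are illustrative, not the values fit to Oklahoma seismicity.

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, m_ref):
    """Temporal ETAS conditional intensity:
    lambda(t) = mu + sum over past events of
                K * exp(alpha * (m_i - m_ref)) / (t - t_i + c)**p
    where mu is the background rate and the sum is aftershock triggering."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return rate

# Hypothetical catalog of (time in days, magnitude) pairs.
catalog = [(0.0, 4.5), (1.0, 3.0), (2.5, 3.2)]
rate_now = etas_intensity(3.0, catalog,
                          mu=0.1, K=0.02, alpha=1.0, c=0.01, p=1.1, m_ref=3.0)
```

Before any events have occurred, the intensity reduces to the background rate mu; fitting separates this externally driven component from earthquake-to-earthquake triggering.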

  4. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimize the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
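Adaptive smoothing of the kind described, where each event's kernel bandwidth is set by the distance to its n-th nearest neighbor with a floor for sparse regions, can be sketched as follows; the neighbor order, floor distance, and Gaussian kernel are illustrative assumptions, not the exact choices of this study.

```python
import math

def adaptive_bandwidths(points, n_neighbor=2, d_min=5.0):
    """Per-event smoothing distance = distance (km) to the n-th nearest
    neighbor, floored at d_min to stabilize regions of low seismicity."""
    bw = []
    for i, (xi, yi) in enumerate(points):
        dists = sorted(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(points) if j != i)
        bw.append(max(dists[n_neighbor - 1], d_min))
    return bw

def smoothed_rate(x, y, points, bandwidths):
    """Rate density at (x, y): sum of 2-D Gaussian kernels, one per event,
    each with its own adaptive bandwidth."""
    total = 0.0
    for (xi, yi), s in zip(points, bandwidths):
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        total += math.exp(-r2 / (2.0 * s * s)) / (2.0 * math.pi * s * s)
    return total

# Hypothetical epicenters (km): a tight cluster plus one isolated event.
epicenters = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (50.0, 50.0)]
bw = adaptive_bandwidths(epicenters)
density = smoothed_rate(0.5, 0.5, epicenters, bw)
```

Events inside the cluster get narrow (floored) kernels, concentrating forecast rate there, while the isolated event is smeared over a much larger bandwidth.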

  5. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underscore the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
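The maximum-likelihood activity-rate estimation mentioned here is commonly done with the Aki/Utsu b-value estimator; a minimal sketch with a hypothetical catalog and the standard correction for magnitude binning (the completeness treatment in this model is more elaborate, accounting for spatial and temporal completeness history):

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for a catalog complete above m_c:
    b = log10(e) / (mean(M) - (m_c - dm/2)), with dm the binning width."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

def a_value(mags, m_c, b, years):
    """Annual a-value chosen so that N(m_c) = 10**(a - b*m_c) matches
    the observed count of events at or above m_c."""
    n = sum(1 for x in mags if x >= m_c)
    return math.log10(n / years) + b * m_c

# Hypothetical 10-year catalog, complete above M 2.0.
mags = [2.0, 2.1, 2.3, 2.0, 2.6, 3.1, 2.2, 2.4, 2.8, 2.0]
b = b_value_mle(mags, m_c=2.0)
a = a_value(mags, m_c=2.0, b=b, years=10.0)
```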

  6. Modeling earthquake rate changes in Oklahoma and Arkansas: possible signatures of induced seismicity

    USGS Publications Warehouse

    Llenos, Andrea L.; Michael, Andrew J.

    2013-01-01

    The rate of ML≥3 earthquakes in the central and eastern United States increased beginning in 2009, particularly in Oklahoma and central Arkansas, where fluid injection has occurred. We find evidence that suggests these rate increases are man‐made by examining the rate changes in a catalog of ML≥3 earthquakes in Oklahoma, which had a low background seismicity rate before 2009, as well as rate changes in a catalog of ML≥2.2 earthquakes in central Arkansas, which had a history of earthquake swarms prior to the start of injection in 2009. In both cases, stochastic epidemic‐type aftershock sequence models and statistical tests demonstrate that the earthquake rate change is statistically significant, and both the background rate of independent earthquakes and the aftershock productivity must increase in 2009 to explain the observed increase in seismicity. This suggests that a significant change in the underlying triggering process occurred. Both parameters vary, even when comparing natural to potentially induced swarms in Arkansas, which suggests that changes in both the background rate and the aftershock productivity may provide a way to distinguish man‐made from natural earthquake rate changes. In Arkansas we also compare earthquake and injection well locations, finding that earthquakes within 6 km of an active injection well tend to occur closer together than those that occur before, after, or far from active injection. Thus, like a change in productivity, a change in interevent distance distribution may also be an indicator of induced seismicity.

  7. Earthquake models using rate and state friction and fast multipoles

    NASA Astrophysics Data System (ADS)

    Tullis, T.

    2003-04-01

    The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws, which have both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on the time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s, as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential.
Element dimensions need to be on the order of meters, both to approximate continuum behavior
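The rate and state friction laws described in this record can be sketched directly; the parameter values below are in laboratory-typical ranges but are otherwise illustrative.

```python
import math

def friction(v, theta, mu0=0.6, a=0.012, b=0.016, v0=1e-6, dc=1e-5):
    """Rate-and-state friction coefficient (Dieterich aging-law form):
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc),
    with the direct (viscous) effect in a and the state effect in b."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def theta_dot(v, theta, dc=1e-5):
    """Aging-law state evolution: d(theta)/dt = 1 - v*theta/dc.
    theta grows during stationary contact and decays over slip distance dc."""
    return 1.0 - v * theta / dc

# At steady state theta_ss = dc/v, so friction reduces to
# mu0 + (a - b)*ln(v/v0): velocity-weakening (unstable) when a < b.
v = 1e-5          # slip velocity, m/s
theta_ss = 1e-5 / v
mu_ss = friction(v, theta_ss)
```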

  8. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks, which are generally omitted from hazard assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates makes them difficult to forecast even on time scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations, and changes in rate can be detected by applying change-point analysis in ETAS-transformed time with methods already developed for Poisson processes.

  9. Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2

    USGS Publications Warehouse

    Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.

    2008-01-01

    This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2 hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed to be confined to one or more identifiable segments. The second section discusses an unsegmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.

  10. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
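A Gutenberg-Richter distribution with a corner magnitude can be sketched with a Kagan-style taper in seismic moment; the threshold, b-value, and corner magnitude below are illustrative, and the exact functional form used in this study may differ.

```python
import math

def moment_from_mag(m):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori):
    log10(M0) = 1.5*m + 9.05."""
    return 10.0 ** (1.5 * m + 9.05)

def tapered_gr_survival(m, m_t, beta, m_corner):
    """Fraction of events with magnitude >= m under a tapered
    Gutenberg-Richter law: a power law in moment (index beta ~ 2/3 * b)
    multiplied by an exponential taper above the corner magnitude."""
    M, Mt, Mc = (moment_from_mag(x) for x in (m, m_t, m_corner))
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

# Hypothetical: threshold magnitude 5.4 (as in this record),
# b = 1 (beta = 2/3), corner magnitude 8.0.
frac_m7 = tapered_gr_survival(7.0, 5.4, beta=2.0 / 3.0, m_corner=8.0)
```

The taper leaves small-magnitude rates essentially Gutenberg-Richter while suppressing rates strongly above the corner magnitude.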

  11. Hydromechanical Earthquake Nucleation Model Forecasts Onset, Peak, and Falling Rates of Induced Seismicity in Oklahoma and Kansas

    NASA Astrophysics Data System (ADS)

    Norbeck, J. H.; Rubinstein, J. L.

    2018-04-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. We develop a reservoir model to calculate the hydrologic conditions associated with the activity of 902 saltwater disposal wells injecting into the Arbuckle aquifer. Estimates of basement fault stressing conditions inform a rate-and-state friction earthquake nucleation model to forecast the seismic response to injection. Our model replicates many salient features of the induced earthquake sequence, including the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. We present evidence for variable time lags between changes in injection and seismicity rates, consistent with the prediction from rate-and-state theory that seismicity rate transients occur over timescales inversely proportional to stressing rate. Given the efficacy of the hydromechanical model, as confirmed through a likelihood statistical test, the results of this study support broader integration of earthquake physics within seismic hazard analysis.
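The rate-and-state prediction cited here, that seismicity-rate transients occur over timescales inversely proportional to stressing rate, follows from the characteristic time t_a = a*sigma / tau_dot (Dieterich, 1994); a minimal sketch with hypothetical parameter values:

```python
def rate_state_timescale(a, sigma, stressing_rate):
    """Characteristic time t_a = a*sigma / tau_dot over which the
    seismicity rate adjusts to a change in stressing rate (Dieterich, 1994).
    Units: sigma in Pa, stressing_rate in Pa/yr -> t_a in years."""
    return a * sigma / stressing_rate

# Hypothetical values: a = 0.003, effective normal stress 10 MPa.
slow = rate_state_timescale(0.003, 10e6, 1e3)  # 1 kPa/yr tectonic loading
fast = rate_state_timescale(0.003, 10e6, 1e5)  # 100 kPa/yr during injection
# A 100x higher stressing rate gives a 100x shorter response time lag,
# consistent with the variable lags described in this record.
```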

  12. Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models

    NASA Astrophysics Data System (ADS)

    Biemiller, J.; Lavier, L. L.; Wallace, L.

    2016-12-01

    Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.

  13. Accounting for orphaned aftershocks in the earthquake background rate

    USGS Publications Warehouse

    Van Der Elst, Nicholas

    2017-01-01

    Aftershocks often occur within cascades of triggered seismicity in which each generation of aftershocks triggers an additional generation, and so on. The rate of earthquakes in any particular generation follows Omori's law, going approximately as 1/t. This function decays rapidly, but is heavy-tailed, and aftershock sequences may persist for long times at a rate that is difficult to discriminate from background. It is likely that some apparently spontaneous earthquakes in the observational catalogue are orphaned aftershocks of long-past main shocks. To assess the relative proportion of orphaned aftershocks in the apparent background rate, I develop an extension of the ETAS model that explicitly includes the expected contribution of orphaned aftershocks to the apparent background rate. Applying this model to California, I find that the apparent background rate can be almost entirely attributed to orphaned aftershocks, depending on the assumed duration of an aftershock sequence. This implies an earthquake cascade with a branching ratio (the average number of directly triggered aftershocks per main shock) of nearly unity. In physical terms, this implies that very few earthquakes are completely isolated from the perturbing effects of other earthquakes within the fault system. Accounting for orphaned aftershocks in the ETAS model gives more accurate estimates of the true background rate, and more realistic expectations for long-term seismicity patterns.
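Two quantities from this analysis can be sketched directly: the true background rate implied by a branching ratio n, and the heavy Omori tail that produces orphaned aftershocks; the rate, n, c, and p values below are illustrative, not the California estimates.

```python
def true_background_rate(total_rate, branching_ratio):
    """Stationary ETAS cascade: total rate R = mu / (1 - n), where n is the
    branching ratio (mean direct aftershocks per event), so mu = R * (1 - n)."""
    return total_rate * (1.0 - branching_ratio)

def omori_tail_fraction(t_elapsed, c, p):
    """Fraction of a mainshock's direct Omori-law aftershocks
    (kernel proportional to (t + c)**-p, p > 1) that occur after t_elapsed."""
    return ((t_elapsed + c) / c) ** (1.0 - p)

# Hypothetical: 100 events/yr observed, branching ratio near unity (n = 0.95);
# only ~5 events/yr would be truly spontaneous.
mu = true_background_rate(100.0, 0.95)

# With c = 0.01 days and p = 1.1, a substantial share of aftershocks
# arrives more than a year after the mainshock -- easily orphaned if the
# mainshock predates the catalog.
late = omori_tail_fraction(365.0, c=0.01, p=1.1)
```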

  14. Accounting for orphaned aftershocks in the earthquake background rate

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.

    2017-11-01

    Aftershocks often occur within cascades of triggered seismicity in which each generation of aftershocks triggers an additional generation, and so on. The rate of earthquakes in any particular generation follows Omori's law, going approximately as 1/t. This function decays rapidly, but is heavy-tailed, and aftershock sequences may persist for long times at a rate that is difficult to discriminate from background. It is likely that some apparently spontaneous earthquakes in the observational catalogue are orphaned aftershocks of long-past main shocks. To assess the relative proportion of orphaned aftershocks in the apparent background rate, I develop an extension of the ETAS model that explicitly includes the expected contribution of orphaned aftershocks to the apparent background rate. Applying this model to California, I find that the apparent background rate can be almost entirely attributed to orphaned aftershocks, depending on the assumed duration of an aftershock sequence. This implies an earthquake cascade with a branching ratio (the average number of directly triggered aftershocks per main shock) of nearly unity. In physical terms, this implies that very few earthquakes are completely isolated from the perturbing effects of other earthquakes within the fault system. Accounting for orphaned aftershocks in the ETAS model gives more accurate estimates of the true background rate, and more realistic expectations for long-term seismicity patterns.

  15. Foreshock occurrence rates before large earthquakes worldwide

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured, using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to those measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events a large majority, composed of events located in shallow subduction zones, registered a high foreshock rate, while a minority, located in continental thrust belts, showed a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.

  16. A Comparison of Geodetic and Geologic Rates Prior to Large Strike-Slip Earthquakes: A Diversity of Earthquake-Cycle Behaviors?

    NASA Astrophysics Data System (ADS)

    Dolan, James F.; Meade, Brendan J.

    2017-12-01

    Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 Mw = 7.6 Izmit rupture are ~40% faster than Holocene geologic rates. In contrast, geodetic rates of ~6-8 mm/yr along the Denali fault prior to the 2002 Mw = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ~12 mm/yr. In the third example, where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 Mw = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ~11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.

  17. Earthquake Rate Model 2 of the 2007 Working Group for California Earthquake Probabilities, Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2008-01-01

    The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M≥6.5 portion of the data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km²) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power-law relation, which fits the newly expanded Hanks and Bakun (2007) data best and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from those used in Working Group (2003).
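The two magnitude-area relations given equal weight here can be sketched as follows; the coefficients are as commonly quoted for the Ellsworth-B and Hanks and Bakun relations, but treat the exact constants as assumptions to verify against the original publications.

```python
import math

def ellsworth_b(area_km2):
    """Ellsworth-B (WGCEP, 2003): Mw = 4.2 + log10(A), A in km^2
    (coefficients as commonly quoted; treat as an assumption)."""
    return 4.2 + math.log10(area_km2)

def hanks_bakun(area_km2):
    """Hanks & Bakun bilinear relation with a slope change at A = 537 km^2,
    continuous across the break (coefficients as commonly quoted)."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

def mean_magnitude(area_km2):
    """Equal weighting of the two relations, as described in this record."""
    return 0.5 * (ellsworth_b(area_km2) + hanks_bakun(area_km2))
```

The 4/3 slope above the break reflects the transition from area- to length-scaling of slip that Hanks and Bakun describe.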

  18. Is earthquake rate in south Iceland modified by seasonal loading?

    NASA Astrophysics Data System (ADS)

    Jonsson, S.; Aoki, Y.; Drouin, V.

    2017-12-01

    Several temporally varying processes have the potential to modify the rate of earthquakes in the south Iceland seismic zone, one of the two most active seismic zones in Iceland. These include solid earth tides, seasonal meteorological effects and the influence of passing weather systems, and variations in snow and glacier loads. In this study we investigate the influence these processes may have on crustal stresses and stressing rates in the seismic zone and assess whether they appear to influence the earthquake rate. While historical earthquakes in south Iceland have preferentially occurred in early summer, this tendency is less clear for small earthquakes. The local earthquake catalogue (going back to 1991, magnitude of completeness < 1.0) does indeed contain more earthquakes in summer than in winter. However, this pattern is strongly influenced by aftershock sequences of the largest M6+ earthquakes, which occurred in June 2000 and May 2008. Standard Reasenberg earthquake declustering and more involved model-independent stochastic declustering algorithms are not capable of fully eliminating the aftershocks from the catalogue. We therefore inspected the catalogue for the time period before 2000, and it shows limited seasonal tendency in earthquake occurrence. Our preliminary results show no clear correlation between earthquake rates and short-term stressing variations induced by solid earth tides or passing storms. Seasonal meteorological effects also appear to be too small to influence the earthquake activity. Snow and glacier load variations induce significant vertical motions in the area, with peak loading occurring in spring (April-May) and maximum unloading in fall (Sept.-Oct.). Early summer occurrence of historical earthquakes therefore correlates with early unloading rather than with the peak unloading or unloading rate, which appears to indicate limited influence of this seasonal process on the earthquake activity.

  19. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
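    In its simplest form, the bin-wise likelihood scoring described above reduces to a joint Poisson log-likelihood over the forecast bins. A minimal sketch (not the full RELM machinery of L-, N-, and R-tests):

    ```python
    import math

    def poisson_log_likelihood(forecast_rates, observed_counts):
        """Joint log-likelihood of observed earthquake counts n_i in each
        space-magnitude bin, given forecast expected rates lambda_i and
        assuming independent Poisson bins:
            log L = sum_i [ n_i * log(lambda_i) - lambda_i - log(n_i!) ]
        Higher log L means the forecast is more consistent with the data.
        """
        total = 0.0
        for lam, n in zip(forecast_rates, observed_counts):
            total += n * math.log(lam) - lam - math.lgamma(n + 1)
        return total
    ```

    Competing forecasts over the same bins and the same observation period can then be ranked by this score, which is the basis of the pairwise comparisons the abstract describes.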

  20. Global observation of Omori-law decay in the rate of triggered earthquakes

    NASA Astrophysics Data System (ADS)

    Parsons, T.

    2001-12-01

    Triggered earthquakes can be large, damaging, and lethal, as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
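    The Omori-law decay invoked above has a standard modified form, r(t) = K / (t + c)^p; a sketch with illustrative (not fitted) parameters:

    ```python
    def omori_rate(t, K, c, p):
        """Modified Omori law: rate of triggered events at time t after a
        main shock, r(t) = K / (t + c)**p. K, c, p are fit to data."""
        return K / (t + c) ** p

    def expected_count(t1, t2, K, c, p, steps=10000):
        """Expected number of triggered events between times t1 and t2,
        by trapezoidal integration of the rate."""
        dt = (t2 - t1) / steps
        total = 0.0
        for i in range(steps + 1):
            w = 0.5 if i in (0, steps) else 1.0
            total += w * omori_rate(t1 + i * dt, K, c, p)
        return total * dt
    ```

    For p = 1 the integral is logarithmic, which is why the duration of a triggered sequence (here ~7-11 years) must be identified empirically as the time when the decaying rate merges back into the background rate.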

  1. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    USGS Publications Warehouse

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, calling into question once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yields a substantial change in b-values.
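    Forecasting an M6 rate from microseismicity b-values, as described above, amounts to extrapolating a Gutenberg-Richter distribution fit at small magnitudes up to M6. A minimal sketch; the a- and b-values in the test are hypothetical, not Parkfield values:

    ```python
    def gr_rate(mag, a_value, b_value):
        """Gutenberg-Richter relation: rate of events with magnitude >= mag,
        N(M) = 10 ** (a - b * M), per unit time implied by the a-value."""
        return 10.0 ** (a_value - b_value * mag)
    ```

    For example, a = 5 and b = 1 (per year) imply 0.1 events/yr at M ≥ 6, i.e. a 10-year mean recurrence; spatial variations in b map directly into spatial variations of the extrapolated M6 rate.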

  2. Characterizing potentially induced earthquake rate changes in the Brawley Seismic Zone, southern California

    USGS Publications Warehouse

    Llenos, Andrea L.; Michael, Andrew J.

    2016-01-01

    The Brawley seismic zone (BSZ), in the Salton trough of southern California, has a history of earthquake swarms and geothermal energy exploitation. Some earthquake rate changes may have been induced by fluid extraction and injection activity at local geothermal fields, particularly at the North Brawley Geothermal Field (NBGF) and at the Salton Sea Geothermal Field (SSGF). We explore this issue by examining earthquake rate changes and interevent distance distributions in these fields. In Oklahoma and Arkansas, where considerable wastewater injection occurs, increases in background seismicity rate and aftershock productivity and decreases in interevent distance were indicative of fluid‐injection‐induced seismicity. Here, we test if similar changes occur that may be associated with fluid injection and extraction in geothermal areas. We use stochastic epidemic‐type aftershock sequence models to detect changes in the underlying seismogenic processes, shown by statistically significant changes in the model parameters. The most robust model changes in the SSGF roughly occur when large changes in net fluid production occur, but a similar correlation is not seen in the NBGF. Also, although both background seismicity rate and aftershock productivity increased for fluid‐injection‐induced earthquake rate changes in Oklahoma and Arkansas, the background rate increases significantly in the BSZ only, roughly corresponding with net fluid production rate increases. Moreover, in both fields the interevent spacing does not change significantly during active energy projects. This suggests that, although geothermal field activities in a tectonically active region may not significantly change the physics of earthquake interactions, earthquake rates may still be driven by fluid injection or extraction rates, particularly in the SSGF.

  3. The use of earthquake rate changes as a stress meter at Kilauea volcano.

    PubMed

    Dieterich, J; Cayol, V; Okubo, P

    2000-11-23

    Stress changes in the Earth's crust are generally estimated from model calculations that use near-surface deformation as an observational constraint. But the widespread correlation of changes of earthquake activity with stress has led to suggestions that stress changes might be calculated from earthquake occurrence rates obtained from seismicity catalogues. Although this possibility has considerable appeal, because seismicity data are routinely collected and have good spatial and temporal resolution, the method has not yet proven successful, owing to the non-linearity of earthquake rate changes with respect to both stress and time. Here, however, we present two methods for inverting earthquake rate data to infer stress changes, using a formulation for the stress- and time-dependence of earthquake rates. Application of these methods at Kilauea volcano, in Hawaii, yields good agreement with independent estimates, indicating that earthquake rates can provide a practical remote-sensing stress meter.
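    The forward relation underlying such inversions can be sketched in its simplest, instantaneous limit: in Dieterich's rate-state seismicity formulation a sudden stress step Δτ changes the earthquake rate from r to R with R/r = exp(Δτ / Aσ). The full methods in the paper account for the time evolution of the rate; this sketch deliberately ignores that, so treat it as a cartoon of the idea only:

    ```python
    import math

    def stress_change_from_rate(rate_after, rate_before, a_sigma):
        """Invert an observed earthquake-rate change for a stress step,
        using the instantaneous limit of the rate-state formulation:
            R / r = exp(d_tau / (A * sigma))  =>  d_tau = A*sigma * ln(R/r)
        a_sigma is the product of the constitutive parameter A and the
        normal stress, in the same units as the returned stress change.
        """
        return a_sigma * math.log(rate_after / rate_before)
    ```

    For example, a doubling of the earthquake rate with an assumed Aσ of 0.01 MPa implies a stress increase of about 0.007 MPa.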

  4. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
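    The BPT distribution is the inverse Gaussian distribution reparameterized by μ and α; its density, CDF, and hazard function can be sketched directly. With α = 0.5 the hazard exceeds the mean rate 1/μ soon after μ/2 and levels off near 2/μ, matching the abstract:

    ```python
    import math

    def _phi(x):
        """Standard normal CDF."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bpt_pdf(t, mu, alpha):
        """BPT (inverse Gaussian) density with mean mu and aperiodicity
        alpha; the inverse Gaussian shape parameter is lam = mu/alpha**2."""
        lam = mu / alpha ** 2
        return math.sqrt(lam / (2.0 * math.pi * t ** 3)) * math.exp(
            -lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))

    def bpt_cdf(t, mu, alpha):
        """BPT cumulative distribution (standard inverse Gaussian form)."""
        lam = mu / alpha ** 2
        a = math.sqrt(lam / t)
        return (_phi(a * (t / mu - 1.0))
                + math.exp(2.0 * lam / mu) * _phi(-a * (t / mu + 1.0)))

    def bpt_hazard(t, mu, alpha):
        """Instantaneous failure rate of survivors, f(t) / (1 - F(t))."""
        return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))
    ```

    The long-time hazard asymptote is 1/(2 μ α²), which equals 2/μ for the generic α = 0.5.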

  5. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.

  6. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
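    The smoothed-seismicity construction referred to above can be sketched minimally: each past epicenter is spread over a grid with a Gaussian kernel of the chosen smoothing distance, then normalized to preserve the observed event count. This uses flat-earth coordinates for brevity; the published models use geographic bins, great-circle distances, and optimized smoothing lengths:

    ```python
    import math

    def smoothed_rate(grid_points, epicenters, smoothing_km):
        """Smoothed-seismicity forecast: distribute each past epicenter
        over the grid with an isotropic 2-D Gaussian kernel, normalized so
        the forecast preserves the total observed event count.
        grid_points and epicenters are (x, y) pairs in km."""
        weights = [0.0] * len(grid_points)
        for ex, ey in epicenters:
            for i, (gx, gy) in enumerate(grid_points):
                d2 = (gx - ex) ** 2 + (gy - ey) ** 2
                weights[i] += math.exp(-d2 / (2.0 * smoothing_km ** 2))
        total = sum(weights)
        return [w * len(epicenters) / total for w in weights]
    ```

    The smoothing distance (10-20 km in the study above) controls how sharply the forecast concentrates around past activity, which is exactly the parameter the likelihood tests optimize.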

  7. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as total killed divided by total population exposed at specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rates for that level and then summing them at all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or a region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
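    The two-parameter lognormal fatality-rate form described above can be sketched directly. The θ and β values in the example are hypothetical placeholders; the actual fitted parameters are country-specific:

    ```python
    import math

    def fatality_rate(intensity, theta, beta):
        """Lognormal fatality-rate model: nu(S) = Phi(ln(S/theta)/beta),
        where Phi is the standard normal CDF, S is shaking intensity, and
        theta, beta are the country-specific parameters fit by hindcasting
        past shaking-related deaths."""
        z = math.log(intensity / theta) / beta
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def estimated_fatalities(exposure_by_intensity, theta, beta):
        """Total deaths: sum over intensity levels of (exposed population
        at level S) * nu(S). exposure_by_intensity maps S -> population."""
        return sum(pop * fatality_rate(s, theta, beta)
                   for s, pop in exposure_by_intensity.items())
    ```

    By construction ν(θ) = 0.5, so θ is the intensity at which half the exposed population would be killed and β controls how quickly the rate rises with intensity.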

  8. Detection of change points in underlying earthquake rates, with application to global mega-earthquakes

    NASA Astrophysics Data System (ADS)

    Touati, Sarah; Naylor, Mark; Main, Ian

    2016-02-01

    The recent spate of mega-earthquakes since 2004 has led to speculation of an underlying change in the global `background' rate of large events. At a regional scale, detecting changes in background rate is also an important practical problem for operational forecasting and risk calculation, for example due to volcanic processes, seismicity induced by fluid injection or withdrawal, or redistribution of Coulomb stress after natural large events. Here we examine the general problem of detecting changes in background rate in earthquake catalogues with and without correlated events, for the first time using the Bayes factor as a discriminant for models of varying complexity. First we use synthetic Poisson (purely random) and Epidemic-Type Aftershock Sequence (ETAS) models (which also allow for earthquake triggering) to test the effectiveness of many standard methods of addressing this question. These fall into two classes: those that evaluate the relative likelihood of different models, for example using Information Criteria or the Bayes factor; and those that evaluate the probability of the observations (including extreme events or clusters of events) under a single null hypothesis, for example by applying the Kolmogorov-Smirnov and `runs' tests, and a variety of Z-score tests. The results demonstrate that the effectiveness of these tests varies widely. Information Criteria worked at least as well as the more computationally expensive Bayes factor method, and the Kolmogorov-Smirnov and runs tests proved to be relatively ineffective in reliably detecting a change point. We then apply the methods tested to events at different thresholds above magnitude M ≥ 7 in the global earthquake catalogue since 1918, after first declustering the catalogue. This is most effectively done by removing likely correlated events using a much lower magnitude threshold (M ≥ 5), where triggering is much more obvious. We find no strong evidence that the background rate of large
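    The Information Criteria approach mentioned above can be sketched as a change-point comparison of one-rate versus two-rate Poisson models via BIC. This sketch makes the strong simplification of reducing the data to two window counts; the study works with full event-time likelihoods on declustered catalogues:

    ```python
    import math

    def poisson_loglik(n_events, duration, rate):
        """Log-likelihood of n_events in a window of the given duration
        under a homogeneous Poisson process with the given rate."""
        lam = rate * duration
        return n_events * math.log(lam) - lam - math.lgamma(n_events + 1)

    def change_point_bic(counts_before, t_before, counts_after, t_after):
        """Compare a one-rate model against a two-rate (change point)
        model with the Bayesian Information Criterion; counts must be
        positive. Returns (bic_one, bic_two); lower BIC is preferred."""
        n_total = counts_before + counts_after
        pooled_rate = n_total / (t_before + t_after)
        # One rate across both windows (1 free parameter).
        ll_one = (poisson_loglik(counts_before, t_before, pooled_rate)
                  + poisson_loglik(counts_after, t_after, pooled_rate))
        # Separate maximum-likelihood rates per window (2 free parameters).
        ll_two = (poisson_loglik(counts_before, t_before, counts_before / t_before)
                  + poisson_loglik(counts_after, t_after, counts_after / t_after))
        bic_one = -2.0 * ll_one + 1.0 * math.log(n_total)
        bic_two = -2.0 * ll_two + 2.0 * math.log(n_total)
        return bic_one, bic_two
    ```

    The extra parameter of the two-rate model is penalized by the BIC term, so the change-point model wins only when the rate difference is large enough to justify the added complexity.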

  9. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10¹⁹ Pa s and ηM > 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
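    The two-sample Kolmogorov-Smirnov statistic at the core of the test above is simply the maximum vertical distance between two empirical CDFs; a sketch (for significance thresholds one would use a library routine such as scipy.stats.ks_2samp rather than this bare statistic):

    ```python
    def ks_statistic(sample_a, sample_b):
        """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
        difference between the empirical CDFs of the two samples."""
        a = sorted(sample_a)
        b = sorted(sample_b)
        d = 0.0
        for v in sorted(set(a + b)):
            fa = sum(1 for x in a if x <= v) / len(a)
            fb = sum(1 for x in b if x <= v) / len(b)
            d = max(d, abs(fa - fb))
        return d
    ```

    In the study's framing, one sample is the observed distribution of geodetic-to-geologic rate ratios across the 15 faults and the other is the distribution a candidate viscoelastic model predicts; a large statistic rejects that model.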

  10. Large earthquake rates from geologic, geodetic, and seismological perspectives

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2017-12-01

    Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include temporal behavior of seismic and tectonic moment rates; shape of the earthquake magnitude distribution; upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; value of crustal rigidity; and relation between faults at depth and their surface fault traces, to name just a few. In this report I estimate the quantitative implications of these assumptions for large earthquake rates.
Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes.

  11. Statistical tests of simple earthquake cycle models

    USGS Publications Warehouse

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.0 × 10¹⁹ Pa s and ηM ≳ 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.

  12. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious natural disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performances of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model that combines catalog-based and non-catalog-based approaches.
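    The conditional intensity described above can be sketched as a Hawkes-type rate with an Omori self-excitation kernel and an exponential kernel for external (non-seismic) signals. The kernel shapes and all parameter values here are illustrative placeholders, not those of the Ogata-Utsu model:

    ```python
    import math

    def conditional_intensity(t, background, quake_times, ext_times,
                              k_self=0.1, c=0.01, p=1.1, k_ext=0.05, tau=30.0):
        """Rate of a combined self- and mutually exciting point process:
            lambda(t) = mu
                      + sum over past quakes t_i of  k_self / (t - t_i + c)**p
                      + sum over past external signals s_j of
                            k_ext * exp(-(t - s_j) / tau)
        Only events strictly before t contribute."""
        lam = background
        for ti in quake_times:
            if ti < t:
                lam += k_self / (t - ti + c) ** p
        for sj in ext_times:
            if sj < t:
                lam += k_ext * math.exp(-(t - sj) / tau)
        return lam
    ```

    With no past events the rate is just the background μ; each past earthquake or external observation raises λ(t) transiently, which is exactly how the model integrates catalog and non-catalog information.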

  13. Strain rates, stress markers and earthquake clustering (Invited)

    NASA Astrophysics Data System (ADS)

    Fry, B.; Gerstenberger, M.; Abercrombie, R. E.; Reyners, M.; Eberhart-Phillips, D. M.

    2013-12-01

    The 2010-present Canterbury earthquakes comprise a well-recorded sequence in a relatively low-strain-rate shallow crustal region. We present new scientific results to test the hypothesis that earthquake sequences in low-strain-rate areas experience high stress drop events, low post-seismic relaxation, and accentuated seismic clustering. This hypothesis is based on a physical description of the aftershock process in which the spatial distribution of stress accumulation and stress transfer is controlled by fault strength and orientation. Following large crustal earthquakes, time-dependent forecasts are often developed by fitting parameters defined by Omori's aftershock decay law. In high-strain-rate areas, simple forecast models utilizing a single p-value fit observed aftershock sequences well. In low-strain-rate areas such as Canterbury, assumptions of simple Omori decay may not be sufficient to capture the clustering (sub-sequence) nature exhibited by the punctuated rise in activity following significant child events. In Canterbury, the moment release is more clustered than in more typical Omori sequences. The individual earthquakes in these clusters also exhibit somewhat higher stress drops than the average crustal sequence in high-strain-rate regions, suggesting the earthquakes occur on strong Andersonian-oriented faults, possibly juvenile or well-healed. We use the spectral ratio procedure outlined by Viegas et al. (2010) to determine corner frequencies and Madariaga stress-drop values for over 800 events in the sequence. Furthermore, we discuss the relevance of the tomographic results of Reyners and Eberhart-Phillips (2013) documenting post-seismic stress-driven fluid processes following the three largest events in the sequence, as well as anisotropic patterns in surface wave tomography (Fry et al., 2013). These tomographic studies are both compatible with the hypothesis, providing strong evidence for the presence of widespread and hydrated regional
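    A Madariaga stress drop of the kind computed for the ~800 events above follows from the seismic moment and the corner frequency via a circular-source model. The shear-wave speed default and the S-wave constant k = 0.21 are conventional assumed values, not parameters from the study:

    ```python
    def madariaga_stress_drop(m0_nm, fc_hz, beta_m_s=3500.0, k=0.21):
        """Stress drop (Pa) from seismic moment M0 (N m) and corner
        frequency fc (Hz) via the Madariaga (1976) circular source:
            r = k * beta / fc            (source radius, m)
            d_sigma = 7 * M0 / (16 * r**3)
        k = 0.21 is the Madariaga constant for S waves; beta is an
        assumed crustal shear-wave speed."""
        r = k * beta_m_s / fc_hz
        return 7.0 * m0_nm / (16.0 * r ** 3)
    ```

    Because r scales as 1/fc, the stress drop scales as fc³ at fixed moment, which is why careful corner-frequency estimation (e.g., via spectral ratios) matters so much for comparing stress drops between regions.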

  14. The Nazca-South American convergence rate and the recurrence of the great 1960 Chilean earthquake

    NASA Technical Reports Server (NTRS)

    Stein, S.; Engeln, J. F.; Demets, C.; Gordon, R. G.; Woods, D.

    1986-01-01

    The seismic slip rate along the Chile Trench estimated from the slip in the great 1960 earthquake and the recurrence history of major earthquakes has been interpreted as consistent with the subduction rate of the Nazca plate beneath South America. The convergence rate, estimated from global relative plate motion models, depends significantly on closure of the Nazca - Antarctica - South America circuit. NUVEL-1, a new plate motion model which incorporates recently determined spreading rates on the Chile Rise, shows that the average convergence rate over the last three million years is slower than previously estimated. If this time-averaged convergence rate provides an appropriate upper bound for the seismic slip rate, either the characteristic Chilean subduction earthquake is smaller than the 1960 event, the average recurrence interval is greater than observed in the last 400 years, or both. These observations bear out the nonuniformity of plate motions on various time scales, the variability in characteristic subduction zone earthquake size, and the limitations of recurrence time estimates.
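    The seismic-slip-rate bookkeeping behind such comparisons can be sketched as follows: the slip implied by a catalog of earthquakes is the summed seismic moment divided by rigidity times fault area, averaged over the catalog duration. The rigidity and geometry in the test are assumed round numbers, not values from the paper:

    ```python
    def seismic_slip_rate_mm_yr(total_moment_nm, fault_length_m, fault_width_m,
                                years, rigidity_pa=3.0e10):
        """Seismic slip rate implied by a catalog of earthquakes on a
        fault segment: slip = sum(M0) / (mu * L * W), averaged over the
        catalog duration. The rigidity default is a conventional crustal
        value (30 GPa)."""
        slip_m = total_moment_nm / (rigidity_pa * fault_length_m * fault_width_m)
        return slip_m / years * 1000.0
    ```

    Comparing this seismic slip rate with the plate-convergence rate from a model like NUVEL-1 is the moment-budget argument the abstract describes: if the seismic rate exceeds the tectonic rate, the assumed event size or recurrence interval must be revised.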

  15. Earthquake nucleation on faults with rate-and state-dependent strength

    USGS Publications Warehouse

    Dieterich, J.H.

    1992-01-01

    Dieterich, J.H., 1992. Earthquake nucleation on faults with rate- and state-dependent strength. In: T. Mikumo, K. Aki, M. Ohnaka, L.J. Ruff and P.K.P. Spudich (Editors), Earthquake Source Physics and Earthquake Precursors. Tectonophysics, 211: 115-134. Faults with rate- and state-dependent constitutive properties reproduce a range of observed fault slip phenomena, including spontaneous nucleation of slip instabilities at stresses above some critical stress level and recovery of strength following slip instability. Calculations with a plane-strain fault model with spatially varying properties demonstrate that accelerating slip precedes instability and becomes localized to a fault patch. The dimensions of the fault patch follow scaling relations for the minimum critical length for unstable fault slip. The critical length is a function of normal stress, loading conditions and constitutive parameters, which include Dc, the characteristic slip distance. If slip starts on a patch that exceeds the critical size, the length of the rapidly accelerating zone tends to shrink to the characteristic size as the time of instability approaches. Solutions have been obtained for a uniform, fixed-patch model that are in good agreement with results from the plane-strain model. Over a wide range of conditions, above the steady-state stress, the logarithm of the time to instability decreases linearly as the initial stress increases. Because nucleation patch length and premonitory displacement are proportional to Dc, the moment of premonitory slip scales as Dc^3. The scaling of Dc is currently an open question. Unless Dc for earthquake faults is significantly greater than that observed on laboratory faults, premonitory strain arising from the nucleation process for earthquakes may be too small to detect using current observation methods. Excluding the possibility that Dc in the nucleation zone controls the magnitude of the subsequent earthquake, then the source dimensions of the smallest

  16. The failure of earthquake failure models

    USGS Publications Warehouse

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate-state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ, or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations, such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  17. First Results of the Regional Earthquake Likelihood Models Experiment

    USGS Publications Warehouse

    Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.

    2010-01-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant for an application of 5 years), we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. © 2010 The Author(s).

  18. First Results of the Regional Earthquake Likelihood Models Experiment

    NASA Astrophysics Data System (ADS)

    Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.

    2010-08-01

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
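The consistency evaluations described above can be illustrated with a minimal Poisson "N-test" of the kind used in RELM/CSEP testing: compare the observed number of target earthquakes with a forecast's expected count. The forecast rate and observed count below are made-up illustrative numbers, not results of the experiment:

```python
import math

# Hedged sketch of a Poisson "N-test" of the kind used in RELM/CSEP
# evaluations: compare the observed number of target earthquakes with a
# forecast's expected count.  The rate and count below are made-up
# illustrative numbers, not values from the experiment.

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n + 1))

forecast_rate = 8.0     # expected number of target events over the test period (assumed)
observed_count = 12     # observed number of target events (assumed)

delta = poisson_cdf(observed_count, forecast_rate)
consistent = 0.025 < delta < 0.975   # two-sided consistency at roughly the 5% level
print(round(delta, 3), consistent)
```

A forecast is rejected by this test only when the observed count falls far into either tail of its own predicted Poisson distribution.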

  19. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    NASA Astrophysics Data System (ADS)

    Gunardi, Setiawan, Ezra Putranda

    2015-12-01

    Indonesia is a country with a high risk of earthquakes because of its position on the boundaries of Earth's tectonic plates. An earthquake can cause a very high amount of damage, loss, and other economic impacts. Indonesia therefore needs a mechanism for transferring earthquake risk from the government or the (re)insurance company, so that enough money can be collected for implementing rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, `act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investor is reduced or stopped, and that cash flow is paid to the sponsor company to compensate its loss from the catastrophe event. When only earthquakes are considered, the amount of the reduced cash flow can be determined from the earthquake's magnitude. A case study with Indonesian earthquake magnitude data shows that the probability distribution of the maximum magnitude can be modeled by a generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate that follows the Cox-Ingersoll-Ross (CIR) model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, `coupon only at risk' bonds, and `principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model's parameters, the GEV parameters, the coupon percentage, and the discounted-cash-flow rule is then explored via Monte Carlo simulation.
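A minimal Monte Carlo pricing sketch along the lines described above, for the zero-coupon case only: a CIR short rate generates stochastic discount factors, a GEV distribution generates annual maximum magnitudes, and the principal is reduced if the magnitude trigger is hit. Every parameter value (CIR kappa/theta/sigma, GEV mu/s/xi, the 7.5 trigger, the 50% recovery) is an illustrative assumption, not a value calibrated to Indonesian data:

```python
import math, random

random.seed(42)

# Hedged Monte Carlo sketch of a zero-coupon CAT bond price under a CIR
# short rate and a GEV annual-maximum magnitude, in the spirit of the
# abstract.  All parameter values below are illustrative assumptions.

def cir_discount(r0=0.05, kappa=0.5, theta=0.05, sigma=0.1, T=3.0, steps=36):
    """One sample of the discount factor exp(-integral of r dt), Euler scheme."""
    dt = T / steps
    r, integral = r0, 0.0
    for _ in range(steps):
        integral += r * dt
        r += kappa * (theta - r) * dt + sigma * math.sqrt(max(r, 0.0) * dt) * random.gauss(0.0, 1.0)
        r = max(r, 0.0)               # keep the rate non-negative
    return math.exp(-integral)

def gev_annual_max(mu=6.0, s=0.5, xi=0.1):
    """One sample of the annual maximum magnitude (GEV inverse-CDF method)."""
    u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
    return mu + s * ((-math.log(u)) ** (-xi) - 1.0) / xi

def price_zero_coupon(face=1.0, trigger=7.5, recovery=0.5, T=3, n=5000):
    """Full face value at maturity unless some annual maximum hits the trigger."""
    total = 0.0
    for _ in range(n):
        m_max = max(gev_annual_max() for _ in range(T))
        payoff = face if m_max < trigger else recovery * face
        total += cir_discount(T=float(T)) * payoff
    return total / n

price = price_zero_coupon()
print(round(price, 3))   # roughly 0.75-0.80 with these assumed parameters
```

The `coupon only at risk' and `principal and coupon at risk' variants would differ only in the payoff rule inside the simulation loop.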

  20. Methodology for earthquake rupture rate estimates of fault networks: example for the western Corinth rift, Greece

    NASA Astrophysics Data System (ADS)

    Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien

    2017-10-01

    Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using the geological, seismological and geodetic information to invert the earthquake rates. In many parts of the world, seismological and geodetic information along fault networks is often not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single-fault or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude-frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~15 mm yr-1 north
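The distance criterion for defining FtF ruptures can be sketched as below: two faults may rupture together if the minimum distance between their traces is under a threshold. The fault geometries, the 5 km threshold, and the pairwise connectivity check are all illustrative simplifications, not the paper's actual implementation:

```python
import itertools, math

# Hedged sketch of distance-based fault-to-fault (FtF) rupture enumeration:
# faults are allowed to rupture together if their traces lie within a
# maximum jump distance.  Geometries (km) and the 5 km threshold are
# illustrative assumptions.

faults = {
    "A": [(0.0, 0.0), (10.0, 0.0)],
    "B": [(12.0, 1.0), (22.0, 1.0)],   # ~2.2 km from A's eastern tip
    "C": [(40.0, 0.0), (50.0, 0.0)],   # isolated
}

def min_distance(trace1, trace2):
    return min(math.dist(p, q) for p in trace1 for q in trace2)

def can_jump(f1, f2, max_jump_km=5.0):
    return min_distance(faults[f1], faults[f2]) <= max_jump_km

# Enumerate rupture sets: single faults, plus multi-fault combinations in
# which every fault has at least one neighbor within the jump distance
# (a simplified pairwise stand-in for full connectivity).
ruptures = []
names = sorted(faults)
for size in range(1, len(names) + 1):
    for combo in itertools.combinations(names, size):
        if size == 1 or all(any(can_jump(a, b) for b in combo if b != a) for a in combo):
            ruptures.append(combo)
print(ruptures)   # [('A',), ('B',), ('C',), ('A', 'B')]
```

Each enumerated rupture would then receive a rate in the inversion, subject to the system-wide MFD and per-segment slip-rate constraints.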

  1. Laboratory constraints on models of earthquake recurrence

    NASA Astrophysics Data System (ADS)

    Beeler, N. M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian; Goldsby, David

    2014-12-01

    In this study, rock friction "stick-slip" experiments are used to develop constraints on models of earthquake recurrence. Constant rate loading of bare rock surfaces in high-quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip-rate-dependent process that also determines the size of the stress drop and, as a consequence, stress drop varies weakly but systematically with loading rate. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. The experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a nonlinear slip predictable model. The fault's rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence covary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability, and successive stress drops are strongly correlated indicating a "memory" of prior slip history that extends over at least one recurrence cycle.
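The nonlinear slip-predictable model described above can be sketched as a logarithmic dependence of stress drop on recurrence interval. The slope parameter and reference values below are illustrative assumptions, not the laboratory numbers:

```python
import math

# Hedged sketch of the nonlinear slip-predictable model described above:
# stress drop depends logarithmically on the recurrence interval, with the
# fault's rate dependence of strength (A) setting the slope.  All values
# are illustrative assumptions.

A = 0.2e6      # rate dependence of strength, Pa per e-fold of recurrence (assumed)
tau1 = 3.0e6   # stress drop at the reference recurrence interval, Pa (assumed)
t1 = 100.0     # reference recurrence interval, s

def stress_drop(t_r):
    """Stress drop (Pa) for recurrence interval t_r (s)."""
    return tau1 + A * math.log(t_r / t1)

# Ten-fold changes in loading rate shift recurrence roughly ten-fold but
# change the stress drop only by A*ln(10): weakly but systematically.
for t_r in (100.0, 1000.0, 10000.0):
    print(t_r, round(stress_drop(t_r) / 1e6, 3), "MPa")
```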

  2. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.

  3. Increased Earthquake Rates in the Central and Eastern US Portend Higher Earthquake Hazards

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Rubinstein, J. L.; Ellsworth, W. L.; Mueller, C. S.; Michael, A. J.; McGarr, A.; Petersen, M. D.; Weingarten, M.; Holland, A. A.

    2014-12-01

    Since 2009 the central and eastern United States has experienced an unprecedented increase in the rate of M≥3 earthquakes that is unlikely to be due to natural variation. Where the rates have increased, so has the seismic hazard, making it important to understand these changes. Areas with significant seismicity increases are limited to areas where oil and gas production takes place. By far the largest contributor to the seismicity increase is Oklahoma, where recent studies suggest that these rate changes may be due to fluid injection (e.g., Keranen et al., Geology, 2013; Science, 2014). Moreover, the area of increased seismicity in northern Oklahoma that began in 2013 coincides with the Mississippi Lime play, where well completions greatly increased the year before the seismicity increase. This suggests a link to oil and gas production, either directly or through the disposal of significant amounts of produced water within the play. For the purpose of assessing the hazard due to these earthquakes, should they be treated differently from natural earthquakes? Previous studies suggest that induced seismicity may differ from natural seismicity in clustering characteristics or frequency-magnitude distributions (e.g., Bachmann et al., GJI, 2011; Llenos and Michael, BSSA, 2013). These differences could affect time-independent hazard computations, which typically assume that clustering and size distribution remain constant. In Oklahoma, as well as other areas of suspected induced seismicity, we find that earthquakes since 2009 tend to be considerably more clustered in space and time than before 2009. However, differences between various regional and national catalogs leave unclear whether there are significant changes in magnitude distribution. Whether they are due to natural or industrial causes, the increased earthquake rates in these areas could increase the hazard in ways that are not accounted for in current hazard assessment practice. Clearly the possibility of induced

  4. Earthquake induced variations in extrusion rate: A numerical modeling approach to the 2006 eruption of Merapi Volcano (Indonesia)

    NASA Astrophysics Data System (ADS)

    Carr, Brett B.; Clarke, Amanda B.; de'Michieli Vitturi, Mattia

    2018-01-01

    Extrusion rates during lava dome-building eruptions are variable and eruption sequences at these volcanoes generally have multiple phases. Merapi Volcano, Java, Indonesia, exemplifies this common style of activity. Merapi is one of Indonesia's most active volcanoes and during the 20th and early 21st centuries effusive activity has been characterized by long periods of very slow (<0.1 m³ s⁻¹) extrusion rate interrupted every few years by short episodes of elevated extrusion rates (1-4 m³ s⁻¹) lasting weeks to months. One such event occurred in May-July 2006, and previous research has identified multiple phases with different extrusion rates and styles of activity. Using input values established in the literature, we apply a 1D, isothermal, steady-state numerical model of magma ascent in a volcanic conduit to explain the variations and gain insight into corresponding conduit processes. The peak phase of the 2006 eruption occurred in the two weeks following the May 27 Mw 6.4 earthquake 50 km to the south. Previous work has suggested that the peak extrusion rates observed in early June were triggered by the earthquake through either dynamic stress-induced overpressure or the addition of CO2 due to decarbonation and gas escape from new fractures in the bedrock. We use the numerical model to test the feasibility of these proposed hypotheses and show that, in order to explain the observed change in extrusion rate, an increase of approximately 5-7 MPa in magma storage zone overpressure is required. We also find that the addition of ∼1000 ppm CO2 to some portion of the magma in the storage zone following the earthquake reduces water solubility such that gas exsolution is sufficient to generate the required overpressure. Thus, the proposed mechanism of CO2 addition is a viable explanation for the peak phase of the Merapi 2006 eruption. A time-series of extrusion rate shows a sudden increase three days following the earthquake. We explain this three-day delay by the

  5. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    USGS Publications Warehouse

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

    The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes, which occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large earthquake rates would be grossly underestimated by extrapolating small earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes.
Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude
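The consequence of the kinked magnitude-frequency distribution described above can be shown with a short worked example. The b slopes (1.0 below M 5.0, 0.6 above) come from the abstract; the absolute rate at M 5.0 is an illustrative assumption:

```python
# Hedged sketch of the "kinked" magnitude-frequency distribution described
# in the abstract for Hawaiian flank earthquakes: b ~ 1.0 below M 5.0 and
# b ~ 0.6 above.  The absolute rate at M 5.0 is an illustrative assumption.

rate_at_M5 = 1.0   # cumulative rate of M >= 5.0 events per year (assumed)

def rate_kinked(M):
    """Cumulative annual rate of events >= M, with a kink at M 5.0."""
    if M <= 5.0:
        return rate_at_M5 * 10.0 ** (1.0 * (5.0 - M))
    return rate_at_M5 * 10.0 ** (-0.6 * (M - 5.0))

def rate_extrapolated(M):
    """Same small-earthquake rates, but b = 1.0 extended above the kink."""
    return rate_at_M5 * 10.0 ** (1.0 * (5.0 - M))

# Extrapolating the small-earthquake slope past the kink underestimates
# the rate of M 7.0 events by a factor of 10**0.8, about 6.3x.
print(rate_kinked(7.0) / rate_extrapolated(7.0))
```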

  6. Characteristics of broadband slow earthquakes explained by a Brownian model

    NASA Astrophysics Data System (ADS)

    Ide, S.; Takeo, A.

    2017-12-01

    The Brownian slow earthquake (BSE) model (Ide, 2008, 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon including tectonic tremors, low-frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of a broadband slow earthquake may not have been widely accepted, most recent observations are consistent with this concept. Here we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For a shorter duration, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic

  7. Laboratory constraints on models of earthquake recurrence

    USGS Publications Warehouse

    Beeler, Nicholas M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian D.; Goldsby, David L.

    2014-01-01

    In this study, rock friction ‘stick-slip’ experiments are used to develop constraints on models of earthquake recurrence. Constant-rate loading of bare rock surfaces in high quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip rate-dependent process that also determines the size of the stress drop [Dieterich, 1979; Ruina, 1983] and as a consequence, stress drop varies weakly but systematically with loading rate [e.g., Gu and Wong, 1991; Karner and Marone, 2000; McLaskey et al., 2012]. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. As follows from the previous studies referred to above, experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a non-linear slip-predictable model. The fault’s rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence co-vary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability and successive stress drops are strongly correlated indicating a ‘memory’ of prior slip history that extends over at least one recurrence cycle.

  8. Seismic Moment, Seismic Energy, and Source Duration of Slow Earthquakes: Application of Brownian slow earthquake model to three major subduction zones

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Maury, Julie

    2018-04-01

    Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at the seismological frequencies, the seismic moment rates are significantly different in the geodetic observation. This difference is ascribed to the difference in the characteristic times of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explains recent findings of a near-constant source duration for low-frequency earthquake families.

  9. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  10. Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence

    USGS Publications Warehouse

    Beeler, N.M.; Lockner, D.A.

    2003-01-01

    We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate, which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ~1 year.
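The long-period (threshold-failure) regime described above can be sketched with a toy simulation: under combined constant-rate and sinusoidal loading, first-passage failure times cluster at favorable phases of the oscillation. The stressing rate, amplitude, and threshold population below are illustrative assumptions, not the experimental values:

```python
import math

# Hedged sketch of threshold (Coulomb-style) failure under combined
# constant-rate plus sinusoidal loading, the long-period regime described
# in the abstract: failure times correlate with the phase of the
# oscillation.  All values are illustrative.

rate = 1.0       # constant "tectonic" stressing rate (stress per unit time)
amp = 0.3        # oscillation amplitude (same stress units)
period = 1.0     # oscillation period (time units)

def failure_time(threshold, dt=5e-4):
    """First time the combined stress history reaches the failure threshold."""
    t = 0.0
    while rate * t + amp * math.sin(2.0 * math.pi * t / period) < threshold:
        t += dt
    return t

# Uniformly spaced failure thresholds stand in for a population of faults.
phases = [(failure_time(0.05 + 0.02 * k) % period) / period for k in range(150)]

# Bin the failure phases into quarters of the cycle: failures avoid the
# falling part of the sinusoid, so the histogram is strongly non-uniform
# even though the thresholds are uniform.
counts = [0, 0, 0, 0]
for p in phases:
    counts[min(int(p * 4), 3)] += 1
print(counts)
```

Shortening the period below the nucleation time would require modeling the rate-dependent nucleation process itself, which this threshold sketch deliberately omits.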

  11. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    NASA Astrophysics Data System (ADS)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Increasing our understanding of what contributes to casualties in earthquakes will require coordinated data-gathering efforts amongst disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models purely based on historic casualty data, or even worse, rates derived from other countries, will be of very limited value. What’s more, as the world’s population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities in the cities which most need it. Coupling the theories and findings from

  12. Increases in seismicity rate in the Tokyo Metropolitan area after the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Ishibe, T.; Satake, K.; Sakai, S.; Shimazaki, K.; Tsuruoka, H.; Nakagawa, S.; Hirata, N.

    2013-12-01

    Abrupt increases in seismicity rate have been observed in the Kanto region, where the Tokyo Metropolitan area is located, after the 2011 off the Pacific coast of Tohoku earthquake (M9.0) on March 11, 2011. They are well explained by the static increases in the Coulomb Failure Function (ΔCFF) imparted by the gigantic thrusting, while some other possible factors (e.g., dynamic stress changes, excess fluid dehydration, post-seismic slip) may also contribute to the rate changes. Because various types of earthquakes with different focal mechanisms occur in the Kanto region, the receiver faults for the calculation of ΔCFF were assumed to be the two nodal planes of small earthquakes before and after the Tohoku earthquake. The regions where the seismicity rate increased after the Tohoku earthquake correlate well with concentrations of positive ΔCFF (i.e., southwestern Ibaraki and northern Chiba prefectures, where intermediate-depth earthquakes occur, and the shallow crust of western Kanagawa, eastern Shizuoka, and southeastern Yamanashi, including the Izu and Hakone regions). The seismicity rate has increased since March 11, 2011 with respect to the Epidemic Type Aftershock Sequence (ETAS) model (Ogata, 1988), suggesting that the rate increase was due to the stress increase caused by the Tohoku earthquake. Furthermore, the z-values immediately after the Tohoku earthquake show the minimum values of the recent 10 years, indicating significant increases in seismicity rate. At intermediate depth, abrupt increases in thrust-faulting earthquakes are consistent with the Coulomb stress increase. At shallow depth, earthquakes with T-axes oriented roughly NE-SW were activated, probably due to the E-W extension of the overriding continental plate, and this is also well explained by the Coulomb stress increase. However, the activated seismicity in the Izu and Hakone regions rapidly decayed following the Omori-Utsu formula, while the increased rate of seismicity in the southwestern
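The Omori-Utsu decay cited above has the form n(t) = K / (t + c)^p. A minimal sketch, with parameter values that are illustrative assumptions rather than fits to the Izu/Hakone sequences:

```python
# Hedged sketch of the Omori-Utsu aftershock decay law cited in the
# abstract, n(t) = K / (t + c)**p.  Parameter values are illustrative
# assumptions, not fits to the Izu/Hakone sequences.

K, c, p = 100.0, 0.1, 1.1   # events/day, days, decay exponent (assumed)

def omori_utsu_rate(t_days):
    """Aftershock rate (events/day) at time t_days after the mainshock."""
    return K / (t_days + c) ** p

# The rate decays rapidly: more than a ten-fold drop between day 1 and day 10
# with these parameters.
print(round(omori_utsu_rate(1.0) / omori_utsu_rate(10.0), 2))
```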

  13. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and viscoelastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 earthquakes in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
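For readers comparing the quoted numbers, the conversion between a constant annual rate and a 30-yr Poisson probability of one or more events is a standard calculation (generic probability arithmetic, not code from the report):

```python
import math

def poisson_prob_one_or_more(annual_rate, years):
    """P(at least one event in `years`) for a Poisson process."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_from_prob(prob, years):
    """Annual rate implied by a probability of one or more events."""
    return -math.log(1.0 - prob) / years

# e.g. the 30-yr Poisson probability of 0.60 implies ~0.031 events/yr
rate = rate_from_prob(0.60, 30.0)
```

The empirical model's 0.42 thus corresponds to a lower implied annual rate than the 0.60 Poisson value.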

  14. An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution

    NASA Astrophysics Data System (ADS)

    Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan

    2013-04-01

The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed-seismicity approaches. Smoothed seismicity represents an alternative concept for expressing the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment-rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault, as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-values of a truncated Gutenberg-Richter magnitude distribution with a maximum-likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy for the occurrence of future seismicity and (2) future large-magnitude events are more likely to occur in the vicinity of known faults. Consequently
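The two-density construction described above can be sketched in one dimension; the fixed Gaussian kernel, the grid, and the magnitude-independent weight below are illustrative placeholders for the SHARE implementation (which uses variable kernels and a magnitude-dependent weighting):

```python
import numpy as np

def kernel_density(locations, grid, bandwidth):
    """Smooth point locations (epicentres, or fault-patch centroids that
    could carry moment-rate weights) onto a grid; normalized to sum to 1."""
    d = grid[:, None] - locations[None, :]
    dens = np.exp(-0.5 * (d / bandwidth) ** 2).sum(axis=1)
    return dens / dens.sum()

def blended_density(seismicity_density, fault_density, w_seismicity):
    """Linear weighting of the two location densities; in the model above
    the weight shifts toward the fault density at large magnitudes."""
    return (w_seismicity * seismicity_density
            + (1.0 - w_seismicity) * fault_density)
```

Both inputs being normalized, the blend is itself a probability density for any weight in [0, 1].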

  15. Nanoseismicity and picoseismicity rate changes from static stress triggering caused by a Mw 2.2 earthquake in Mponeng gold mine, South Africa

    NASA Astrophysics Data System (ADS)

    Kozłowska, Maria; Orlecka-Sikora, Beata; Kwiatek, Grzegorz; Boettcher, Margaret S.; Dresen, Georg

    2015-01-01

    Static stress changes following large earthquakes are known to affect the rate and distribution of aftershocks, yet this process has not been thoroughly investigated for nanoseismicity and picoseismicity at centimeter length scales. Here we utilize a unique data set of M ≥ -3.4 earthquakes following a Mw 2.2 earthquake in Mponeng gold mine, South Africa, that was recorded during a quiet interval in the mine to investigate if rate- and state-based modeling is valid for shallow, mining-induced seismicity. We use Dieterich's (1994) rate- and state-dependent formulation for earthquake productivity, which requires estimation of four parameters: (1) Coulomb stress changes due to the main shock, (2) the reference seismicity rate, (3) frictional resistance parameter, and (4) the duration of aftershock relaxation time. Comparisons of the modeled spatiotemporal patterns of seismicity based on two different source models with the observed distribution show that while the spatial patterns match well, the rate of modeled aftershocks is lower than the observed rate. To test our model, we used three metrics of the goodness-of-fit evaluation. The null hypothesis, of no significant difference between modeled and observed seismicity rates, was only rejected in the depth interval containing the main shock. Results show that mining-induced earthquakes may be followed by a stress relaxation expressed through aftershocks located on the rupture plane and in regions of positive Coulomb stress change. Furthermore, we demonstrate that the main features of the temporal and spatial distributions of very small, mining-induced earthquakes can be successfully determined using rate- and state-based stress modeling.
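The Dieterich (1994) formulation named above maps the four estimated quantities onto a seismicity rate. A compact sketch in the common textbook form; any parameter values used with it are illustrative, not those of the Mponeng study:

```python
import math

def dieterich_rate(t, r_background, delta_cff, a_sigma, t_a):
    """Seismicity rate at time t after a Coulomb stress step delta_cff
    (Dieterich, 1994, rate-and-state formulation).

    r_background : reference (pre-shock) seismicity rate
    a_sigma      : frictional resistance parameter A*sigma
    t_a          : aftershock relaxation (decay) time
    """
    gamma = (math.exp(-delta_cff / a_sigma) - 1.0) * math.exp(-t / t_a) + 1.0
    return r_background / gamma
```

A positive stress step boosts the rate immediately after the main shock; the rate then relaxes back toward the reference rate on a timescale of roughly t_a.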

  16. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunardi,; Setiawan, Ezra Putranda

Indonesia is a country with a high risk of earthquake because of its position on the boundaries of the Earth's tectonic plates. An earthquake can cause a very high amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk from the government or the (re)insurance company, so that enough money can be collected for implementing rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, 'act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is joined with the money (premium) from the sponsor company and invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is discounted or stopped, and the cash flow is paid to the sponsor company to compensate their loss from the catastrophe event. When we consider earthquakes only, the amount of discounted cash flow can be determined based on the earthquake's magnitude. A case study with Indonesian earthquake magnitude data shows that the probability of maximum magnitude can be modeled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) interest rate model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, 'coupon only at risk' bonds, and 'principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model's parameters, the GEV's parameters, the percentage of coupon, and the discounted cash flow rule is then explained via Monte Carlo simulation.
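A hedged sketch of the pricing scheme described, for the zero-coupon case with a magnitude trigger: the principal is paid only if the annual maximum magnitude stays below the trigger, discounted along a simulated CIR short-rate path. Every parameter value, the Euler discretization, and the one-year horizon are illustrative assumptions, not the paper's calibration:

```python
import math
import random

def cir_discount_factor(r0, kappa, theta, sigma, horizon, n_steps, rng):
    """One Euler path of the CIR short rate dr = kappa*(theta - r)*dt
    + sigma*sqrt(r)*dW; returns the path's discount factor exp(-int r dt)."""
    dt = horizon / n_steps
    r, integral = r0, 0.0
    for _ in range(n_steps):
        r = abs(r + kappa * (theta - r) * dt
                + sigma * math.sqrt(max(r, 0.0) * dt) * rng.gauss(0.0, 1.0))
        integral += r * dt
    return math.exp(-integral)

def gev_max_magnitude(mu, scale, xi, rng):
    """Inverse-CDF draw of the annual maximum magnitude from a GEV."""
    u = max(rng.random(), 1e-12)
    if abs(xi) < 1e-12:
        return mu - scale * math.log(-math.log(u))
    return mu + scale * ((-math.log(u)) ** (-xi) - 1.0) / xi

def zero_coupon_cat_bond_price(face, trigger_mag, n_paths=5000, seed=1):
    """Monte Carlo price: the principal is lost if the maximum magnitude
    over the year exceeds trigger_mag (illustrative GEV/CIR parameters)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        df = cir_discount_factor(0.05, 0.5, 0.05, 0.05, 1.0, 50, rng)
        m = gev_max_magnitude(6.0, 0.5, 0.1, rng)
        total += df * (face if m < trigger_mag else 0.0)
    return total / n_paths
```

Raising the trigger magnitude makes default less likely and the bond more valuable; the 'coupon at risk' variants would add discounted coupon legs with their own trigger rules.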

  17. Assessing a 3D smoothed seismicity model of induced earthquakes

    NASA Astrophysics Data System (ADS)

    Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan

    2016-04-01

    As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
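The seismogenic index mentioned above is commonly used via the relation log10 N = log10 V + Σ − b·Mmin for the expected number of induced events above magnitude Mmin given injected volume V. The abstract does not spell out its exact formulation, so the following sketch rests on that standard assumption:

```python
import math

def expected_induced_events(injected_volume_m3, seismogenic_index, b_value,
                            mag_min):
    """Expected count of induced events with M >= mag_min under the
    seismogenic-index relation log10 N = log10 V + Sigma - b * mag_min
    (all arguments here are illustrative, not the Basel/Soultz values)."""
    return 10.0 ** (math.log10(injected_volume_m3) + seismogenic_index
                    - b_value * mag_min)
```

The model in the abstract multiplies such a rate by a post-shut-in decay term and distributes it in space via smoothing of past events.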

  18. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994), which combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) with a time-decaying occurrence rate (Utsu et al., 1995), is conventionally employed for assessing the earthquake hazard after a large shock. However, we found that the b-values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock and then gradually increased. The observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. The receiver operating characteristic (ROC) curves (Swets, 1988) finally demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
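The baseline RJ model combines Gutenberg-Richter scaling with Omori-Utsu decay. A compact sketch; the defaults are the widely quoted "generic California" values of Reasenberg and Jones (1989), used here only for illustration, and making b a function of time would give the modified (MRJ) variant the abstract proposes:

```python
def rj_rate(t_days, mainshock_mag, mag_min, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones rate of aftershocks with M >= mag_min at time
    t_days after a mainshock of magnitude mainshock_mag:
    lambda(t, M) = 10**(a + b*(Mm - M)) / (t + c)**p."""
    return 10.0 ** (a + b * (mainshock_mag - mag_min)) / (t_days + c) ** p
```

Integrating this rate over a forecast window gives the expected aftershock count, from which occurrence probabilities follow under a Poisson assumption.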

  19. Quantifying the Earthquake Clustering that Independent Sources with Stationary Rates (as Included in Current Risk Models) Can Produce.

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Nyst, M.; Apel, E. V.; Muir-Wood, R.

    2014-12-01

The recent Canterbury earthquake sequence (CES) renewed public and academic awareness of the clustered nature of seismicity. Multiple event occurrence in short time and space intervals is reminiscent of aftershock sequences, but "aftershock" is a statistical definition, not a label one can give an earthquake in real time. Aftershocks are defined collectively as what creates the Omori event-rate decay after a large event, or as what is taken away as "dependent events" by a declustering method. It is noteworthy that the number of independent events in the Canterbury earthquake sequence varies considerably depending on the declustering method used. This lack of an unambiguous definition of aftershocks leads to the need to investigate the amount of clustering inherent in "declustered" risk models. This is the task we concentrate on in this contribution. We start from a background source model for the Canterbury region, in which 1) centroids of events of given magnitude are distributed using a Latin hypercube lattice, 2) following the range of preferential orientations determined from stress maps and focal mechanisms, 3) with lengths determined using the local scaling relationship, and 4) rates from a- and b-values derived from the declustered pre-2010 catalog. We then proceed to create tens of thousands of realizations of 6- to 20-year periods, and we define criteria to identify which successions of events in the region would be perceived as a sequence. Note that the spatial clustering expected is a lower bound compared to a fully uniform distribution of events. Then we perform the same exercise with rates and b-values determined from the catalog including the CES. If the pre-2010 catalog were long (or rich) enough, then the computed "stationary" rates calculated from it would include the CES declustered events (by construction, regardless of the physical meaning of, or relationship between, those events). In regions of low seismicity rate (e.g., Canterbury before

  20. The Global Earthquake Model - Past, Present, Future

    NASA Astrophysics Data System (ADS)

    Smolka, Anselm; Schneider, John; Stein, Ross

    2014-05-01

The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through consortium-driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practices, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related, but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic

  1. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined by the WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  2. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2. Laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.

    2012-02-01

The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.

  3. Statistical analysis of seismicity rate change in the Tokyo Metropolitan area due to the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Ishibe, T.; Sakai, S.; Shimazaki, K.; Satake, K.; Tsuruoka, H.; Nakagawa, S.; Hirata, N.

    2012-12-01

We examined the relationship between the Coulomb failure function change (ΔCFF) due to the Tohoku earthquake (March 11, 2011; MJMA 9.0) and the seismicity rate change in the Tokyo Metropolitan area following March 2011. Because of the large variation in focal mechanisms in the Kanto region, the receiver faults for the ΔCFF were assumed to be the two nodal planes of small (M ≥ 2.0) earthquakes which occurred before and after the Tohoku earthquake. The seismicity rate changes, particularly the rate increase, are well explained by the ΔCFF due to the gigantic thrusting, while some other possible factors (e.g., dynamic stress changes, excess fluid dehydration) may also contribute to the rate changes. Among the 30,746 previous events provided by the National Research Institute for Earth Science and Disaster Prevention (M ≥ 2.0, July 1979 - July 2003) that we used as receiver faults, almost 16,000 events indicate a significant increase in ΔCFF, while about 8,000 events show a significant decrease. Positive ΔCFF predicts seismicity rate increases in the southwestern Ibaraki and northern Chiba prefectures, where intermediate-depth earthquakes occur, and in the shallow crust of the Izu-Oshima and Hakone regions. In these regions, seismicity rates significantly increased after the Tohoku earthquake. The seismicity has increased since March 2011 relative to the Epidemic Type Aftershock Sequence (ETAS) model (Ogata, 1988), indicating that the rate change was due to the stress increase imposed by the Tohoku earthquake. The activated seismicity in the Izu and Hakone regions rapidly decayed following the Omori-Utsu formula, while the increased rate of seismicity in the southwestern Ibaraki and northern Chiba prefectures is still continuing. We also calculated the ΔCFF due to the 2011 Tohoku earthquake for the focal mechanism solutions of earthquakes between April 2008 and October 2011 recorded on the Metropolitan Seismic Observation network (MeSO-net). The ΔCFF values for the earthquakes after March 2011 show more

  4. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
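The three hybridization rules (a)-(c) can be sketched directly on gridded rate forecasts; renormalizing to a common total rate mirrors the normalization step in the abstract (taking the target total from S here is an arbitrary choice):

```python
import numpy as np

def hybrid_forecast(S, T, method, w_s=0.5):
    """Combine a smoothed-seismicity rate grid S and a tectonic rate
    grid T by one of the three methods described, then renormalize so
    the hybrid keeps the total rate of S."""
    if method == "max":          # (a) greater of S or T
        H = np.maximum(S, T)
    elif method == "linear":     # (b) weighted average of S and T
        H = w_s * S + (1.0 - w_s) * T
    elif method == "log":        # (c) weighted average of log rates
        H = np.exp(w_s * np.log(S) + (1.0 - w_s) * np.log(T))
    else:
        raise ValueError(method)
    return H * (S.sum() / H.sum())
```

In methods (b) and (c), w_s is the single free parameter tuned in the pseudo-prospective tests.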

  5. Earthquake potential in California-Nevada implied by correlation of strain rate and seismicity

    USGS Publications Warehouse

    Zeng, Yuehua; Petersen, Mark D.; Shen, Zheng-Kang

    2018-01-01

Rock mechanics studies and dynamic earthquake simulations show that patterns of seismicity evolve with time through (1) accumulation phase, (2) localization phase, and (3) rupture phase. We observe a similar pattern of changes in seismicity during the past century across California and Nevada. To quantify these changes, we correlate GPS strain rates with seismicity. Earthquakes of M > 6.5 are collocated with regions of highest strain rates. By contrast, smaller magnitude earthquakes of M ≥ 4 show clear spatiotemporal changes. From 1933 to the late 1980s, earthquakes of M ≥ 4 were more diffuse and broadly distributed in both high and low strain rate regions (accumulation phase). From the late 1980s to 2016, earthquakes were more concentrated within the high strain rate areas focused on the major fault strands (localization phase). In the same time period, the rate of M > 6.5 events also increased significantly in the high strain rate areas. The strong correlation between current strain rate and the later period of seismicity indicates that seismicity is closely related to the strain rate. The spatial patterns suggest that before the late 1980s, the strain rate field was also broadly distributed because of the stress shadows from previous large earthquakes. As the deformation field evolved out of the shadow in the late 1980s, strain has refocused on the major fault systems and we are entering a period of increased risk for large earthquakes in California.

  6. Earthquake Potential in California-Nevada Implied by Correlation of Strain Rate and Seismicity

    NASA Astrophysics Data System (ADS)

    Zeng, Yuehua; Petersen, Mark D.; Shen, Zheng-Kang

    2018-02-01

Rock mechanics studies and dynamic earthquake simulations show that patterns of seismicity evolve with time through (1) accumulation phase, (2) localization phase, and (3) rupture phase. We observe a similar pattern of changes in seismicity during the past century across California and Nevada. To quantify these changes, we correlate GPS strain rates with seismicity. Earthquakes of M > 6.5 are collocated with regions of highest strain rates. By contrast, smaller magnitude earthquakes of M ≥ 4 show clear spatiotemporal changes. From 1933 to the late 1980s, earthquakes of M ≥ 4 were more diffuse and broadly distributed in both high and low strain rate regions (accumulation phase). From the late 1980s to 2016, earthquakes were more concentrated within the high strain rate areas focused on the major fault strands (localization phase). In the same time period, the rate of M > 6.5 events also increased significantly in the high strain rate areas. The strong correlation between current strain rate and the later period of seismicity indicates that seismicity is closely related to the strain rate. The spatial patterns suggest that before the late 1980s, the strain rate field was also broadly distributed because of the stress shadows from previous large earthquakes. As the deformation field evolved out of the shadow in the late 1980s, strain has refocused on the major fault systems and we are entering a period of increased risk for large earthquakes in California.

  7. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle the step perturbations occur.
Transient effects may
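The Brownian passage-time density mentioned above is the inverse-Gaussian family parameterized by mean recurrence time and aperiodicity (coefficient of variation). A sketch, with a crude numerically integrated hazard rate (the quadrature resolution is an arbitrary choice):

```python
import math

def bpt_pdf(t, mean, alpha):
    """Brownian passage-time (inverse Gaussian) density; `alpha` is the
    coefficient of variation (aperiodicity)."""
    return (math.sqrt(mean / (2.0 * math.pi * (alpha ** 2) * t ** 3))
            * math.exp(-((t - mean) ** 2) / (2.0 * mean * alpha ** 2 * t)))

def bpt_hazard(t, mean, alpha, n_steps=2000):
    """Hazard rate f(t) / (1 - F(t)), with F from midpoint quadrature."""
    dt = t / n_steps
    F = sum(bpt_pdf((i + 0.5) * dt, mean, alpha) for i in range(n_steps)) * dt
    return bpt_pdf(t, mean, alpha) / max(1.0 - F, 1e-300)
```

Property (1) in the abstract corresponds to the density vanishing as t → 0; property (2) to the hazard rising toward a maximum near the mean recurrence time before flattening to its quasi-stationary level.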

  8. High Attenuation Rate for Shallow, Small Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe

    2017-09-01

We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the above differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, the apparent attenuation rate for a moderate or large earthquake is essentially the same as that of body waves propagating in a homogeneous medium due to the dominance of surface waves.

  9. Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi

    2018-05-01

    Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate- and state-dependent friction law. By varying only a few model parameters, this simple model can reproduce a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.

  10. Interevent times in a new alarm-based earthquake forecasting model

    NASA Astrophysics Data System (ADS)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing forecasting error, defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate, and short term. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
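    Reading the MR score as the abstract defines it, the inverse of the index of dispersion (Fano factor), it can be computed from a window of interevent times as mean/variance. A minimal sketch; the function name is illustrative:

```python
def moment_ratio(interevent_times):
    """MR = mean / variance of interevent times: the inverse of the
    index of dispersion (Fano factor) described in the abstract."""
    n = len(interevent_times)
    mean = sum(interevent_times) / n
    var = sum((x - mean) ** 2 for x in interevent_times) / n
    return mean / var
```

    Regular (quasi-periodic) sequences have small variance and hence large MR, while for exponential (Poisson) interevent times the MR tends toward the event rate.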

  11. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had not been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  12. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a Gutenberg-Richter magnitude distribution with modifications at higher magnitudes. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. 
The geological model
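    The magnitude distribution described above, a Gutenberg-Richter law with an "a-value," "b-value," and a "corner magnitude" marking a rapid decrease of rate with magnitude, is commonly implemented as a tapered Gutenberg-Richter form. A hedged sketch, assuming the Hanks-Kanamori moment-magnitude relation; the function and parameter names are illustrative:

```python
import math

def moment(mw):
    """Seismic moment in N·m from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)

def tapered_gr_rate(m, a_value, b_value, m_corner, m_min=5.4):
    """Cumulative rate of events >= m: a Gutenberg-Richter power law
    tapered by an exponential roll-off in seismic moment above the
    corner magnitude."""
    gr = 10.0 ** (a_value - b_value * (m - m_min))
    taper = math.exp((moment(m_min) - moment(m)) / moment(m_corner))
    return gr * taper
```

    With a_value = 0 the rate at m_min is 1 per unit time, and the taper makes the rate fall off sharply as m approaches the corner magnitude.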

  13. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  14. Forecasting Induced Seismicity Using Saltwater Disposal Data and a Hydromechanical Earthquake Nucleation Model

    NASA Astrophysics Data System (ADS)

    Norbeck, J. H.; Rubinstein, J. L.

    2017-12-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 22.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.
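    The behavior described, evolution toward a steady seismicity rate set by the ratio of current to background stressing rate, with transients over a timescale inversely proportional to stressing rate, is characteristic of Dieterich (1994)-type rate-and-state seismicity models. A simplified sketch (not the authors' code; all names and values are illustrative) for a step change in stressing rate:

```python
def seismicity_rate_after_step(r0, s_background, s_new, a_sigma, t_max, dt=0.01):
    """Euler integration of the Dieterich state variable gamma:
        d(gamma)/dt = (1 - gamma * stressing_rate) / (a * sigma),
    seeded at steady state under the background stressing rate.
    Seismicity rate is R = r0 / (gamma * s_background)."""
    gamma = 1.0 / s_background
    rates = []
    for _ in range(int(t_max / dt)):
        gamma += dt * (1.0 - gamma * s_new) / a_sigma
        rates.append(r0 / (gamma * s_background))
    return rates
```

    A five-fold increase in stressing rate drives the rate toward 5·r0 over a characteristic time a·sigma/s_new, i.e. inversely proportional to the new stressing rate, matching the transient behavior noted in the abstract.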

  15. Reactivity of seismicity rate to static Coulomb stress changes of two consecutive large earthquakes in the central Philippines

    NASA Astrophysics Data System (ADS)

    Dianala, J. D. B.; Aurelio, M.; Rimando, J. M.; Taguibao, K.

    2015-12-01

    In a region where active faults and seismicity are poorly understood, two large-magnitude reverse-fault related earthquakes occurred within 100 km of each other in separate islands of the Central Philippines—the Mw=6.7 February 2012 Negros earthquake and the Mw=7.2 October 2013 Bohol earthquake. Based on source faults that were defined using onshore, offshore seismic-reflection, and seismicity data, stress transfer models for both earthquakes were calculated using the software Coulomb. Coulomb stress triggering between the two main shocks is unlikely, as the stress change caused by the Negros earthquake on the Bohol fault was -0.03 bars. Correlating the stress changes on optimally-oriented reverse faults with seismicity rate changes shows that areas that decreased both in static stress and seismicity rate after the first earthquake were then areas with increased static stress and increased seismicity rate caused by the second earthquake. These areas with now increased stress, especially those with seismicity showing reactivity to static stress changes caused by the two earthquakes, indicate the presence of active structures on the island of Cebu. Comparing the history of instrumentally recorded seismicity and the recent large earthquakes of Negros and Bohol, these structures in Cebu have the potential to generate large earthquakes. Given that the Philippines' second largest metropolitan area (Metro Cebu) is in close proximity, detailed analysis of the earthquake potential and seismic hazards in these areas should be undertaken.
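    The -0.03 bar figure is a static Coulomb failure stress change. A minimal sketch of the standard quantity, ΔCFS = Δτ + μ′·Δσn, under the sign convention that a positive normal-stress change unclamps the fault; the effective friction coefficient μ′ here is an assumed illustrative value, not taken from the paper:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault.
    d_shear: shear stress change resolved in the slip direction.
    d_normal: normal stress change (positive = unclamping).
    A positive result brings the fault closer to failure."""
    return d_shear + mu_eff * d_normal
```

    A negative ΔCFS, like the -0.03 bars computed for the Bohol fault, moves the receiver fault away from failure, which is why static triggering between the two main shocks was judged unlikely.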

  16. M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model

    USGS Publications Warehouse

    Parsons, Thomas E.

    2006-01-01

     Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.

  17. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the
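    The damage-grade cost factors quoted above map directly to a mean damage (loss) ratio. A minimal sketch using exactly those factors; the function name and input format are illustrative:

```python
# Rebuilding-cost factors per EMS-98 damage grade, as given in the abstract
COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def expected_loss_ratio(grade_probs):
    """Mean damage (loss) ratio given probabilities of damage grades 1-5.
    Probability mass not assigned to any grade is implicitly 'no damage'
    (cost factor 0)."""
    return sum(p * COST_FACTOR[g] for g, p in grade_probs.items())
```

    For example, a building stock split evenly between grade 1 and grade 5 outcomes has an expected loss ratio of 0.5·0.10 + 0.5·1.00 = 0.55 of the rebuilding cost.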

  18. Italian Case Studies Modelling Complex Earthquake Sources In PSHA

    NASA Astrophysics Data System (ADS)

    Gee, Robin; Peruzza, Laura; Pagani, Marco

    2017-04-01

    This study presents two examples of modelling complex seismic sources in Italy, done in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred around Collalto Stoccaggio, a natural gas storage facility in Northern Italy, located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between the gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where the rates of earthquakes are assumed to be time-independent. The source model consists of faults and distributed seismicity to account for earthquakes that cannot be associated with specific structures. All potentially active faults within 50 km of the site are considered, and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study, where we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of Oct 26th and 30th (M6.1 and M6.6 respectively, no deaths). 
The aftershock hazard is modelled using a fault source with complex geometry, based on literature data

  19. Tectonic controls on earthquake size distribution and seismicity rate: slab buoyancy and slab bending

    NASA Astrophysics Data System (ADS)

    Nishikawa, T.; Ide, S.

    2014-12-01

    There are clear variations in maximum earthquake magnitude among Earth's subduction zones. These variations have been studied extensively and attributed to differences in tectonic properties of subduction zones, such as relative plate velocity and subducting plate age [Ruff and Kanamori, 1980]. In addition to maximum earthquake magnitude, the seismicity of medium to large earthquakes also differs among subduction zones, in both the b-value (i.e., the slope of the earthquake size distribution) and the frequency of seismic events. However, the causal relationship between the seismicity of medium to large earthquakes and subduction zone tectonics has been unclear. Here we divide Earth's subduction zones into over 100 study regions following Ide [2013] and estimate b-values and the background seismicity rate—the frequency of seismic events excluding aftershocks—for subduction zones worldwide using the maximum likelihood method [Utsu, 1965; Aki, 1965] and the epidemic type aftershock sequence (ETAS) model [Ogata, 1988]. We demonstrate that the b-value varies as a function of subducting plate age and trench depth, and that the background seismicity rate is related to the degree of slab bending at the trench. Large earthquakes tend to occur relatively frequently (lower b-values) in shallower subduction zones with younger slabs, and more earthquakes occur in subduction zones with deeper trenches and steeper dip angles. These results suggest that slab buoyancy, which depends on subducting plate age, controls the earthquake size distribution, and that intra-slab faults due to slab bending, which increase with the steepness of the slab dip angle, influence the frequency of seismic events, because they produce heterogeneity in plate coupling and efficiently inject fluid to elevate pore fluid pressure on the plate interface. This study reveals tectonic factors that control earthquake size distribution and seismicity rate, and these relationships between seismicity and
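    The maximum-likelihood b-value estimate cited above (Aki, 1965; Utsu, 1965) has a closed form. A sketch, assuming magnitudes binned at width dm (the dm/2 term is the standard binning correction); the function name is illustrative:

```python
import math

def aki_utsu_b_value(mags, m_complete, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965; Utsu 1965) for magnitudes
    at or above the completeness magnitude m_complete, binned at dm."""
    mean_mag = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_mag - (m_complete - dm / 2.0))
```

    If the mean magnitude sits log10(e) ≈ 0.434 units above the corrected completeness level, the estimate is b = 1.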

  20. Laboratory-based maximum slip rates in earthquake rupture zones and radiated energy

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.; Boettcher, M.; Beeler, N.; Boatwright, J.

    2010-01-01

    Laboratory stick-slip friction experiments indicate that peak slip rates increase with the stresses loading the fault to cause rupture. If this applies also to earthquake fault zones, then the analysis of rupture processes is simplified inasmuch as the slip rates depend only on the local yield stress and are independent of factors specific to a particular event, including the distribution of slip in space and time. We test this hypothesis by first using it to develop an expression for radiated energy that depends primarily on the seismic moment and the maximum slip rate. From laboratory results, the maximum slip rate for any crustal earthquake, as well as various stress parameters including the yield stress, can be determined based on its seismic moment and the maximum slip within its rupture zone. After finding that our new equation for radiated energy works well for laboratory stick-slip friction experiments, we used it to estimate radiated energies for five earthquakes with magnitudes near 2 that were induced in a deep gold mine, an M 2.1 repeating earthquake near the San Andreas Fault Observatory at Depth (SAFOD) site and seven major earthquakes in California and found good agreement with energies estimated independently from spectra of local and regional ground-motion data. Estimates of yield stress for the earthquakes in our study range from 12 MPa to 122 MPa with a median of 64 MPa. The lowest value was estimated for the 2004 M 6 Parkfield, California, earthquake whereas the nearby M 2.1 repeating earthquake, as recorded in the SAFOD pilot hole, showed a more typical yield stress of 64 MPa.

  1. Forecasting induced seismicity rate and Mmax using calibrated numerical models

    NASA Astrophysics Data System (ADS)

    Dempsey, D.; Suckale, J.

    2016-12-01

    At Groningen, The Netherlands, several decades of induced seismicity from gas extraction have culminated in an M 3.6 event (mid 2012). From a public safety and commercial perspective, it is desirable to anticipate future seismicity outcomes at Groningen. One way to quantify earthquake risk is Probabilistic Seismic Hazard Analysis (PSHA), which requires an estimate of the future seismicity rate and its magnitude-frequency distribution (MFD). This approach is effective at quantifying risk from tectonic events because the seismicity rate, once measured, is almost constant over timescales of interest. In contrast, rates of induced seismicity vary significantly over building lifetimes, largely in response to changes in injection or extraction. Thus, the key to extending PSHA to induced earthquakes is to estimate future changes of the seismicity rate in response to some proposed operating schedule. Numerical models can describe the physical link between fluid pressure, effective stress change, and the earthquake process (triggering and propagation). However, models with predictive potential for individual earthquakes face the difficulty of characterizing specific heterogeneity - stress, strength, roughness, etc. - at the locations of interest. Modeling catalogs of earthquakes provides a means of averaging over this uncertainty, focusing instead on the collective features of the seismicity, e.g., its rate and MFD. The model we use incorporates fluid pressure and stress changes to describe nucleation and crack-like propagation of earthquakes on stochastically characterized 1D faults. This enables simulation of synthetic catalogs of induced seismicity from which the seismicity rate, location and MFD are extracted. A probability distribution for Mmax - the largest event in some specified time window - is also computed. 
Because the model captures the physics linking seismicity to changes in the reservoir, earthquake observations and operating information can be used to calibrate a
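    A probability distribution for Mmax over a window follows directly from a magnitude-frequency distribution: if N events are expected and their magnitudes are independent draws, then P(Mmax ≤ m) = F(m)^N. A sketch under an untruncated Gutenberg-Richter (exponential) law; this closed-form MFD is an assumption for illustration, since the model in the abstract derives its MFD from simulated catalogs:

```python
import math

def prob_mmax_below(m, n_events, b_value, m_min):
    """P(largest of n_events magnitudes <= m) under an exponential
    (Gutenberg-Richter) magnitude distribution above m_min."""
    beta = b_value * math.log(10.0)
    cdf = 1.0 - math.exp(-beta * (m - m_min))
    return cdf ** n_events
```

    More expected events or a lower b-value pushes the Mmax distribution toward larger magnitudes.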

  2. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and the viscosity of the middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.

  3. Earthquake Cycle Simulations with Rate-and-State Friction and Linear and Nonlinear Viscoelasticity

    NASA Astrophysics Data System (ADS)

    Allison, K. L.; Dunham, E. M.

    2016-12-01

    We have implemented a parallel code that simultaneously models both rate-and-state friction on a strike-slip fault and off-fault viscoelastic deformation throughout the earthquake cycle in 2D. Because we allow fault slip to evolve with a rate-and-state friction law and do not impose the depth of the brittle-to-ductile transition, we are able to address: the physical processes limiting the depth of large ruptures (with hazard implications); the degree of strain localization with depth; the relative partitioning of fault slip and viscous deformation in the brittle-to-ductile transition zone; and the relative contributions of afterslip and viscous flow to postseismic surface deformation. The method uses a discretization that accommodates variable off-fault material properties, depth-dependent frictional properties, and linear and nonlinear viscoelastic rheologies. All phases of the earthquake cycle are modeled, allowing the model to spontaneously generate earthquakes, and to capture afterslip and postseismic viscous flow. We compare the effects of a linear Maxwell rheology, often used in geodetic models, with those of a nonlinear power-law rheology, which laboratory data indicate more accurately represents the lower crust and upper mantle. The viscosity of the Maxwell rheology is set by power-law rheological parameters with an assumed geotherm and strain rate, producing a viscosity that exponentially decays with depth and is constant in time. In contrast, the power-law rheology will evolve an effective viscosity that is a function of the temperature profile and the stress state, and therefore varies both spatially and temporally. We will also integrate the energy equation for the thermomechanical problem, capturing frictional heat generation on the fault and off-fault viscous shear heating, and allowing these in turn to alter the effective viscosity.
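    The stress dependence of the power-law effective viscosity can be made concrete. A sketch assuming a standard dislocation-creep flow law, strain rate = A·σⁿ·exp(-Q/RT), and defining effective viscosity via σ = 2·η·(strain rate); all parameter values passed in are the caller's assumptions:

```python
import math

GAS_CONSTANT = 8.314  # J / (mol K)

def effective_viscosity(stress, A, n, Q, temperature):
    """Effective viscosity for a power-law rheology.
    stress in Pa, A in Pa^-n s^-1, Q in J/mol, temperature in K."""
    strain_rate = A * stress**n * math.exp(-Q / (GAS_CONSTANT * temperature))
    return stress / (2.0 * strain_rate)
```

    For n = 3, doubling the stress lowers the effective viscosity by a factor of four (it scales as stress^(1-n)), which is why the power-law viscosity evolves with the stress state; for n = 1 the stress dependence vanishes, recovering the constant-in-time Maxwell case.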

  4. The effects of varying injection rates in Osage County, Oklahoma, on the 2016 Mw5.8 Pawnee earthquake

    USGS Publications Warehouse

    Barbour, Andrew J.; Norbeck, Jack H.; Rubinstein, Justin L.

    2017-01-01

    The 2016 Mw 5.8 Pawnee earthquake occurred in a region with active wastewater injection into a basal formation group. Prior to the earthquake, fluid injection rates at most wells were relatively steady, but newly collected data show significant increases in injection rate in the years leading up to the earthquake. For the same time period, the total volumes of injected wastewater were roughly equivalent between variable‐rate and constant‐rate wells. To understand the possible influence of these changes in injection, we simulate the variable‐rate injection history and its constant‐rate equivalent in a layered poroelastic half‐space to explore the interplay between pore‐pressure effects and poroelastic effects on the fault leading up to the mainshock. In both cases, poroelastic stresses contribute a significant proportion of Coulomb failure stresses on the fault compared to pore‐pressure increases alone, but the resulting changes in seismicity rate, calculated using a rate‐and‐state frictional model, are many times larger when poroelastic effects are included, owing to enhanced stressing rates. In particular, the variable‐rate simulation predicts more than an order of magnitude increase in seismicity rate above background rates compared to the constant‐rate simulation with equivalent volume. The observed cumulative density of earthquakes prior to the mainshock within 10 km of the injection source exhibits remarkable agreement with seismicity predicted by the variable‐rate injection case.

  5. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard with temporal change in Taiwan, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters from the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published model. It thus offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios such as victim relocation and sheltering.
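    The long-term rupture probabilities here are conditional probabilities from a renewal model: given a fault quiet for `elapsed` years, the chance of rupture within the next `horizon` years is (F(elapsed + horizon) − F(elapsed)) / (1 − F(elapsed)). A generic sketch that accepts any interevent-time CDF (a BPT CDF in this paper's case; the exponential below only illustrates the memoryless limit, and all names and values are illustrative):

```python
import math

def conditional_rupture_prob(cdf, elapsed, horizon):
    """P(rupture within `horizon` | no rupture for `elapsed`),
    for any renewal-model interevent-time CDF."""
    return (cdf(elapsed + horizon) - cdf(elapsed)) / (1.0 - cdf(elapsed))

def exponential_cdf(t, rate=0.02):
    """Memoryless (Poisson) reference case."""
    return 1.0 - math.exp(-rate * t)
```

    With the exponential CDF the elapsed time drops out (a time-independent hazard); with a BPT CDF the probability grows with elapsed time, which is why faults with long elapsed times since their last ruptures carry elevated long-term hazard in this model.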

  6. Relative Contributions of Geothermal Pumping and Long-Term Earthquake Rate to Seismicity at California Geothermal Fields

    NASA Astrophysics Data System (ADS)

    Weiser, D. A.; Jackson, D. D.

    2015-12-01

    In a tectonically active area, a definitive discrimination between geothermally-induced and tectonic earthquakes is difficult to achieve. We focus our study on California's 11 major geothermal fields: Amedee, Brawley, Casa Diablo, Coso, East Mesa, The Geysers, Heber, Litchfield, Salton Sea, Susanville, and Wendel. The Geysers geothermal field is the world's largest geothermal energy producer. California's Division of Oil, Gas, and Geothermal Resources provides field-wide monthly injection and production volumes for each of these sites, which allows us to study the relationship between geothermal pumping activities and seismicity. Since many of the geothermal fields began injecting and producing before nearby seismic stations were installed, we use smoothed seismicity since 1932 from the ANSS catalog as a proxy for tectonic earthquake rate. We examine both geothermal pumping and long-term earthquake rate as factors that may control earthquake rate. Rather than focusing only on the largest earthquake, which is essentially a random occurrence in time, we examine how M≥4 earthquake rate density (probability per unit area, time, and magnitude) varies for each field. We estimate relative contributions to the observed earthquake rate of M≥4 from both a long-term earthquake rate (Kagan and Jackson, 2010) and pumping activity. For each geothermal field, the respective earthquake catalogs (NCEDC and SCSN) are complete above at least M3 during the test period (which we tailor to each site). We test the hypothesis that the observed earthquake rate at a geothermal site during the test period is a linear combination of the long-term seismicity and pumping rates. We use a grid search to determine the confidence interval of the weighting parameters.
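    The hypothesis test described, observed rate as a linear combination of the long-term seismicity and pumping rates with weights found by grid search, can be sketched as a least-squares grid scan. Everything here (names, grid range, step size) is illustrative, not the authors' implementation:

```python
def fit_weights(observed, long_term, pumping, step=0.01, w_max=1.0):
    """Grid search for non-negative weights (w1, w2) minimizing the
    squared misfit of observed ~= w1*long_term + w2*pumping."""
    n_steps = int(round(w_max / step)) + 1
    best_err, best_w = float("inf"), (0.0, 0.0)
    for i in range(n_steps):
        for j in range(n_steps):
            w1, w2 = i * step, j * step
            err = sum((o - (w1 * l + w2 * p)) ** 2
                      for o, l, p in zip(observed, long_term, pumping))
            if err < best_err:
                best_err, best_w = err, (w1, w2)
    return best_w
```

    Profiling the misfit surface around the minimum would then give the confidence interval on the weighting parameters mentioned in the abstract.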

  7. Changes in crustal seismic deformation rates associated with the 1964 Great Alaska earthquake

    USGS Publications Warehouse

    Doser, D.I.; Ratchkovski, N.A.; Haeussler, Peter J.; Saltus, R.

    2004-01-01

    We calculated seismic moment rates from crustal earthquake information for the upper Cook Inlet region, including Anchorage, Alaska, for the 30 yr prior to and 36 yr following the 1964 Great Alaska earthquake. Our results suggest a decrease of over a factor of 1000 in seismic moment rate (in units of dyne centimeters per year) following the 1964 mainshock. We used geologic information on structures within the Cook Inlet basin to estimate a regional geologic moment rate, assuming the structures extend to 30 km depth and have near-vertical dips. The geologic moment rates could underestimate the true rates by up to 70% since it is difficult to determine the amount of horizontal offset that has occurred along many structures within the basin. Nevertheless, the geologic moment rate is only 3-7 times lower than the pre-1964 seismic moment rate, suggesting that the 1964 mainshock significantly slowed regional crustal deformation. If we compare the geologic moment rate to the post-1964 seismic moment rate, the moment rate deficit over the past 36 yr is equivalent to a moment magnitude 6.6-7.0 earthquake. These observed differences in moment rates highlight the difficulty in using seismicity in the decades following a large megathrust earthquake to adequately characterize long-term crustal deformation.
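    The bookkeeping behind the "equivalent moment magnitude 6.6-7.0" statement is the standard Hanks-Kanamori conversion. The moment rates below are hypothetical placeholders, not the values estimated in the study:

```python
import math

def moment_to_mw(m0_dyne_cm):
    """Hanks & Kanamori (1979): Mw = (2/3) * log10(M0) - 10.7, M0 in dyne-cm."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# Hypothetical moment rates (dyne-cm/yr), chosen only to illustrate the method:
geologic_rate = 3.0e24
seismic_rate_post1964 = 1.0e22
years = 36
deficit = (geologic_rate - seismic_rate_post1964) * years
mw_equivalent = moment_to_mw(deficit)  # single-event magnitude of the deficit
```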

  8. Viscoelastic shear zone model of a strike-slip earthquake cycle

    USGS Publications Warehouse

    Pollitz, F.F.

    2001-01-01

    I examine the behavior of a two-dimensional (2-D) strike-slip fault system embedded in a 1-D elastic layer (schizosphere) overlying a uniform viscoelastic half-space (plastosphere) and within the boundaries of a finite width shear zone. The viscoelastic coupling model of Savage and Prescott [1978] considers the viscoelastic response of this system, in the absence of the shear zone boundaries, to an earthquake occurring within the upper elastic layer, steady slip beneath a prescribed depth, and the superposition of the responses of multiple earthquakes with characteristic slip occurring at regular intervals. So formulated, the viscoelastic coupling model predicts that sufficiently long after initiation of the system, (1) average fault-parallel velocity at any point is the average slip rate of that side of the fault and (2) far-field velocities equal the same constant rate. Because of the sensitivity to the mechanical properties of the schizosphere-plastosphere system (i.e., elastic layer thickness, plastosphere viscosity), this model has been used to infer such properties from measurements of interseismic velocity. Such inferences exploit the predicted behavior at a known time within the earthquake cycle. By modifying the viscoelastic coupling model to satisfy the additional constraint that the absolute velocity at prescribed shear zone boundaries is constant, I find that even though the time-averaged behavior remains the same, the spatiotemporal pattern of surface deformation (particularly its temporal variation within an earthquake cycle) is markedly different from that predicted by the conventional viscoelastic coupling model. These differences are magnified as plastosphere viscosity is reduced or as the recurrence interval of periodic earthquakes is lengthened. 
Application to the interseismic velocity field along the Mojave section of the San Andreas fault suggests that the region behaves mechanically like a ~600-km-wide shear zone accommodating 50 mm/yr fault

  9. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  10. Modeling fast and slow earthquakes at various scales.

    PubMed

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  11. Implementation into earthquake sequence simulations of a rate- and state-dependent friction law incorporating pressure solution creep

    NASA Astrophysics Data System (ADS)

    Noda, H.

    2016-05-01

    Pressure solution creep (PSC) is an important elementary process in rock friction at high temperatures, where solubilities of rock-forming minerals are significant. It significantly changes the frictional resistance and enhances time-dependent strengthening. A recent microphysical model for PSC-involved friction of clay-quartz mixtures, which can explain a transition between dilatant and non-dilatant deformation (d-nd transition), was modified here and implemented in dynamic earthquake sequence simulations. The original model amounts to a kind of rate- and state-dependent friction (RSF) law, but it assumed a constant friction coefficient for clay, resulting in zero instantaneous rate dependency in the dilatant regime. In this study, an instantaneous rate dependency for the clay friction coefficient was introduced, consistent with experiments, resulting in a friction law suitable for earthquake sequence simulations. In addition, a term for time-dependent strengthening due to PSC was added, which makes the friction law logarithmically rate-weakening in the dilatant regime. The width of the zone in which clasts overlap or, equivalently, the interface porosity involved in PSC serves as the state variable. Such a concrete physical meaning of the state variable is a great advantage for future modelling studies incorporating other physical processes such as hydraulic effects. Earthquake sequence simulations with different pore pressure distributions demonstrated that excess pore pressure at depth causes deeper rupture propagation with smaller slip per event and a shorter recurrence interval. The simulated ruptures were arrested a few kilometres below the point of pre-seismic peak stress at the d-nd transition and did not propagate spontaneously into the region of pre-seismic non-dilatant deformation. PSC weakens the fault against slow deformation, and thus such a region cannot produce a dynamic stress drop. Dynamic rupture propagation further down to
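    For context, the generic rate- and state-dependent friction (RSF) framework that the modified PSC model reduces to can be written as follows; the constitutive parameters here are illustrative textbook values, not those calibrated in the paper:

```python
import math

def rsf_mu(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-4):
    """RSF friction coefficient: mu0 + a*ln(v/v0) + b*ln(v0*theta/dc).
    v: slip rate (m/s); theta: state variable (s); a < b gives rate weakening."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def theta_dot_aging(v, theta, dc=1e-4):
    """Aging-law state evolution: d(theta)/dt = 1 - v*theta/dc."""
    return 1.0 - v * theta / dc

# At steady state theta = dc/v, so mu_ss = mu0 + (a - b) * ln(v/v0):
v = 1e-5
mu_ss = rsf_mu(v, 1e-4 / v)
```

In Noda's model the state variable instead carries the concrete meaning noted above (clast-overlap width, or interface porosity), which is what makes coupling to other physics tractable.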

  12. Forecast model for great earthquakes at the Nankai Trough subduction zone

    USGS Publications Warehouse

    Stuart, W.D.

    1988-01-01

    An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement- and velocity-dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.

  13. Turkish Compulsory Earthquake Insurance and the "Istanbul Earthquake"

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurred building losses in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground motion levels and damages.
The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  14. Absence of earthquake correlation with Earth tides: An indication of high preseismic fault stress rate

    USGS Publications Warehouse

    Vidale, J.E.; Agnew, D.C.; Johnston, M.J.S.; Oppenheimer, D.H.

    1998-01-01

    Because the rate of stress change from the Earth tides exceeds that from tectonic stress accumulation, tidal triggering of earthquakes would be expected if the final hours of loading of the fault were at the tectonic rate and if rupture began soon after the achievement of a critical stress level. We analyze the tidal stresses and stress rates on the fault planes and at the times of 13,042 earthquakes which are so close to the San Andreas and Calaveras faults in California that we may take the fault plane to be known. We find that the stresses and stress rates from Earth tides at the times of earthquakes are distributed in the same way as tidal stresses and stress rates at random times. While the rate of earthquakes when the tidal stress promotes failure is 2% higher than when the stress does not, this difference in rate is not statistically significant. This lack of tidal triggering implies that preseismic stress rates in the nucleation zones of earthquakes are at least 0.15 bar/h just preceding seismic failure, much above the long-term tectonic stress rate of 10^-4 bar/h.
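    The significance claim can be illustrated with a normal-approximation binomial test; this is a sketch of the arithmetic, not the authors' actual statistic:

```python
import math

def excess_rate_zscore(n_total, rate_ratio):
    """Z-score for the observed split of events between 'encouraging' and
    'discouraging' tidal stress, against a 50/50 null (normal approximation).
    rate_ratio: encouraged rate / discouraged rate (1.02 = a 2% excess)."""
    frac = rate_ratio / (1.0 + rate_ratio)
    n_enc = frac * n_total
    return (n_enc - 0.5 * n_total) / (0.5 * math.sqrt(n_total))

def two_sided_p(z):
    """Two-sided normal p-value."""
    return math.erfc(abs(z) / math.sqrt(2.0))

z = excess_rate_zscore(13042, 1.02)  # 2% excess among 13,042 events
p = two_sided_p(z)                   # well above 0.05: not significant
```

Even with thirteen thousand events, a 2% excess sits around one standard deviation from the null, consistent with the abstract's conclusion.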

  15. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time-scale, including a narrow subduction channel with low static friction, for two settings: one similar to Southern Chile in the region of the great Chile Earthquake of 1960 and one similar to Japan in the region of the Tohoku Earthquake of 2011. We next introduce into the same models the classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models start to generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformations of the Chile and Japan earthquakes. In order to follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles, we use an adaptive time-step algorithm, changing the integration step from 40 sec during the earthquake to between a minute and 5 years during postseismic and interseismic processes. We show that for the case of the Chile earthquake visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations over the day-to-4-year time range. We will demonstrate and discuss modelled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can be best distinguished.

  16. A combined geodetic and seismic model for the Mw 8.3 2015 Illapel (Chile) earthquake

    NASA Astrophysics Data System (ADS)

    Simons, M.; Duputel, Z.; Jiang, J.; Liang, C.; Fielding, E. J.; Agram, P. S.; Owen, S. E.; Moore, A. W.; Kanamori, H.; Rivera, L. A.; Riel, B. V.; Ortega, F.

    2015-12-01

    The 2015 September 16 Mw 8.3 Illapel earthquake occurred on the subduction megathrust offshore of the Chilean coastline between the towns of Valparaiso and Coquimbo. This earthquake is the third event with Mw>8 to impact coastal Chile in the last 6 years. It occurred north of both the 2010 Mw 8.8 Maule earthquake and the 1985 Mw 8.0 Valparaiso earthquake. While the location of the 2015 earthquake is close to the inferred location of a large earthquake in 1943, comparison of seismograms from the two earthquakes suggests the recent event is not clearly a repeat of the 1943 event. To gain a better understanding of the distribution of coseismic fault slip, we develop a finite fault model that is constrained by a combination of static GPS offsets, Sentinel-1A ascending and descending radar interferograms, tsunami waveform measurements made at selected DART buoys, high-rate (1 sample/s) GPS waveforms and strong motion seismic data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing, thereby allowing us to maximize the spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the assumed forward models. At the inherent resolution of the model, the posterior ensemble of purely static models (without using high-rate GPS and strong motion data) is characterized by a distribution of slip that reaches as much as 10 m in localized regions, with significant slip apparently reaching the trench or at least very close to the trench. Based on our W-phase point-source estimate, the event duration is approximately 1.7 minutes. We also present a joint kinematic model and describe the relationship of the coseismic model to the spatial distribution of aftershocks and post-seismic slip.

  17. Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast

    NASA Astrophysics Data System (ADS)

    Toda, Shinji; Enescu, Bogdan

    2011-03-01

    Numerous studies have retrospectively found that the seismicity rate jumps (drops) with coseismic Coulomb stress increase (decrease). The Collaboratory for the Study of Earthquake Predictability (CSEP) instead provides an opportunity for prospective testing of the Coulomb hypothesis. Here we adapt our stress transfer model, incorporating the rate- and state-dependent friction law, to the CSEP Japan seismicity forecast. We demonstrate how to compute the forecast rates of large shocks in 2009 using the large earthquakes of the past 120 years. The time-dependent impact of the coseismic stress perturbations explains qualitatively well the occurrence of the recent moderate-size shocks. This ability is partly similar to that of statistical earthquake clustering models. However, our model differs from them as follows: the off-fault aftershock zones can be simulated using finite fault sources; the regional areal patterns of triggered seismicity are modified by the dominant mechanisms of the potential sources; and the imparted stresses due to large earthquakes produce stress shadows that lead to a reduction of the forecasted number of earthquakes. Although the model relies on several unknown parameters, it is the first physics-based model submitted to the CSEP Japan test center and has the potential to be tuned for short-term earthquake forecasts.
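    The rate/state response of seismicity to a Coulomb stress step that underlies this class of model is commonly written in the Dieterich (1994) form; the parameter values below are illustrative assumptions:

```python
import math

def dieterich_rate(t, dtau, a_sigma, t_a, r=1.0):
    """Seismicity rate at time t after a Coulomb stress step dtau.
    a_sigma: constitutive parameter A*sigma; t_a: aftershock duration;
    r: background rate. R/r > 1 for positive steps, < 1 in stress shadows."""
    gamma = (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r / (1.0 + gamma)

# A +0.1 MPa step (A*sigma = 0.04 MPa) boosts the rate shortly afterwards;
# a -0.1 MPa step casts a shadow; both decay back to r over ~t_a.
r_boost = dieterich_rate(t=0.01, dtau=0.1, a_sigma=0.04, t_a=10.0)
r_shadow = dieterich_rate(t=0.01, dtau=-0.1, a_sigma=0.04, t_a=10.0)
```

The stress-shadow reduction of forecast rates mentioned in the abstract corresponds to the r_shadow branch.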

  18. The Mw 7.7 Bhuj earthquake: Global lessons for earthquake hazard in intra-plate regions

    USGS Publications Warehouse

    Schweig, E.; Gomberg, J.; Petersen, M.; Ellis, M.; Bodin, P.; Mayrose, L.; Rastogi, B.K.

    2003-01-01

    The Mw 7.7 Bhuj earthquake occurred in the Kachchh District of the State of Gujarat, India on 26 January 2001, and was one of the most damaging intraplate earthquakes ever recorded. This earthquake is in many ways similar to the three great New Madrid earthquakes that occurred in the central United States in 1811-1812. An Indo-US team is studying the similarities and differences of these sequences in order to learn lessons for earthquake hazard in intraplate regions. Herein we present some preliminary conclusions from that study. Both the Kutch and New Madrid regions have a rift-type geotectonic setting. In both regions the strain rates are of the order of 10^-9/yr, and attenuation of seismic waves, as inferred from observations of intensity and liquefaction, is low. These strain rates predict recurrence intervals for Bhuj- or New Madrid-sized earthquakes of several thousand years or more. In contrast, intervals estimated from paleoseismic studies and from other independent data are significantly shorter, probably hundreds of years. Together, these observations may suggest that earthquakes relax high ambient stresses that are locally concentrated by rheologic heterogeneities, rather than stresses built up by plate-tectonic loading. The latter model generally underlies the basic assumption made in earthquake hazard assessment that the long-term average rate of energy released by earthquakes is determined by the tectonic loading rate, which thus implies an inherent average periodicity of earthquake occurrence. Interpreting the observations in terms of the former model may therefore require re-examining the basic assumptions of hazard assessment.

  19. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
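    A minimal Monte Carlo version of such a significance test, using a pure Poisson (unclustered) null, might look like the sketch below; the alarm windows and event times are invented, and, as the abstract warns, a null with realistic clustering would generally yield a larger (more conservative) p-value:

```python
import random

def alarm_hit_fraction(event_times, alarms):
    """Fraction of events falling inside any (start, end) alarm window."""
    hits = sum(1 for t in event_times if any(s <= t < e for s, e in alarms))
    return hits / len(event_times)

def significance(observed_times, alarms, t_max, n_sims=2000, seed=1):
    """P(a simulated Poisson catalog scores >= the observed catalog).
    Uniform event times in [0, t_max] serve as the unclustered null."""
    random.seed(seed)
    obs = alarm_hit_fraction(observed_times, alarms)
    n = len(observed_times)
    better = 0
    for _ in range(n_sims):
        sim = [random.uniform(0.0, t_max) for _ in range(n)]
        if alarm_hit_fraction(sim, alarms) >= obs:
            better += 1
    return better / n_sims

# Toy example: alarms cover 20% of the time; events cluster inside them.
alarms = [(0.0, 10.0), (50.0, 60.0)]
obs_times = [1.0, 3.0, 55.0, 58.0, 72.0]
p_value = significance(obs_times, alarms, t_max=100.0)
```

Swapping the uniform simulator for a clustered one (e.g. ETAS-style) is exactly the change whose effect the study measures.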

  20. Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie

    2014-05-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions, and find significant variation in the relative performance of the models depending upon the input data.

  1. Modeling Seismic Cycles of Great Megathrust Earthquakes Across the Scales With Focus at Postseismic Phase

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan V.; Muldashev, Iskander A.

    2017-12-01

    Subduction is a substantially multiscale process where the stresses are built by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by subsequent multiple relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process on time scales from 1 min to millions of years. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time step algorithm, recreates the deformation process as observed naturally during the seismic cycle and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or a day after the earthquake are controlled by viscoelastic relaxation in the several hundred kilometers of mantle landward of the trench and not by the afterslip localized at the fault as is currently believed. Our model replicates centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the appearance of the largest earthquakes, like the Great Chile 1960 Earthquake.
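    The adaptive time-step idea can be sketched as a step choice limited by the current peak slip rate. The resolved-slip constant and clipping bounds below are assumptions for illustration (the roughly 1 min floor echoes the abstract's shortest resolved time scale); this is not the authors' published algorithm:

```python
def adaptive_dt(v_max, dc=0.01, c=0.2, dt_min=60.0,
                dt_max=1.0e6 * 365.25 * 86400.0):
    """Pick an integration step inversely proportional to peak slip rate,
    so at most ~c*dc metres of slip accrue per step, clipped between a
    coseismic floor (~1 min) and a geological ceiling (~1 Myr in seconds).
    All constants here are illustrative assumptions."""
    dt = c * dc / max(v_max, 1e-30)
    return min(max(dt, dt_min), dt_max)

# Coseismic (~1 m/s), afterslip (~1e-6 m/s), interseismic (~1e-12 m/s):
steps = [adaptive_dt(v) for v in (1.0, 1e-6, 1e-12)]
```

The step thus grows automatically by many orders of magnitude between the coseismic and interseismic stages, which is what makes a single run spanning minutes to millennia tractable.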

  2. Simple Physical Model for the Probability of a Subduction- Zone Earthquake Following Slow Slip Events and Earthquakes: Application to the Hikurangi Megathrust, New Zealand

    NASA Astrophysics Data System (ADS)

    Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.

    2018-05-01

    Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce any transient stress perturbations. This method has been applied to the locked Hikurangi megathrust (New Zealand), surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate that the probability of a M≥7.8 earthquake in the year after the Kaikoura earthquake increases by a factor of 1.3-18 relative to the pre-Kaikoura probability, and that the absolute probability is in the range of 0.6-7%. We find that probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.

  3. Application of a long-range forecasting model to earthquakes in the Japan mainland testing region

    NASA Astrophysics Data System (ADS)

    Rhoades, David A.

    2011-03-01

    The Every Earthquake a Precursor According to Scale (EEPAS) model is a long-range forecasting method which has been previously applied to a number of regions, including Japan. The Collaboratory for the Study of Earthquake Predictability (CSEP) forecasting experiment in Japan provides an opportunity to test the model at lower magnitudes than previously and to compare it with other competing models. The model sums contributions to the rate density from past earthquakes based on predictive scaling relations derived from the precursory scale increase phenomenon. Two features of the earthquake catalogue in the Japan mainland region create difficulties in applying the model, namely magnitude-dependence in the proportion of aftershocks and in the Gutenberg-Richter b-value. To accommodate these features, the model was fitted separately to earthquakes in three different target magnitude classes over the period 2000-2009. There are some substantial unexplained differences in parameters between classes, but the time and magnitude distributions of the individual earthquake contributions are such that the model is suitable for three-month testing at M ≥ 4 and for one-year testing at M ≥ 5. In retrospective analyses, the mean probability gain of the EEPAS model over a spatially smoothed seismicity model increases with magnitude. The same trend is expected in prospective testing. The Proximity to Past Earthquakes (PPE) model has been submitted to the same testing classes as the EEPAS model. Its role is that of a spatially-smoothed reference model, against which the performance of time-varying models can be compared.

  4. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter.
For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of
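    The binned rate-density forecasts described above are typically scored with a joint Poisson log-likelihood, from which pairwise comparisons (and the alpha error rates mentioned at the end) are built; the bins and counts below are invented:

```python
import math

def poisson_loglike(forecast, observed):
    """Joint log-likelihood of observed counts under independent Poisson
    bins with the forecast expectations (RELM-style likelihood score)."""
    ll = 0.0
    for lam, n in zip(forecast, observed):
        lam = max(lam, 1e-12)  # guard against zero-rate bins
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll

# Two hypothetical forecasts over the same five bins, scored on one catalog:
obs = [0, 2, 1, 0, 3]
model_a = [0.1, 1.5, 0.9, 0.2, 2.5]  # spatially informed
model_b = [1.0, 1.0, 1.0, 1.0, 1.0]  # uniform reference
delta_ll = poisson_loglike(model_a, obs) - poisson_loglike(model_b, obs)
```

A positive delta_ll favors model_a; the significance of such differences is what the paired testing procedure is designed to assess.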

  5. Short- and Long-Term Earthquake Forecasts Based on Statistical Models

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner

    2017-04-01

    Epidemic-type aftershock sequence (ETAS) models have been used experimentally to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and for the 2012 Emilia earthquake sequence. These forecasts represented the first two pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake, the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy, which was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the Seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. Communication to the public is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy before opening the information to the public. With regard to long-term time-dependent earthquake forecasting, the application of a newly developed simulation algorithm to the Calabria region reproduced typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.

  6. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Cocco, M.

    2001-12-01

Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can represent the most suitable tool to test the interaction between large magnitude earthquakes. Despite these important implications and the stimulating perspectives, there exist problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict whether and how the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish whether this ratio changes in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to
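    Probability models of this kind typically build on the Dieterich (1994) rate-and-state expression for the seismicity-rate response to a static stress step. A minimal sketch, with illustrative values assumed for the product a·sigma and the relaxation time:

```python
import math

def seismicity_rate(t, dcff, r=1.0, a_sigma=0.04, t_a=10.0):
    """Dieterich (1994) seismicity-rate response to a static Coulomb
    stress step dcff (MPa) applied at t = 0.

    r       : background rate (events/yr)
    a_sigma : constitutive parameter a times normal stress (MPa) -- assumed
    t_a     : relaxation time a*sigma / stressing rate (yr) -- assumed
    """
    gamma = (math.exp(-dcff / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r / (1.0 + gamma)

# a positive step raises the rate (trigger); a negative one lowers it (shadow);
# both decay back to the background rate over the relaxation time t_a
triggered = seismicity_rate(0.0, 0.1)
shadowed = seismicity_rate(0.0, -0.1)
```

    The time integral of this transient rate change is what feeds the interaction probability discussed in the abstract.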

  7. Fatality rates of the M w ~8.2, 1934, Bihar-Nepal earthquake and comparison with the April 2015 Gorkha earthquake

    NASA Astrophysics Data System (ADS)

    Sapkota, Soma Nath; Bollinger, Laurent; Perrier, Frédéric

    2016-03-01

Large Himalayan earthquakes expose rapidly growing populations of millions of people to high levels of seismic hazard, in particular in northeast India and Nepal. Calibrating vulnerability models specific to this region of the world is therefore crucial to the development of reliable mitigation measures. Here, we reevaluate the >15,700 casualties (8500 in Nepal and 7200 in India) from the Mw ~8.2, 1934, Bihar-Nepal earthquake and calculate the fatality rates for this earthquake using an estimation of the population derived from two censuses held in 1921 and 1942. Values reach 0.7-1 % in the epicentral region, located in eastern Nepal, and 2-5 % in the urban areas of the Kathmandu valley. Assuming constant vulnerability, we estimate that, had the same earthquake repeated in 2011, it would have caused about 33,000 fatalities in Nepal and 50,000 in India. The fast-growing population in India must unavoidably lead to increased levels of casualties compared with Nepal, where population growth is smaller. Aside from that probably robust fact, extrapolations have to be taken with great caution. Among other effects, building and life vulnerability could depend on population concentration and the evolution of construction methods. Indeed, fatalities of the April 25, 2015, Mw 7.8 Gorkha earthquake indicated on average a reduction in building vulnerability in urban areas, while rural areas remained highly vulnerable. While effective scaling laws, as a function of the building stock, seem to describe these differences adequately, vulnerability in the case of an Mw >8.2 earthquake remains largely unknown. Further research should be carried out urgently so that better prevention strategies can be implemented and building codes reevaluated, adequately combining detailed historical and modern data.

  8. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
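    Gridded forecasts of this kind are conventionally scored with a joint Poisson log-likelihood over the cells; the sketch below is a simplified stand-in for the CSEP-style likelihood tests actually used to evaluate RELM submissions:

```python
import math

def poisson_log_likelihood(expected, observed):
    """Joint Poisson log-likelihood of observed counts n_i given forecast
    rates lambda_i, summed cell by cell:
    sum_i ( -lambda_i + n_i * ln(lambda_i) - ln(n_i!) )."""
    ll = 0.0
    for lam, n in zip(expected, observed):
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll

# a forecast that concentrates rate where earthquakes occurred scores higher
good = poisson_log_likelihood([1.0, 0.1, 2.0], [1, 0, 2])
bad = poisson_log_likelihood([0.1, 1.0, 0.1], [1, 0, 2])
```

    Comparing two forecasts then reduces to comparing their joint log-likelihoods over the same observed catalog.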

  10. Insight into the rupture process of a rare tsunami earthquake from near-field high-rate GPS

    NASA Astrophysics Data System (ADS)

    Macpherson, K. A.; Hill, E. M.; Elosegui, P.; Banerjee, P.; Sieh, K. E.

    2011-12-01

    We investigated the rupture duration and velocity of the October 25, 2010 Mentawai earthquake by examining high-rate GPS displacement data. This Mw=7.8 earthquake appears to have ruptured either an up-dip part of the Sumatran megathrust or a fore-arc splay fault, and produced tsunami run-ups on nearby islands that were out of proportion with its magnitude. It has been described as a so-called "slow tsunami earthquake", characterised by a dearth of high-frequency signal and long rupture duration in low-strength, near-surface media. The event was recorded by the Sumatran GPS Array (SuGAr), a network of high-rate (1 sec) GPS sensors located on the nearby islands of the Sumatran fore-arc. For this study, the 1 sec time series from 8 SuGAr stations were selected for analysis due to their proximity to the source and high-quality recordings of both static displacements and dynamic waveforms induced by surface waves. The stations are located at epicentral distances of between 50 and 210 km, providing a unique opportunity to observe the dynamic source processes of a tsunami earthquake from near-source, high-rate GPS. We estimated the rupture duration and velocity by simulating the rupture using the spectral finite-element method SPECFEM and comparing the synthetic time series to the observed surface waves. A slip model from a previous study, derived from the inversion of GPS static offsets and tsunami data, and the CRUST2.0 3D velocity model were used as inputs for the simulations. Rupture duration and velocity were varied for a suite of simulations in order to determine the parameters that produce the best-fitting waveforms.

  11. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Cockett, R.

    2012-12-01

    InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median

  12. Thermomechanical earthquake cycle simulations with rate-and-state friction and nonlinear viscoelasticity

    NASA Astrophysics Data System (ADS)

    Allison, K. L.; Dunham, E. M.

    2017-12-01

We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes, and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally-based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismic fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence intervals, nucleation depths, and down-dip coseismic slip limits. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that basal tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature we also consider the effect of heat generation. We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the
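    The temperature- and stress-dependent effective viscosity of a power-law rheology can be sketched from the flow law: strain rate = A·τⁿ·exp(−Q/RT), with η_eff = τ/(2·strain rate). The parameter values below are illustrative placeholders, not the study's quartz-diorite or olivine flow-law values:

```python
import math

def effective_viscosity(tau_mpa, temp_k, A=1e-4, n=3.0, Q=135e3):
    """Effective viscosity (MPa*s) of a power-law rheology.

    tau_mpa : deviatoric stress (MPa)
    temp_k  : temperature (K)
    A, n, Q : flow-law prefactor, stress exponent, activation energy (J/mol)
              -- illustrative, not calibrated values."""
    R = 8.314  # gas constant, J/(mol*K)
    strain_rate = A * tau_mpa ** n * math.exp(-Q / (R * temp_k))
    return tau_mpa / (2.0 * strain_rate)
```

    The exponential temperature dependence is why shear heating can feed back on the interseismic viscosity, as described above: warming the shear zone lowers η_eff, which localizes flow and further heating.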

  13. Is there a basis for preferring characteristic earthquakes over a Gutenberg–Richter distribution in probabilistic earthquake forecasting?

    USGS Publications Warehouse

    Parsons, Thomas E.; Geist, Eric L.

    2009-01-01

The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for earthquake forecasting as well as a characteristic model does. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
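    The Gutenberg–Richter alternative the authors weigh amounts to extrapolating log₁₀ N(≥M) = a − bM from the instrumental catalog of a fault zone up to large magnitudes. A minimal sketch with illustrative a and b values:

```python
def gr_rate(m, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= m under a Gutenberg-Richter
    law log10 N(>=m) = a - b*m (a and b are illustrative catalog fits)."""
    return 10.0 ** (a - b * m)

def recurrence_interval(m, a=4.0, b=1.0):
    """Mean recurrence interval (yr) implied by the extrapolated rate."""
    return 1.0 / gr_rate(m, a, b)
```

    With these placeholder values, each unit increase in magnitude drops the rate tenfold, so an M ≥ 7 event recurs roughly every thousand years; the uncertainty of the fitted a and b propagates directly into the large-earthquake rate, which is what makes it quantifiable.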

  14. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
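    A receiver operating characteristic comparison of a rate map against observed events can be sketched by sweeping a rate threshold and counting hits and false alarms cell by cell. This is a simplified stand-in for the paper's verification procedure:

```python
def roc_points(rates, observed, thresholds):
    """ROC points for a gridded rate map versus observed target earthquakes.

    rates      : forecast rate in each cell
    observed   : 1 if a target event occurred in the cell, else 0
    thresholds : rate thresholds to sweep
    Returns one (false-alarm rate, hit rate) pair per threshold."""
    n_pos = sum(observed)
    n_neg = len(observed) - n_pos
    points = []
    for th in thresholds:
        # cells at or above the threshold count as "alarmed"
        hits = sum(1 for r, o in zip(rates, observed) if o and r >= th)
        false_alarms = sum(1 for r, o in zip(rates, observed) if not o and r >= th)
        points.append((false_alarms / n_neg, hits / n_pos))
    return points
```

    A forecast whose curve hugs the upper-left corner (high hit rate at low false-alarm rate) outperforms one near the diagonal, which is the comparison made between the nearest-neighbor and ETAS-smoothed maps above.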

  15. Earthquake Rate Model 2.2 of the 2007 Working Group for California Earthquake Probabilities, Appendix D: Magnitude-Area Relationships

    USGS Publications Warehouse

    Stein, Ross S.

    2007-01-01

    Summary To estimate the down-dip coseismic fault dimension, W, the Executive Committee has chosen the Nazareth and Hauksson (2004) method, which uses the 99% depth of background seismicity to assign W. For the predicted earthquake magnitude-fault area scaling used to estimate the maximum magnitude of an earthquake rupture from a fault's length, L, and W, the Committee has assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2002) (as updated in 2007) equations. The former uses a single relation; the latter uses a bilinear relation which changes slope at M=6.65 (A=537 km2).
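    The two magnitude-area relations can be sketched as follows, using the coefficients as published in 2002 for Hanks and Bakun (the 2007 update adopted by the Committee may differ slightly); the equal-weight helper mirrors the Committee's weighting but is an illustrative construction:

```python
import math

def ellsworth_b(area_km2):
    """Ellsworth-B (WGCEP, 2003): M = 4.2 + log10(A), A in km^2."""
    return 4.2 + math.log10(area_km2)

def hanks_bakun_2002(area_km2):
    """Hanks & Bakun (2002) bilinear relation, changing slope near
    A = 537 km^2 (coefficients from the 2002 paper)."""
    if area_km2 <= 537.0:
        return math.log10(area_km2) + 3.98
    return (4.0 / 3.0) * math.log10(area_km2) + 3.07

def max_magnitude(length_km, width_km):
    """Equal-weight average of the two relations for a rupture area L*W
    (illustrative helper, not the Committee's exact procedure)."""
    area = length_km * width_km
    return 0.5 * (ellsworth_b(area) + hanks_bakun_2002(area))
```

    The two branches of the bilinear relation meet almost exactly at A = 537 km², so the predicted magnitude is continuous across the slope change.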

  16. New constraints on the late Quaternary slip rate and earthquake history of the Kalabagh fault from geomorphic mapping: Implications for slip rate and earthquake potential of the western Salt Range thrust

    NASA Astrophysics Data System (ADS)

    Madugo, C. M.; Meigs, A.; Ramzan, S.

    2013-12-01

and fissuring in natural exposures of the KF in the walls of alluvial stream cuts. An OSL age of 6 ± 1 ka for a sand layer that fills fault tip fissures and is cut by other fault strands indicates that the KF has experienced multiple mid-late Holocene surface ruptures. Our results favor the model in which the KF and SRT are linked and the SRT ruptures during large earthquakes, similar to the behavior of the thrust front in the central Himalaya. An outstanding question not reconciled by these data is why existing GPS rates are markedly lower than the intermediate- and long-term slip rates. One potential way to reconcile the low geodetic with the high geologic rates is to interpret the 3 mm/yr geodetic velocity as the creep rate, which implies that the ~11 mm/yr discrepancy represents the loading rate of the Main Himalaya thrust (the décollement to the north of the evaporite deposits), which is relieved in large infrequent earthquakes.

  17. Geodetic Imaging of the Earthquake Cycle

    NASA Astrophysics Data System (ADS)

    Tong, Xiaopeng

In this dissertation I used Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) to recover crustal deformation caused by earthquake cycle processes. The studied areas span three different types of tectonic boundaries: a continental thrust earthquake (M7.9 Wenchuan, China) at the eastern margin of the Tibet plateau, a mega-thrust earthquake (M8.8 Maule, Chile) at the Chile subduction zone, and the interseismic deformation of the San Andreas Fault System (SAFS). A new L-band radar onboard a Japanese satellite ALOS allows us to image high-resolution surface deformation in vegetated areas, which is not possible with older C-band radar systems. In particular, both the Wenchuan and Maule InSAR analyses involved L-band ScanSAR interferometry which had not been attempted before. I integrated a large InSAR dataset with dense GPS networks over the entire SAFS. The integration approach features combining the long-wavelength deformation from GPS with the short-wavelength deformation from InSAR through a physical model. The recovered fine-scale surface deformation leads us to better understand the underlying earthquake cycle processes. The geodetic slip inversion reveals that the fault slip of the Wenchuan earthquake is maximum near the surface and decreases with depth. The coseismic slip model of the Maule earthquake constrains the down-dip extent of the fault slip to be at 45 km depth, similar to the Moho depth. I inverted for the slip rate on 51 major faults of the SAFS using Green's functions for a 3-dimensional earthquake cycle model that includes kinematically prescribed slip events for the past earthquakes since the year 1000. A 60 km thick plate model with effective viscosity of 10^19 Pa·s is preferred based on the geodetic and geological observations. The slip rates recovered from the plate models are compared to the half-space model. The InSAR observation reveals that the creeping section of the SAFS is partially locked. This high

  18. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    USGS Publications Warehouse

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  19. Stress Field Variation after the 2001 Skyros Earthquake, Greece, Derived from Seismicity Rate Changes

    NASA Astrophysics Data System (ADS)

    Leptokaropoulos, K.; Papadimitriou, E.; Orlecka-Sikora, B.; Karakostas, V.

    2012-04-01

The spatial variation of the stress field (ΔCFF) after the 2001 strong (Mw=6.4) Skyros earthquake in the North Aegean Sea, Greece, is investigated in association with the changes of earthquake production rates. A detailed slip model is considered in which the causative fault consists of several sub-faults with different coseismic slip on each of them. First, the spatial distribution of aftershock productivity is compared with the static stress changes due to the coseismic slip. Calculations of ΔCFF are performed at different depths inside the seismogenic layer, defined from the vertical distribution of the aftershocks. Seismicity rates of the smaller magnitude events with M≥Mc for different time increments before and after the main shock are then derived from the application of a Probability Density Function (PDF). These rates are computed by spatially smoothing the seismicity; for this purpose a regular grid of rectangular cells is superimposed onto the area and the PDF determines seismicity rate values at the center of each cell. The differences between the earthquake occurrence rates before and after the main shock are compared and used as input data in a stress inversion algorithm based upon the rate/state-dependent friction concept, in order to provide an independent estimation of stress changes. This model combines the physical properties of the fault zones (characteristic relaxation time, fault constitutive parameters, effective friction coefficient) with a probabilistic estimation of the spatial distribution of seismicity rates, derived from the application of the PDF. The stress patterns derived from the previously mentioned approaches are compared, and the quantitative correlation between the respective results is accomplished by evaluating the Pearson linear correlation coefficient and its confidence intervals to quantify their significance. Different assumptions and combinations of the physical and statistical parameters are tested for

  20. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
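    Under a time-independent Poisson rate model, an annual rate converts to a 30-yr probability of one or more events via P = 1 − exp(−λT); as a sanity check, a statewide rate near 0.19 events/yr gives approximately the 99.7% figure quoted above (the 0.19/yr value is a back-calculated illustration, not a number from the report):

```python
import math

def prob_one_or_more(annual_rate, years=30.0):
    """Probability of at least one event in a time window under a
    time-independent (Poisson) rate model: P = 1 - exp(-rate * years)."""
    return 1.0 - math.exp(-annual_rate * years)

# illustrative: ~0.19 events/yr over 30 yr yields a probability near 99.7%
p30 = prob_one_or_more(0.19)
```

    Note the saturation at high rates: beyond a few expected events per window, the probability is indistinguishable from 1, which is why the M ≥ 6.7 value is so close to certainty while the M ≥ 7.5 value is not.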

  1. Simulate earthquake cycles on the oceanic transform faults in the framework of rate-and-state friction

    NASA Astrophysics Data System (ADS)

    Wei, M.

    2016-12-01

Progress towards a quantitative and predictive understanding of earthquake behavior can be achieved through an improved understanding of earthquake cycles. However, it is hindered by the long repeat times (100s to 1000s of years) of the largest earthquakes on most faults. At fast-spreading oceanic transform faults (OTFs), the typical repeat time ranges from 5 to 20 years, making them a unique tectonic environment for studying the earthquake cycle. One important observation on OTFs is the quasi-periodicity and spatial-temporal clustering of large earthquakes: the same fault segment ruptures repeatedly at a near-constant interval, and nearby segments rupture during a short time period. This has been observed on the Gofar and Discovery faults in the East Pacific Rise. Between 1992 and 2014, five clusters of M6 earthquakes occurred on the Gofar and Discovery fault system with recurrence intervals of 4-6 years. Each cluster consisted of a westward migration of seismicity from the Discovery to the Gofar segment within a 2-year period, providing strong evidence for spatial-temporal clustering of large OTF earthquakes. I simulated earthquake cycles of an oceanic transform fault in the framework of rate-and-state friction, motivated by the observations at the Gofar and Discovery faults. I focus on a model with two seismic segments, each 20 km long and 5 km wide, separated by an aseismic segment 10 km wide. This geometry is based on aftershock locations of the 2008 M6.0 earthquake on Gofar. The repeating large earthquakes on both segments are reproduced with magnitudes similar to those observed. I set the state parameter differently for the two seismic segments so that initially they are not synchronized. Results also show that synchronization of the two seismic patches can be achieved after several earthquake cycles when the effective normal stress or the a-b parameter is smaller than in the surrounding aseismic areas, both of which reduce the resistance to seismic rupture in the VS segment. These
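    The rate-and-state framework named here pairs a friction law with a state-evolution (aging) law; a minimal sketch with illustrative laboratory-scale parameters, where a < b produces the velocity-weakening behavior needed for the seismic segments:

```python
import math

# Illustrative laboratory-scale parameters (not the study's values);
# a < b gives velocity weakening, i.e. potentially seismic slip.
MU0, A, B, V0, D_C = 0.6, 0.008, 0.012, 1e-6, 0.01

def friction(v, theta):
    """Rate-and-state friction coefficient at slip rate v (m/s) and
    state variable theta (s):
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/D_c)."""
    return MU0 + A * math.log(v / V0) + B * math.log(V0 * theta / D_C)

def theta_dot(v, theta):
    """Dieterich aging law: d(theta)/dt = 1 - v*theta/D_c."""
    return 1.0 - v * theta / D_C

# at steady state theta = D_c / v, so the state variable stops evolving
theta_ss = D_C / V0
```

    At steady state the friction reduces to mu0 + (a − b)·ln(v/v0), so with a < b faster sliding is weaker, the instability underlying spontaneous earthquake nucleation in such simulations.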

  2. Toward a comprehensive areal model of earthquake-induced landslides

    USGS Publications Warehouse

    Miles, S.B.; Keefer, D.K.

    2009-01-01

This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitate comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. © 2009 ASCE.

  3. Maximum earthquake magnitudes in the Aegean area constrained by tectonic moment release rates

    NASA Astrophysics Data System (ADS)

    Ch. Koravos, G.; Main, I. G.; Tsapanos, T. M.; Musson, R. M. W.

    2003-01-01

Seismic moment release is usually dominated by the largest but rarest events, making the estimation of seismic hazard inherently uncertain. This uncertainty can be reduced by combining long-term tectonic deformation rates with short-term recurrence rates. Here we adopt this strategy to estimate recurrence rates and maximum magnitudes for tectonic zones in the Aegean area. We first form a merged catalogue of historical and instrumentally recorded earthquakes in the Aegean, based on a recently published catalogue for Greece and surrounding areas covering the period 550 BC-2000 AD, at varying degrees of completeness. The historical data are recalibrated to allow for changes in damping in seismic instruments around 1911. We divide the area into zones that correspond to recent determinations of deformation rate from satellite data. In all zones we find that the Gutenberg-Richter (GR) law holds at low magnitudes. We use Akaike's information criterion to determine the best-fitting distribution at high magnitudes, and classify the resulting frequency-magnitude distributions of the zones as critical (GR law), subcritical (gamma density distribution) or supercritical ('characteristic' earthquake model) where appropriate. We determine the ratio η of seismic to tectonic moment release rate. Low values of η (<0.5), corresponding to relatively aseismic deformation, are associated with higher b values (>1.0). The seismic and tectonic moment release rates are then combined to constrain recurrence rates and maximum credible magnitudes (in the range 6.7-7.6 Mw where the results are well constrained) based on extrapolating the short-term seismic data. With current earthquake data, many of the tectonic zones show a characteristic distribution that leads to an elevated probability of magnitudes around 7, but a reduced probability of larger magnitudes above this value when compared with the GR trend. A modification of the generalized gamma distribution is suggested to account
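    The ratio η of seismic to tectonic moment release rate can be sketched from the standard moment-magnitude relation M₀ = 10^(1.5·Mw + 9.1) N·m; the catalog magnitudes and tectonic rate in the usage line are placeholders, not values from the study:

```python
def moment_from_mw(mw):
    """Seismic moment (N*m) from moment magnitude: M0 = 10**(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def seismic_to_tectonic_ratio(catalog_mags, years, tectonic_moment_rate):
    """eta: observed seismic moment release rate (summed over a catalog
    spanning `years`) divided by the tectonic moment rate (N*m/yr)."""
    seismic_rate = sum(moment_from_mw(m) for m in catalog_mags) / years
    return seismic_rate / tectonic_moment_rate

# placeholder inputs: a short catalog against an assumed tectonic rate
eta = seismic_to_tectonic_ratio([6.0, 5.5, 6.4], 50.0, 1e17)
```

    Because each magnitude unit carries a factor of about 32 in moment, η is controlled by the few largest catalog events, which is exactly the source of uncertainty the abstract describes.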

  4. Gradual decay of elevated landslide rates after a large earthquake in the Finisterre Mountains, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Hovius, N.; Marc, O.

    2013-12-01

    Large earthquakes can cause widespread mass wasting, and landslide rates can stay high after a seismic event. The rate of decay of seismically enhanced mass wasting determines the total erosional effect of an earthquake. It is also an important term in the post-seismic redevelopment of epicentral areas. Using a time series of Landsat images spanning 1990-2010, we have determined the evolution of landslide rates in the western Finisterre Mountains, Papua New Guinea. There, two earthquakes with Mw 6.7 and 6.9 occurred at depths of about 20 km on the range-bounding Ramu-Markam fault in 1993. These earthquakes triggered landslides with a total volume of about 0.15 km^3. Landslide rates were up to four orders of magnitude higher after the earthquakes than in preceding years, decaying to background values over a period of 2-3 years. Due to this short decay time, post-seismic landsliding added only 5% to the volume of co-seismic landslides. This contrasts with another well-documented example, the 1999 Chi-Chi earthquake in Taiwan, where post-seismic landsliding may have increased the total eroded volume by a factor of 3-5. In the Finisterre case, landslide rates may have been slightly less than normal for up to a decade after the decay period, but this effect is partially obscured by the impact of a smaller earthquake in 1997. Regardless, the rate of decay of landslide incidence was unrelated to both the seismic moment release in aftershocks and local precipitation. A control on this decay rate has not yet been identified.
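
    The 2-3 year relaxation described above can be quantified by fitting an exponential decay toward the background landslide rate. The sketch below uses synthetic numbers (not the Landsat-derived rates) and recovers the e-folding decay time by linear regression in log space.

```python
import math

def fit_decay_time(times, rates, background):
    """Fit ln(rate - background) = ln(A) - t/tau by least squares and
    return the e-folding decay time tau (same units as `times`)."""
    ys = [math.log(r - background) for r in rates]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic post-seismic rates: 100x background right after the earthquake,
# relaxing with tau = 1.5 yr (illustrative values only).
bg = 1.0
t_yr = [0.5 * i for i in range(1, 8)]
r_obs = [bg + 100.0 * math.exp(-t / 1.5) for t in t_yr]
tau = fit_decay_time(t_yr, r_obs, bg)
```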

  5. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    USGS Publications Warehouse

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid casualty estimation after an event for humanitarian response. Both of these events resulted in surprisingly high death tolls, numbers of injured, and numbers made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID), exploring data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level, and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
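
    The semi-empirical structure described here, damage rates per building class combined with class-specific lethality, reduces to a double summation. The sketch below uses entirely hypothetical building classes, damage probabilities, and fatality rates, not CEQID-fitted values.

```python
def expected_fatalities(building_stock, damage_rates, fatality_rates):
    """Sum over building classes and damage levels:
    occupants * P(damage level | class) * fatality rate at that level."""
    total = 0.0
    for cls, occupants in building_stock.items():
        for level, p in damage_rates[cls].items():
            total += occupants * p * fatality_rates[cls].get(level, 0.0)
    return total

# Hypothetical example: weak adobe vs engineered RC frame buildings.
stock = {"adobe": 10000, "rc_frame": 20000}
damage = {"adobe": {"collapse": 0.20}, "rc_frame": {"collapse": 0.02}}
lethality = {"adobe": {"collapse": 0.3}, "rc_frame": {"collapse": 0.1}}
deaths = expected_fatalities(stock, damage, lethality)
```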

  6. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results of the earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes at the crustal scale. Constant-stress and constant-strain-rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed by the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide, for the first time, a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
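
    Once a catalogue has been declustered into mainshock-aftershock sequences, the correlation test reduces to comparing each mainshock magnitude with the mean magnitude of its aftershocks. A minimal sketch with made-up sequences (the actual study uses ETAS-based stochastic declustering to build them):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def magnitude_correlation(sequences):
    """sequences: (mainshock magnitude, [aftershock magnitudes]) pairs from
    declustering; correlates mainshock magnitude with mean aftershock magnitude."""
    mains = [m for m, after in sequences if after]
    mean_after = [sum(a) / len(a) for _, a in sequences if a]
    return pearson_r(mains, mean_after)

# Hypothetical sequences in which larger mainshocks have larger aftershocks:
r = magnitude_correlation([(5.0, [4.0, 4.2]), (6.0, [5.0, 5.2]), (7.0, [6.0, 6.2])])
```

Under the independence assumption the expected correlation is near zero; a significantly positive r would indicate magnitude correlations.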

  7. FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.

    USGS Publications Warehouse

    Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.

    1985-01-01

    The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.

  8. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on grouping countries that are expected to have similar susceptibility to future earthquake losses, given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for the generation of automated earthquake alerts. These alerts could help rapid-earthquake-response agencies and governments respond more effectively and so reduce earthquake fatalities. Fatality estimates are also useful for stimulating earthquake preparedness planning and disaster mitigation. The proposed model has several advantages compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
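
    Empirical PAGER-type fatality models are commonly expressed as a two-parameter lognormal function of shaking intensity, combined with the population exposed at each intensity level. The sketch below uses that published functional form, but the parameter values are placeholders; in the actual model θ and β are fitted per country or region from the fatal-earthquake catalog.

```python
import math

def fatality_rate(mmi, theta, beta):
    """Lognormal fatality-rate function of shaking intensity:
    nu(S) = Phi(ln(S/theta) / beta), with Phi the standard normal CDF.
    theta (intensity at 50% of the asymptotic rate) and beta are fitted."""
    return 0.5 * (1.0 + math.erf(math.log(mmi / theta) / (beta * math.sqrt(2.0))))

def expected_deaths(exposure_by_mmi, theta, beta):
    """Sum the exposed population at each intensity times its fatality rate."""
    return sum(pop * fatality_rate(mmi, theta, beta)
               for mmi, pop in exposure_by_mmi.items())
```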

  9. Comparing the stress change characteristics and aftershock decay rate of the 2011 Mineral, VA, earthquake with similar earthquakes from a variety of tectonic settings

    NASA Astrophysics Data System (ADS)

    Walsh, L. S.; Montesi, L. G.; Sauber, J. M.; Watters, T. R.; Kim, W.; Martin, A. J.; Anderson, R.

    2011-12-01

    On August 23, 2011, the magnitude 5.8 Mineral, VA, earthquake rocked the U.S. national capital region (Washington, DC), drawing worldwide attention to the occurrence of intraplate earthquakes. Using regional Coulomb stress change, we evaluate to what extent slip on faults during the Mineral, VA, earthquake and its aftershocks may have increased stress on notable Cenozoic fault systems in the DC metropolitan area: the central Virginia seismic zone, the DC fault zone, and the Stafford fault system. Our Coulomb stress maps indicate that the stress transferred by the Mineral, VA, mainshock was at least 500 times greater than that produced by the magnitude 3.4 Germantown, MD, earthquake that occurred northwest of DC on July 16, 2010. Overall, the Mineral, VA, earthquake appears to have loaded optimally oriented faults in the DC metropolitan region, bringing them closer to failure. The distribution of aftershocks of the Mineral, VA, earthquake will be compared with Coulomb stress change maps. We further characterize the Mineral, VA, earthquake by comparing its aftershock decay rate with those of blind thrust earthquakes of similar magnitude, focal mechanism, and depth from a variety of tectonic settings. In particular, we compare aftershock decay relations of the Mineral, VA, earthquake with two well-studied California reverse-faulting events, the August 4, 1985 Kettleman Hills (Mw = 6.1) and October 1, 1987 Whittier Narrows (Mw = 5.9) earthquakes. Through these relations we test the hypothesis that aftershock duration is inversely proportional to fault stressing rate, which suggests that aftershocks in active tectonic margins may last only a few years, while aftershocks in intraplate regions could endure for decades to a century.
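
    Aftershock decay comparisons of this kind are usually framed with the modified Omori law. The sketch below (illustrative parameter values, not fitted to the Mineral sequence) shows why a low background stressing rate stretches out the apparent aftershock duration: the sequence "ends" when the Omori rate falls to the background rate, which happens much later where that background is low.

```python
def omori_rate(t, k, c, p):
    """Modified Omori law: aftershock rate n(t) = k / (c + t)**p."""
    return k / (c + t) ** p

def aftershock_duration(k, c, p, background):
    """Time at which the Omori rate decays to the background seismicity rate,
    a proxy for sequence duration; solves k/(c+t)^p = background for t."""
    return (k / background) ** (1.0 / p) - c
```

Halving the background rate (roughly tracking the stressing rate) lengthens the inferred duration, consistent with decades-long intraplate sequences.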

  10. How Long Is Long Enough? Estimation of Slip-Rate and Earthquake Recurrence Interval on a Simple Plate-Boundary Fault Using 3D Paleoseismic Trenching

    NASA Astrophysics Data System (ADS)

    Wechsler, N.; Rockwell, T. K.; Klinger, Y.; Agnon, A.; Marco, S.

    2012-12-01

    Models used to forecast future seismicity make fundamental assumptions about the behavior of faults and fault systems in the long term, but in many cases this long-term behavior is inferred from short-term and perhaps non-representative observations. The question arises: how long a record is long enough to represent actual fault behavior, both in terms of earthquake recurrence and of moment release (i.e., slip rate)? We test earthquake recurrence and slip models via high-resolution three-dimensional trenching of the Beteiha (Bet-Zayda) site on the Dead Sea Transform (DST) in northern Israel. We extend the earthquake history of this simple plate-boundary fault to establish the slip rate for the past 3-4 kyr, to determine the amount of slip per event, and to study the fundamental behavior, thereby testing competing rupture models (characteristic, slip-patch, slip-loading, and Gutenberg-Richter-type distribution). To this end we opened more than 900 m of trenches, mapped 8 buried channels, and dated more than 80 radiocarbon samples. By mapping buried channels offset by the DST on both sides of the fault, we obtained an estimate of displacement for each. Coupled with fault-crossing trenches to determine event history, we construct an earthquake and slip history for the fault for the past 2 kyr. We observe evidence for a total of 9-10 surface-rupturing earthquakes with varying offset amounts: 6-7 events occurred in the first millennium CE, compared with just 2-3 in the second. From our observations it is clear that the fault is not behaving in a periodic fashion. A 4 kyr old buried channel yields a slip rate of 3.5-4 mm/yr, consistent with GPS rates for this segment. Yet in spite of the apparent agreement between GPS, Pleistocene-to-present slip rate, and the lifetime rate of the DST, the past 800-1000 year period appears deficient in strain release. Thus, in terms of moment release, most of the fault has remained locked and is accumulating elastic strain.
In contrast, the
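
    A slip rate from an offset buried channel is simply displacement divided by age, but both carry dating and measurement uncertainty, so such rates are often reported with Monte Carlo error propagation. A minimal sketch, with offset and age values loosely echoing the abstract's channel (the uncertainties are assumed, not the study's):

```python
import random

def slip_rate_mc(offset_m, offset_sd, age_yr, age_sd, n=20000, seed=1):
    """Median slip rate (mm/yr) from an offset channel, propagating Gaussian
    uncertainties in offset and age by Monte Carlo sampling."""
    rng = random.Random(seed)
    rates = sorted(1000.0 * rng.gauss(offset_m, offset_sd)
                   / rng.gauss(age_yr, age_sd) for _ in range(n))
    return rates[n // 2]

# e.g. a 15 +/- 0.5 m offset in a channel dated at 4.0 +/- 0.2 kyr
rate = slip_rate_mc(15.0, 0.5, 4000.0, 200.0)
```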

  11. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from those parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out using the determined fault models, and we found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, for tsunami early warning purposes, our method should be able to estimate a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua for large earthquakes in the subduction zone.
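
    The chain from a W-phase Mw to a simple fault model runs: moment from the magnitude, fault dimensions from a scaling relation, then slip from M0 = μLWD with a rigidity that depends on depth (low rigidity for shallow tsunami earthquakes, which is what makes them tsunamigenic for their size). The scaling coefficients and rigidity values below are illustrative assumptions, not the study's calibrated ones.

```python
def fault_model_from_mw(mw, depth_km):
    """Sketch of turning a W-phase Mw into a simple uniform-slip fault model."""
    m0 = 10 ** (1.5 * mw + 9.1)                  # seismic moment, N*m
    length_km = 10 ** (0.5 * mw - 1.9)           # assumed L(Mw) scaling
    width_km = length_km / 2.0                   # assumed 2:1 aspect ratio
    # crude depth-dependent rigidity: low for shallow (tsunami-earthquake) sources
    mu = 1.0e10 if depth_km < 10.0 else 4.0e10   # Pa, assumed values
    slip_m = m0 / (mu * length_km * width_km * 1.0e6)  # from M0 = mu*A*D
    return length_km, width_km, slip_m

length, width, slip = fault_model_from_mw(7.7, 5.0)
```

With the low shallow rigidity, the same Mw yields several times more slip than a deeper event, amplifying the computed seafloor deformation and tsunami.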

  12. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
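
    As a toy version of combining diverse predictions on the common ground of forecast shaking, independent Gaussian estimates of the same site intensity can be fused by precision weighting (a standard Bayesian result for Gaussian likelihoods; the actual algorithm in the article is more general, and the numbers here are hypothetical).

```python
def combine_gaussian_predictions(means, variances):
    """Precision-weighted fusion of independent Gaussian predictions of the
    same quantity; returns the combined mean and variance."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(m * w for m, w in zip(means, precisions)) / total
    return mean, 1.0 / total

# Two EEW algorithms predict site intensity 6.0 +/- 1.0 and 7.0 +/- 0.5:
m, v = combine_gaussian_predictions([6.0, 7.0], [1.0, 0.25])
```

The combined estimate (6.8) leans toward the more certain algorithm, and its variance (0.2) is smaller than either input, which is the point of fusing the predictions.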

  13. Sensitivity of Coulomb stress changes to slip models of source faults: A case study for the 2011 Mw 9.0 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.

    2017-12-01

    Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfer and to retrospectively explain stress triggering among earthquake sequences, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, receiver fault, friction coefficient, and Skempton's coefficient need to be considered exhaustively. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models. Based on those slip models, we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that nearby Coulomb stress changes can differ considerably from one slip model to another, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nankai, Tonankai and Tokai are insignificant, only approximately 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress-triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress-triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.
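
    The quantity being mapped is the standard Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn, where the effective friction μ′ = μ(1 − B) folds in Skempton's coefficient B, two of the uncertain parameters the abstract lists. A minimal sketch with illustrative stress values (sign conventions vary between studies; here unclamping is positive):

```python
def coulomb_stress_change(d_shear, d_normal, friction, skempton):
    """Delta CFS = d_tau + mu' * d_sigma_n, with effective friction
    mu' = friction * (1 - B) absorbing Skempton's coefficient B.
    d_shear is resolved in the slip direction; d_normal > 0 means unclamping."""
    return d_shear + friction * (1.0 - skempton) * d_normal

# 0.1 bar of shear loading plus 0.2 bar of unclamping, mu = 0.4, B = 0.5
dcfs = coulomb_stress_change(0.1, 0.2, 0.4, 0.5)
```

A positive ΔCFS (here 0.14 bar) moves the receiver fault toward failure; repeating this for each candidate slip model is what exposes the model-dependent scatter the abstract describes.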

  14. Retardations in fault creep rates before local moderate earthquakes along the San Andreas fault system, central California

    USGS Publications Warehouse

    Burford, R.O.

    1988-01-01

    Records of shallow aseismic slip (fault creep) obtained along parts of the San Andreas and Calaveras faults in central California demonstrate that significant changes in creep rates often have been associated with local moderate earthquakes. An immediate postearthquake increase followed by gradual, long-term decay back to a previous background rate is generally the most obvious earthquake effect on fault creep. This phenomenon, identified as aseismic afterslip, usually is characterized by above-average creep rates for several months to a few years. In several cases, minor step-like movements, called coseismic slip events, have occurred at or near the times of mainshocks. One extreme case of coseismic slip, recorded at Cienega Winery on the San Andreas fault 17.5 km southeast of San Juan Bautista, consisted of 11 mm of sudden displacement coincident with earthquakes of ML=5.3 and ML=5.2 that occurred 2.5 minutes apart on 9 April 1961. At least one of these shocks originated on the main fault beneath the winery. Creep activity subsequently stopped at the winery for 19 months, then gradually returned to a nearly steady rate slightly below the previous long-term average. The phenomena mentioned above can be explained in terms of simple models consisting of relatively weak material along shallow reaches of the fault responding to changes in load imposed by sudden slip within the underlying seismogenic zone. In addition to coseismic slip and afterslip phenomena, however, pre-earthquake retardations in creep rates also have been observed. Onsets of significant, persistent decreases in creep rates have occurred at several sites 12 months or more before the times of moderate earthquakes. A 44-month retardation before the 1979 ML=5.9 Coyote Lake earthquake on the Calaveras fault was recorded at the Shore Road creepmeter site 10 km northwest of Hollister. 
Creep retardation on the San Andreas fault near San Juan Bautista has been evident in records from one creepmeter site for

  15. Retardations in fault creep rates before local moderate earthquakes along the San Andreas fault system, central California

    NASA Astrophysics Data System (ADS)

    Burford, Robert O.

    1988-06-01

    Records of shallow aseismic slip (fault creep) obtained along parts of the San Andreas and Calaveras faults in central California demonstrate that significant changes in creep rates often have been associated with local moderate earthquakes. An immediate postearthquake increase followed by gradual, long-term decay back to a previous background rate is generally the most obvious earthquake effect on fault creep. This phenomenon, identified as aseismic afterslip, usually is characterized by above-average creep rates for several months to a few years. In several cases, minor step-like movements, called coseismic slip events, have occurred at or near the times of mainshocks. One extreme case of coseismic slip, recorded at Cienega Winery on the San Andreas fault 17.5 km southeast of San Juan Bautista, consisted of 11 mm of sudden displacement coincident with earthquakes of ML=5.3 and ML=5.2 that occurred 2.5 minutes apart on 9 April 1961. At least one of these shocks originated on the main fault beneath the winery. Creep activity subsequently stopped at the winery for 19 months, then gradually returned to a nearly steady rate slightly below the previous long-term average. The phenomena mentioned above can be explained in terms of simple models consisting of relatively weak material along shallow reaches of the fault responding to changes in load imposed by sudden slip within the underlying seismogenic zone. In addition to coseismic slip and afterslip phenomena, however, pre-earthquake retardations in creep rates also have been observed. Onsets of significant, persistent decreases in creep rates have occurred at several sites 12 months or more before the times of moderate earthquakes. A 44-month retardation before the 1979 ML=5.9 Coyote Lake earthquake on the Calaveras fault was recorded at the Shore Road creepmeter site 10 km northwest of Hollister. Creep retardation on the San Andreas fault near San Juan Bautista has been evident in records from one creepmeter

  16. Earthquake-driven fluid flow rates inferred from borehole temperature measurements within the Japan Trench plate boundary fault zone

    NASA Astrophysics Data System (ADS)

    Fulton, P. M.; Brodsky, E. E.

    2016-12-01

    Using borehole sub-seafloor temperature measurements, we have recently identified signatures suggestive of earthquake-driven fluid pulses within the Japan Trench plate boundary fault zone during a major aftershock sequence. Here we use numerical models to show that these signatures are consistent with time-varying rates of fluid flow out of permeable zones within the formation into the borehole annulus. We also identify an apparent time-varying sensitivity in whether suspected fluid pulses occur in response to earthquakes of a given magnitude and distance. The results suggest a damage and healing process and therefore provide a mechanism that allows a disproportionate amount of heat and chemical transport in the short time frame after an earthquake. Our observations come from an observatory installed across the main plate boundary fault as part of IODP's Japan Trench Fast Drilling Project (JFAST) following the March 2011 Mw 9.0 Tohoku-oki earthquake. It operated from July 2012 to April 2013, during which a Mw 7.3 earthquake and numerous aftershocks occurred. High-resolution temperature time series data reveal spatially correlated transients in response to earthquakes, with distinct patterns interpreted to reflect advection by transient pulses of fluid flow from permeable zones into the borehole annulus. Typical transients involve perturbations over 12 m, with increases of 10 mK that build over 0.1 days at shallower depths and decreases at deeper depths. They are consistently centered around 792.5 m below seafloor (mbsf), where a secondary fault and permeable zone have been independently identified within the damage zone above the main plate boundary fault at 820 mbsf. Model simulations suggest transient flow rates of up to 10^-3 m/s from the formation that quickly decrease. Comparison with the characteristics of earthquakes identified in nearby ocean bottom pressure measurements suggests there is not a clear relationship between fluid pulses and static strain.
There

  17. Inferring rate and state friction parameters from a rupture model of the 1995 Hyogo-ken Nanbu (Kobe) Japan earthquake

    USGS Publications Warehouse

    Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.

    2001-01-01

    We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for the critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrade the resolution of the frictional parameters and suggest that the actual dc is less than this value. Stress time series at points characterized by slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ≈ 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain Gc = 1.5×10^6 J m^-2 for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.
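
    The Dieterich-Ruina law referred to here is the standard rate- and state-dependent friction formulation. A minimal sketch with the aging form of the state evolution; the parameter values are illustrative (dc is chosen inside the 0.01-0.05 m range quoted above, and a < b gives the velocity-weakening behavior needed for instability), not the values inferred for Kobe.

```python
import math

def rs_friction(v, theta, v0=1e-6, mu0=0.6, a=0.010, b=0.015, dc=0.02):
    """Dieterich-Ruina rate- and state-dependent friction coefficient:
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def theta_dot_aging(v, theta, dc=0.02):
    """Aging-law state evolution: d(theta)/dt = 1 - v*theta/dc."""
    return 1.0 - v * theta / dc
```

At steady state theta = dc/v, so mu = mu0 + (a − b) ln(v/v0); with a < b the steady-state friction falls as slip speeds up, the weakening the abstract's fits quantify.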

  18. Modelling the elements of country vulnerability to earthquake disasters.

    PubMed

    Asef, M R

    2008-09-01

    Earthquakes have probably been the most deadly form of natural disaster in the past century. The diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, the diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake of a given characteristic to have different impacts on the affected regions. This research focuses on the appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for the identification and relative quantification of the severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.

  19. Future WGCEP Models and the Need for Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2008-12-01

    The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).

  20. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) over the period 1897-2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong Plateau earthquake (Mw 8.7), the 1934 Bihar-Nepal earthquake (Mw 8.3), and the 1950 Upper Assam earthquake (Mw 8.7), signifies the possibility of future great earthquakes in this region. The regional seismicity map for the study region is prepared by plotting the earthquake data for the period 1897-2007 from sources including the USGS and ISC catalogs, the GCMT database, and the India Meteorological Department (IMD). Based on geology, tectonics, and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone developed at the junction of the Indian Plate and the Eurasian Plate; it shows dense clustering of earthquake events, including the 1908 eastern-boundary earthquake. The Himalayan tectonic zone encompasses the subduction zone and the Assam syntaxis; it has suffered great earthquakes such as the 1950 Assam, 1934 Bihar, and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults such as the Dauki fault and exhibits its own distinct tectonic features; the seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy-blocked model of Tsuboi, forecasts of major earthquakes are estimated for each source zone. According to this model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, so the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked and can be used for forecasting major earthquakes.
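
    The bookkeeping behind an energy-blocked estimate can be sketched with the classical Gutenberg-Richter energy-magnitude relation, log10 E = 1.5M + 4.8 (E in joules): accumulate catalogued energy release and subtract it from an assumed-uniform supply. The supply rate and catalogue below are placeholders, not the study's zone-specific values.

```python
def energy_joules(m):
    """Gutenberg-Richter energy-magnitude relation: log10 E = 1.5*M + 4.8 (J)."""
    return 10 ** (1.5 * m + 4.8)

def blocked_energy(mags, years, supply_rate, start_year, now):
    """Tsuboi-style estimate: uniform energy supply over the time span minus
    the cumulative energy released by catalogued earthquakes in that span."""
    supplied = supply_rate * (now - start_year)
    released = sum(energy_joules(m) for m, y in zip(mags, years)
                   if start_year <= y <= now)
    return supplied - released
```

A large positive balance marks a zone where stored energy could feed a future major earthquake; note how steeply energy grows with magnitude, so a single great event dominates a century of moderate ones.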

  1. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation
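    The inverse-DFT construction described in topic (i) can be sketched in a few lines: draw Fourier amplitudes from a spectral shape, draw phase *differences* at random, cumulatively sum them into phase angles, and invert the transform. Everything below — the spectral shape and the uniform phase-difference distribution — is an illustrative assumption standing in for the fitted, amplitude-conditioned models of the thesis:

```python
import numpy as np

def simulate_ground_motion(n=1024, dt=0.01, seed=0):
    """Toy inverse-DFT ground-motion simulator.

    Fourier amplitudes follow an assumed smooth spectral shape; phase
    differences between adjacent frequency components are drawn at random
    and cumulatively summed into phase angles. A real application would
    condition the phase-difference distribution on the amplitudes and on
    source, path and site terms, as in the work described above.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = freqs * np.exp(-freqs / 2.0)            # hypothetical amplitude spectrum
    dphi = rng.uniform(-np.pi, 0.0, size=freqs.size)
    phase = np.cumsum(dphi)                       # phase angles from phase differences
    accel = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    return np.arange(n) * dt, accel

t, a = simulate_ground_motion()
```

    The mean phase difference controls where the envelope of the motion is centered in the time window, which is why the phase-difference distribution, rather than the raw phases, carries the temporal character of the record.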

  2. Failure of self-similarity for large (Mw > 8 1/4) earthquakes.

    USGS Publications Warehouse

    Hartzell, S.H.; Heaton, T.H.

    1988-01-01

    Compares teleseismic P-wave records for earthquakes in the magnitude range 6.0 to 9.5 with synthetics for a self-similar, ω² source model and concludes that the energy radiated by very large earthquakes (Mw > 8 1/4) is not self-similar to that radiated by smaller earthquakes (Mw < 8 1/4). Furthermore, in the period band from 2 sec to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of ω^-1.5. This spectral decay rate is consistent with a previously noted tendency of the ω² model to overestimate Ms for large earthquakes. -Authors
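    The contrast can be illustrated numerically with a generic far-field source spectrum Ω(f) = M0 / (1 + (f/fc)^γ): γ = 2 reproduces the self-similar ω² falloff, while γ = 1.5 mimics the shallower decay inferred here for great subduction earthquakes. This is a sketch of the spectral shapes only, not the authors' waveform comparison:

```python
import numpy as np

def source_spectrum(f, m0=1.0, fc=0.01, gamma=2.0):
    """Generic far-field displacement source spectrum; gamma sets the
    high-frequency falloff (gamma = 2 is the omega-squared model)."""
    return m0 / (1.0 + (f / fc) ** gamma)

def loglog_slope(f, y):
    """Best-fit slope of log10(y) vs log10(f)."""
    return np.polyfit(np.log10(f), np.log10(y), 1)[0]

f = np.logspace(-1, 1, 200)  # band well above the corner frequency fc
slope_w2 = loglog_slope(f, source_spectrum(f, gamma=2.0))    # ~ -2.0
slope_w15 = loglog_slope(f, source_spectrum(f, gamma=1.5))   # ~ -1.5
```

    Because radiated energy grows with the high-frequency level of the spectrum, a falloff shallower than ω^-2 at fixed Ms is one way to see why the ω² model overestimates Ms for the largest events.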

  3. Earthquake Clustering in Noisy Viscoelastic Systems

    NASA Astrophysics Data System (ADS)

    Dicaprio, C. J.; Simons, M.; Williams, C. A.; Kenner, S. J.

    2006-12-01

    Geologic studies show evidence for temporal clustering of earthquakes on certain fault systems. Since post-seismic deformation may result in a variable loading rate on a fault throughout the inter-seismic period, it is reasonable to expect that the rheology of the non-seismogenic lower crust and mantle lithosphere may play a role in controlling earthquake recurrence times. Previously, the role of lithospheric rheology in the seismic cycle had been studied with a one-dimensional spring-dashpot-slider model (Kenner and Simons [2005]). In this study we use the finite element code PyLith to construct a two-dimensional continuum model of a strike-slip fault in an elastic medium overlying one or more linear Maxwell viscoelastic layers, loaded in the far field by a constant velocity boundary condition. Taking advantage of the linear properties of the model, we use the finite element solution to one earthquake as a spatio-temporal Green's function. Multiple Green's function solutions, scaled by the size of each earthquake, are then summed to form an earthquake sequence. When the shear stress on the fault reaches a predefined yield stress it is allowed to slip, relieving all accumulated shear stress. Random variation in the fault yield stress from one earthquake to the next results in a temporally clustered earthquake sequence. The amount of clustering depends on a non-dimensional number, W, called the Wallace number. For models with one viscoelastic layer, W is equal to the standard deviation of the earthquake stress drop divided by the viscosity times the tectonic loading rate. This definition of W is modified from the original one used in Kenner and Simons [2005] by using the standard deviation of the stress drop instead of the mean stress drop. We also use a new, more appropriate metric to measure the amount of temporal clustering of the system. W is the ratio of the viscoelastic relaxation rate of the system to the tectonic loading rate of the system.
For values of
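    The essential ingredients — tectonic loading, partial viscoelastic restoring of each stress drop, and a randomized failure stress — survive in a one-dimensional caricature. The single relaxation time and all parameter values below are illustrative assumptions, not PyLith output:

```python
import math
import numpy as np

def simulate_sequence(n_events=150, load_rate=1.0, relax_time=2.0,
                      recover_frac=0.5, mean_drop=1.0, std_drop=0.3, seed=1):
    """One-dimensional caricature of the clustered viscoelastic sequence.

    Fault stress = tectonic loading minus the contribution of past stress
    drops, a fraction of which "returns" as the viscoelastic layer relaxes
    (a single-relaxation-time stand-in for the finite-element Green's
    function). The failure threshold is re-randomized after every event,
    which is what produces temporal clustering.
    """
    rng = np.random.default_rng(seed)
    times, drops = [], []
    t, dt = 0.0, 0.01
    threshold = mean_drop
    while len(times) < n_events:
        t += dt
        relaxed = sum(
            d * (1.0 - recover_frac + recover_frac * math.exp(-(t - ti) / relax_time))
            for ti, d in zip(times, drops))
        stress = load_rate * t - relaxed
        if stress >= threshold:
            drop = max(0.05, rng.normal(mean_drop, std_drop))
            times.append(t)
            drops.append(drop)
            # next failure requires a fresh random stress rise above the
            # post-event stress level
            threshold = stress - drop + max(0.05, rng.normal(mean_drop, std_drop))
    return np.asarray(times)

times = simulate_sequence()
intervals = np.diff(times)
cov = intervals.std() / intervals.mean()  # values above ~1 indicate clustering
```

    Fast relaxation relative to tectonic loading (a large Wallace number in the abstract's terms) makes the post-event reloading burst more important, shortening some intervals and lengthening others, which is the clustering mechanism in miniature.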

  4. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    rupture models. 1D-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling on the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are also applicable to a wide range of regional to local earthquake studies. Our developments therefore ensure great flexibility in the parametrization of medium models (e.g. 1D to 3D medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.

  5. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    NASA Astrophysics Data System (ADS)

    Thomas, Marion Y.; Bhat, Harsha S.

    2018-05-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models, with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage strongly impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  6. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    NASA Astrophysics Data System (ADS)

    Thomas, M. Y.; Bhat, H. S.

    2017-12-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models, with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage strongly impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  7. Salient Features of the 2015 Gorkha, Nepal Earthquake in Relation to Earthquake Cycle and Dynamic Rupture Models

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.

    2015-12-01

    Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of the earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined in a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration, rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed with deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity.

  8. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undergoes events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region.
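    The headline number is simple moment-balance arithmetic, T ≈ M0 / Ṁ0. A back-of-envelope check using the standard Hanks-Kanamori moment-magnitude relation (the coseismic moment the paper actually uses comes from its GPS/InSAR model, so the naive catalog-Mw figure below lands somewhat under the paper's preferred 3900 ± 400 yr):

```python
def moment_from_mw(mw):
    """Hanks & Kanamori (1979): log10(M0 [N m]) = 1.5 Mw + 9.05."""
    return 10.0 ** (1.5 * mw + 9.05)

moment_rate = 2.7e17                      # N m/yr, from the abstract
m0 = moment_from_mw(7.9)                  # ~7.9e20 N m for the catalog Mw
recurrence_yr = m0 / moment_rate          # ~2.9 kyr
```

    Conversely, the abstract's 3900 yr interval implies a stored moment of about 3900 × 2.7 × 10¹⁷ ≈ 1.1 × 10²¹ N m, so the difference from the naive estimate lies in the moment assigned to the 2008 event, not in the arithmetic.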

  9. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undergoes events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  10. GEM - The Global Earthquake Model

    NASA Astrophysics Data System (ADS)

    Smolka, A.

    2009-04-01

    Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only to experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD), which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments at the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scales. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at-large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a

  11. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D(c) apply to faults in nature. However, scaling of D(c) is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
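    The constitutive law itself is compact. A sketch of the Dieterich (aging-law) form with illustrative laboratory-scale parameter values, not values from this paper:

```python
import numpy as np

A, B, MU0, V0, DC = 0.010, 0.015, 0.60, 1e-6, 1e-5  # illustrative lab-scale values

def friction(v, theta):
    """mu = mu0 + a ln(v/v0) + b ln(v0*theta/Dc)."""
    return MU0 + A * np.log(v / V0) + B * np.log(V0 * theta / DC)

def theta_dot(v, theta):
    """Aging-law state evolution: d(theta)/dt = 1 - v*theta/Dc."""
    return 1.0 - v * theta / DC

# At steady state theta = Dc/v, so mu_ss = mu0 + (a - b) ln(v/v0);
# here a - b < 0, i.e. velocity weakening, the condition for stick-slip:
v = 1e-5
mu_ss = friction(v, DC / v)
```

    The characteristic distance Dc whose field-scale value the abstract flags as an open question is the same DC parameter above: it sets both the slip-weakening distance and, together with a and b, the size of the nucleation zone.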

  12. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666

  13. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  14. An Earthquake Rupture Forecast model for central Italy submitted to CSEP project

    NASA Astrophysics Data System (ADS)

    Pace, B.; Peruzza, L.

    2009-04-01

    We defined a seismogenic source model for central Italy and computed the corresponding forecast scenario, in order to submit the results to the CSEP (Collaboratory for the Study of Earthquake Predictability, www.cseptesting.org) project. The goal of the CSEP project is to develop a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three different layers of sources: the first one collects the individual faults liable to generate major earthquakes (M > 5.5); the second layer is given by the instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ~< 5.0). The third layer utilizes all the instrumental earthquakes and the historical events not correlated to known structures (4.5 < M < 5.5); recurrence on the individual sources is modeled by a Brownian passage time distribution. Besides the original model, updated earthquake rupture forecasts for the individual sources only are released too, in the light of recent analyses (Peruzza et al., 2008; Zoeller et al., 2008). We computed forecasts based on the LaSS-CI model for two time windows: 5 and 10 years. Each model to be tested defines a forecasted earthquake rate in magnitude bins of 0.1 unit steps in the range M5-9, for the periods 1st April 2009 to 1st April 2014, and 1st April 2009 to 1st April 2019. B. Pace, L. Peruzza, G. Lavecchia, and P. Boncio (2006) Layered Seismogenic Source

  15. Bayesian exploration of recent Chilean earthquakes

    NASA Astrophysics Data System (ADS)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah

    2016-04-01

    The South American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It is also a playground for developing novel modeling techniques that combine different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain a better understanding of the distribution of co-seismic slip for those two earthquakes, we derive joint kinematic finite fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing, thereby allowing us to maximize the spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench, with significant slip apparently reaching the trench or at least very close to it. At the inherent resolution of our models, we also present the relationship of the co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.

  16. Seismicity rate changes along the central California coast due to stress changes from the 2003 M 6.5 San Simeon and 2004 M 6.0 Parkfield earthquakes

    USGS Publications Warehouse

    Aron, A.; Hardebeck, J.L.

    2009-01-01

    We investigated the relationship between seismicity rate changes and modeled Coulomb static stress changes from the 2003 M 6.5 San Simeon and the 2004 M 6.0 Parkfield earthquakes in central California. Coulomb stress modeling indicates that the San Simeon mainshock loaded parts of the Rinconada, Hosgri, and San Andreas strike-slip faults, along with the reverse faults of the southern Los Osos domain. All of these loaded faults, except for the San Andreas, experienced a seismicity rate increase at the time of the San Simeon mainshock. The Parkfield earthquake occurred 9 months later on the loaded portion of the San Andreas fault. The Parkfield earthquake unloaded the Hosgri fault and the reverse faults of the southern Los Osos domain, which both experienced seismicity rate decreases at the time of the Parkfield event, although the decreases may be related to the decay of San Simeon-triggered seismicity. Coulomb stress unloading from the Parkfield earthquake appears to have altered the aftershock decay rate of the southern cluster of San Simeon aftershocks, which is deficient compared to the expected number of aftershocks from the Omori decay parameters based on the pre-Parkfield aftershocks. Dynamic stress changes cannot explain the deficiency of aftershocks, providing evidence that static stress changes affect earthquake occurrence. However, a burst of seismicity following the Parkfield earthquake at Ragged Point, where the static stress was decreased, provides evidence for dynamic stress triggering. It therefore appears that both Coulomb static stress changes and dynamic stress changes affect the seismicity rate.
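    The loading and unloading statements above all reduce to the sign of the Coulomb failure stress change resolved on each receiver fault, ΔCFS = Δτ + μ′Δσn (with Δσn positive for unclamping). The effective friction coefficient and the stress values below are illustrative assumptions, not numbers from the study:

```python
def coulomb_stress_change(d_shear_pa, d_normal_pa, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault.

    d_shear_pa  : shear stress change resolved in the fault's slip direction (Pa)
    d_normal_pa : normal stress change, positive = unclamping (Pa)
    Positive result = fault brought closer to failure (loaded);
    negative = moved away from failure (unloaded).
    """
    return d_shear_pa + mu_eff * d_normal_pa

loaded = coulomb_stress_change(0.10e6, 0.05e6)     # +0.12 MPa, promotes failure
unloaded = coulomb_stress_change(-0.08e6, 0.02e6)  # -0.072 MPa, inhibits failure
```

    The choice of μ′ (commonly 0.2-0.8) can flip the sign of ΔCFS when shear and normal stress changes have opposite effects, which is one reason loading maps are computed per receiver-fault orientation.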

  17. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

    This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment released on the eastern segment of the activated fault system during the Izmit earthquake; (2) the apparent rupture velocity decreases on this segment.

  18. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
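    The Gutenberg-Richter side of the comparison is a one-line computation: with b-value b, the fraction of triggered events reaching M 7 from an M 4.8 trigger scales as 10^(-b·ΔM). The b = 1 value below is an assumption; whether large-event nucleation follows this power law or a characteristic distribution is exactly the question the paper probes:

```python
def gr_fraction(m_target, m_ref, b=1.0):
    """Gutenberg-Richter: fraction of events >= m_target among events >= m_ref."""
    return 10.0 ** (-b * (m_target - m_ref))

p_relative = gr_fraction(7.0, 4.8)  # ~0.0063 under a pure power law
# The abstract's two short-term clustering probabilities differ by roughly
# the factor a characteristic distribution adds at the large magnitude:
ratio = 0.013 / 0.00035             # ~37x, characteristic vs Gutenberg-Richter
```

    A characteristic-earthquake distribution concentrates extra probability at the mainshock magnitude, which is why the two models, fed the same foreshock, disagree by more than an order of magnitude.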

  19. Transient triggering of near and distant earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Blanpied, M.L.; Beeler, N.M.

    1997-01-01

    We demonstrate qualitatively that frictional instability theory provides a context for understanding how earthquakes may be triggered by transient loads associated with seismic waves from near and distant earthquakes. We assume that earthquake triggering is a stick-slip process and test two hypotheses about the effect of transients on the timing of instabilities using a simple spring-slider model and a rate- and state-dependent friction constitutive law. A critical triggering threshold is implicit in such a model formulation. Our first hypothesis is that transient loads lead to clock advances; i.e., transients hasten the time of earthquakes that would have happened eventually due to constant background loading alone. Modeling results demonstrate that transient loads do lead to clock advances and that the triggered instabilities may occur after the transient has ceased (i.e., triggering may be delayed). These simple "clock-advance" models predict complex relationships between the triggering delay, the clock advance, and the transient characteristics. The triggering delay and the degree of clock advance both depend nonlinearly on when in the earthquake cycle the transient load is applied. This implies that the stress required to bring about failure does not depend linearly on loading time, even when the fault is loaded at a constant rate. The timing of instability also depends nonlinearly on the transient loading rate, faster rates more rapidly hastening instability. This implies that higher-frequency and/or longer-duration seismic waves should increase the amount of clock advance. These modeling results and simple calculations suggest that near (tens of kilometers) small/moderate earthquakes and remote (thousands of kilometers) earthquakes with magnitudes 2 to 3 units larger may be equally effective at triggering seismicity. Our second hypothesis is that some triggered seismicity represents earthquakes that would not have happened without the transient load (i
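    The clock-advance idea can be seen in an even simpler slider than the rate-state model used here. The threshold caricature below is a deliberate simplification: it yields a clock advance of exactly pulse/loading-rate and cannot reproduce the delayed triggering or the nonlinear dependence on cycle timing that the rate- and state-dependent law produces:

```python
def failure_time(load_rate, yield_stress, t_pulse=None, pulse=0.0, dt=1e-3):
    """Time at which spring stress first reaches the failure threshold.

    A stress step `pulse` applied at `t_pulse` stands in for transient
    seismic-wave loading. With a fixed threshold the advance is always
    pulse/load_rate, independent of when the transient arrives.
    """
    t, stress = 0.0, 0.0
    while stress < yield_stress:
        t += dt
        stress = load_rate * t + (pulse if t_pulse is not None and t >= t_pulse else 0.0)
    return t

t_unperturbed = failure_time(1.0, 1.0)                        # ~1.0
t_perturbed = failure_time(1.0, 1.0, t_pulse=0.5, pulse=0.2)  # ~0.8
clock_advance = t_unperturbed - t_perturbed                   # ~0.2 = pulse/load_rate
```

    The departures from this trivial behavior — delayed instability after the transient ends, and advance that depends on cycle phase and loading rate — are precisely the signatures the abstract attributes to the rate- and state-dependent friction law.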

  20. Modeling earthquake sequences along the Manila subduction zone: Effects of three-dimensional fault geometry

    NASA Astrophysics Data System (ADS)

    Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan

    2018-05-01

    To assess the potential of catastrophic megathrust earthquakes (MW > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N and 19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behavior. The strong along-strike curvature at the transitional segment typically leads to partial ruptures of MW 8.3 and MW 7.8 along the southern and northern segments respectively. The entire fault occasionally ruptures in MW 8.8 events when the cumulative stress in the transitional segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of the seismogenic zone (W). At a constant plate convergence rate, a larger W indicates, on average, a lower interseismic stress loading rate and a longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated from a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state inferred from the interplate seismicity distribution, suggesting the Manila Trench could potentially rupture in an M8+ earthquake.

  1. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.

  2. Break of slope in earthquake size distribution and creep rate along the San Andreas Fault system

    NASA Astrophysics Data System (ADS)

    Shebalin, P.; Narteau, C.; Vorobieva, I.

    2017-12-01

    Crustal faults accommodate slip either by a succession of earthquakes or by continuous slip, and in most instances both these seismic and aseismic processes coexist. Recorded seismicity and geodetic measurements are therefore two complementary data sets that together document ongoing deformation along active tectonic structures. Here we study the influence of stable sliding on earthquake statistics. We show that creep along the San Andreas Fault is responsible for a break of slope in the earthquake size distribution. This slope increases with an increasing creep rate for larger magnitude ranges, whereas it shows no systematic dependence on creep rate for smaller magnitude ranges. This is interpreted as a deficit of large events under conditions of faster creep, where seismic ruptures are less likely to propagate. These results suggest that the earthquake size distribution does not only depend on the level of stress but also on the type of deformation.
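    A break of slope of this kind can be illustrated with a small synthetic experiment: draw magnitudes from a two-slope Gutenberg-Richter distribution and recover the slopes above and below the break with Aki's maximum-likelihood b-value estimator. The corner magnitude and b-values below are invented for illustration and are not those of the San Andreas study; the tail redraw is a crude way to impose the steeper slope.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_broken_gr(n, m_min=2.0, m_break=4.5, b_lo=1.0, b_hi=1.5):
    """Synthetic catalog with slope b_lo below m_break and b_hi above it."""
    beta_lo, beta_hi = b_lo * np.log(10), b_hi * np.log(10)
    m = m_min + rng.exponential(1.0 / beta_lo, size=n)
    above = m > m_break
    # re-draw the tail with the steeper slope above the break
    m[above] = m_break + rng.exponential(1.0 / beta_hi, size=above.sum())
    return m

def aki_b(m, m_min):
    """Maximum-likelihood b-value (Aki, 1965) for magnitudes above m_min."""
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

m = sample_broken_gr(200_000)
b_below = aki_b(m[m <= 4.5], 2.0)   # slope below the break (~1.0)
b_above = aki_b(m, 4.5)             # steeper slope above the break (~1.5)
```

    The recovered b_above exceeds b_below, reproducing the deficit of large events relative to a single-slope Gutenberg-Richter law.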

  3. Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Kubo, H.

    2013-12-01

    Constructing source models of huge subduction earthquakes is an important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large slip areas of heterogeneous slip models and the total SMGA sizes with respect to seismic moment for subduction earthquakes, and found a systematic change in the ratio of the SMGA to the large slip area with seismic moment. They concluded that this tendency would be caused by the difference in the period ranges used in source modeling analyses. In this paper, we try to construct a methodology for building source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype of the source model for huge subduction earthquakes and validate it by strong ground motion modeling.

  4. Numerical Investigation of Earthquake Nucleation on a Laboratory-Scale Heterogeneous Fault with Rate-and-State Friction

    NASA Astrophysics Data System (ADS)

    Higgins, N.; Lapusta, N.

    2014-12-01

    Many large earthquakes on natural faults are preceded by smaller events, often termed foreshocks, that occur close in time and space to the larger event that follows. Understanding the origin of such events is important for understanding earthquake physics. Unique laboratory experiments of earthquake nucleation in a meter-scale slab of granite (McLaskey and Kilgore, 2013; McLaskey et al., 2014) demonstrate that sample-scale nucleation processes are also accompanied by much smaller seismic events. One potential explanation for these foreshocks is that they occur on small asperities - or bumps - on the fault interface, which may also be the locations of smaller critical nucleation size. We explore this possibility through 3D numerical simulations of a heterogeneous 2D fault embedded in a homogeneous elastic half-space, in an attempt to qualitatively reproduce the laboratory observations of foreshocks. In our model, the simulated fault interface is governed by rate-and-state friction with laboratory-relevant frictional properties, fault loading, and fault size. To create favorable locations for foreshocks, the fault surface heterogeneity is represented as patches of increased normal stress, decreased characteristic slip distance L, or both. Our simulation results indicate that one can create a rate-and-state model of the experimental observations. Models with a combination of higher normal stress and lower L at the patches are closest to matching the laboratory observations of foreshocks in moment magnitude, source size, and stress drop. In particular, we find that, when the local compression is increased, foreshocks can occur on patches that are smaller than theoretical critical nucleation size estimates. The additional inclusion of lower L for these patches helps to keep stress drops within the range observed in experiments, and is compatible with the asperity model of foreshock sources, since one would expect more compressed spots to be smoother (and hence have

  5. Mega-earthquakes rupture flat megathrusts.

    PubMed

    Bletery, Quentin; Thomas, Amanda M; Rempel, Alan W; Karlstrom, Leif; Sladen, Anthony; De Barros, Louis

    2016-11-25

    The 2004 Sumatra-Andaman and 2011 Tohoku-Oki earthquakes highlighted gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution: A fast convergence rate and young buoyant lithosphere are not required to produce mega-earthquakes. We calculated the curvature along the major subduction zones of the world, showing that mega-earthquakes preferentially rupture flat (low-curvature) interfaces. A simplified analytic model demonstrates that heterogeneity in shear strength increases with curvature. Shear strength on flat megathrusts is more homogeneous, and hence more likely to be exceeded simultaneously over large areas, than on highly curved faults. Copyright © 2016, American Association for the Advancement of Science.

  6. Flexible kinematic earthquake rupture inversion of tele-seismic waveforms: Application to the 2013 Balochistan, Pakistan earthquake

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.

    2017-12-01

    Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, kinematic rupture models for the same earthquake often differ from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduced the uncertainty of the Green's function into tele-seismic waveform inversion, and showed that a stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the unsolved problems in the inversion arises from the modeling error originating from an uncertain fault-model setting. The Green's function near the nodal plane of the focal mechanism is known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by modeling errors originating from the uncertainty of the fault model. We propose a new method accounting for complexity in the fault geometry by additionally solving for the focal mechanism on each space knot. Since a solution of finite source inversion becomes unstable with increasing flexibility of the model, we try to estimate a stable spatiotemporal distribution of focal mechanisms in the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. On the other hand, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane. 
These results show that the modeling error caused by simplifying the

  7. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    NASA Astrophysics Data System (ADS)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  8. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; the consideration of such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering the region in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
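    The magnitude-frequency model referred to above (Ogata & Katsura, 1993) can be sketched as the Gutenberg-Richter exponential multiplied by a detection-rate function, here a cumulative normal whose mean is the 50%-detection magnitude. The parameter values below are hypothetical; the weekday/weekend contrast only illustrates how a noise-driven shift in the 50%-detection magnitude propagates into Mc.

```python
import math

def detection_rate(m, mu, sigma):
    """Probability that an event of magnitude m is detected (cumulative normal)."""
    return 0.5 * (1.0 + math.erf((m - mu) / (sigma * math.sqrt(2.0))))

def observed_density(m, b=1.0, mu=1.2, sigma=0.3):
    """Observed magnitude density: Gutenberg-Richter times detection rate."""
    beta = b * math.log(10.0)
    return beta * math.exp(-beta * m) * detection_rate(m, mu, sigma)

def completeness_magnitude(mu, sigma, q=0.999):
    """Magnitude above which a fraction q of earthquakes is detected."""
    # inverse normal CDF via bisection (keeps the sketch dependency-free)
    lo, hi = mu, mu + 10.0 * sigma
    while hi - lo > 1e-9:
        mid = 0.5 * (lo + hi)
        if detection_rate(mid, mu, sigma) < q:
            lo = mid
        else:
            hi = mid
    return lo

# Weekday noise raises the 50%-detection magnitude relative to weekends
# (values invented), so Mc varies across the week as well.
mc_weekday = completeness_magnitude(mu=1.4, sigma=0.3)
mc_weekend = completeness_magnitude(mu=1.1, sigma=0.3)
```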

  9. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project a government-sponsored Turkish Catastrophic Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (226 mostly small earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in southeast Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about US$90 for a 100 square meter reinforced concrete building in the most hazardous zone with a 2% deductible. The earthquake-engineering shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation, and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which the expected earthquake performance of a housing unit (and consequently its insurance premium) can be assessed on the basis of its location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP can in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.

  10. Foreshock occurrence before large earthquakes

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. 
The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform
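    The conditional-probability point can be made concrete with a back-of-the-envelope Bayes calculation in the spirit of Agnew and Jones (1991). Every number below is a hypothetical placeholder, not a value from the paper; the takeaway is only that the conditional probability scales linearly with the assumed foreshock rate, so a factor-of-2 underestimate of that rate halves the probability.

```python
# P(mainshock | candidate event) ~ P(foreshock | mainshock) * rate(mainshocks)
#                                  / rate(candidate events)
# Illustrative, invented numbers:
p_fs_given_main = 0.3        # fraction of M>=8 mainshocks preceded by M>=7 foreshock
rate_main = 1.0 / 300.0      # annual rate of M>=8 mainshocks in the region
rate_candidate = 1.0 / 10.0  # annual rate of candidate M>=7 events

p_main_given_candidate = p_fs_given_main * rate_main / rate_candidate
# Doubling p_fs_given_main (worldwide rates vs. the California generic model)
# doubles this conditional probability.
```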

  11. Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization

    NASA Astrophysics Data System (ADS)

    Lee, Kyungbook; Song, Seok Goo

    2017-09-01

    Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.

  12. Earthquake insurance pricing: a risk-based approach.

    PubMed

    Lin, Jeng-Hsiang

    2018-04-01

    Flat earthquake premiums are 'uniformly' set for a variety of buildings in many countries, neglecting the fact that the risk of earthquake damage to buildings depends on a wide range of factors. How these factors influence insurance premiums is worth further study. Proposed herein is a risk-based approach to estimating the earthquake insurance rates of buildings. Examples of application of the approach to buildings located in Taipei City, Taiwan, were examined. The earthquake insurance rates for the buildings investigated were then calculated and tabulated. For insurance rating, the buildings were classified into 15 model building types according to their construction materials and building height. Seismic design levels were also considered in the rating, to reflect the effect of seismic zone and the construction year of each building. This paper may be of interest to insurers, actuaries, and private and public sectors of insurance. © 2018 The Author(s). Disasters © Overseas Development Institute, 2018.
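    A risk-based rate of this kind reduces to an expected-annual-loss (EAL) calculation: sum, over hazard levels, the annual rate of shaking at that level times the mean damage ratio of the building class at that level. The hazard rates, damage ratios, and loading factor below are wholly hypothetical placeholders (the paper's actual values come from Taiwanese hazard and building data).

```python
# Hypothetical hazard curve and vulnerability for one model building type:
annual_rate = {0.1: 0.05, 0.2: 0.01, 0.4: 0.002}       # PGA (g) -> events / yr
mean_damage_ratio = {0.1: 0.02, 0.2: 0.10, 0.4: 0.35}  # PGA (g) -> loss fraction

# EAL as a fraction of the sum insured: the "pure" risk-based insurance rate.
eal = sum(annual_rate[pga] * mean_damage_ratio[pga] for pga in annual_rate)

premium_rate = eal * 1.3             # assumed 30% loading for expenses/profit
premium = premium_rate * 2_000_000   # premium for a 2,000,000 sum insured
```

    Two building types with different hazard curves or damage ratios get different rates, which is precisely what a flat premium cannot capture.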

  13. Earthquakes drive focused denudation along a tectonically active mountain front

    NASA Astrophysics Data System (ADS)

    Li, Gen; West, A. Joshua; Densmore, Alexander L.; Jin, Zhangdong; Zhang, Fei; Wang, Jin; Clark, Marin; Hilton, Robert G.

    2017-08-01

    Earthquakes cause widespread landslides that can increase erosional fluxes observed over years to decades. However, the impact of earthquakes on denudation over the longer timescales relevant to orogenic evolution remains elusive. Here we assess erosion associated with earthquake-triggered landslides in the Longmen Shan range at the eastern margin of the Tibetan Plateau. We use the Mw 7.9 2008 Wenchuan and Mw 6.6 2013 Lushan earthquakes to evaluate how seismicity contributes to the erosional budget from short timescales (annual to decadal, as recorded by sediment fluxes) to long timescales (kyr to Myr, from cosmogenic nuclides and low temperature thermochronology). Over this wide range of timescales, the highest rates of denudation in the Longmen Shan coincide spatially with the region of most intense landsliding during the Wenchuan earthquake. Across sixteen gauged river catchments, sediment flux-derived denudation rates following the Wenchuan earthquake are closely correlated with seismic ground motion and the associated volume of Wenchuan-triggered landslides (r2 > 0.6), and to a lesser extent with the frequency of high intensity runoff events (r2 = 0.36). To assess whether earthquake-induced landsliding can contribute importantly to denudation over longer timescales, we model the total volume of landslides triggered by earthquakes of various magnitudes over multiple earthquake cycles. We combine models that predict the volumes of landslides triggered by earthquakes, calibrated against the Wenchuan and Lushan events, with an earthquake magnitude-frequency distribution. The long-term, landslide-sustained "seismic erosion rate" is similar in magnitude to regional long-term denudation rates (∼0.5-1 mm yr-1). The similar magnitude and spatial coincidence suggest that earthquake-triggered landslides are a primary mechanism of long-term denudation in the frontal Longmen Shan. We propose that the location and intensity of seismogenic faulting can contribute to

  14. Human casualties in earthquakes: Modelling and mitigation

    USGS Publications Warehouse

    Spence, R.J.S.; So, E.K.M.

    2011-01-01

    Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.

  15. Intraplate triggered earthquakes: Observations and interpretation

    USGS Publications Warehouse

    Hough, S.E.; Seeber, L.; Armbruster, J.G.

    2003-01-01

    We present evidence that at least two of the three 1811-1812 New Madrid, central United States, mainshocks and the 1886 Charleston, South Carolina, earthquake triggered earthquakes at regional distances. In addition to previously published evidence for triggered earthquakes in the northern Kentucky/southern Ohio region in 1812, we present evidence suggesting that triggered events might have occurred in the Wabash Valley, to the south of the New Madrid Seismic Zone, and near Charleston, South Carolina. We also discuss evidence that earthquakes might have been triggered in northern Kentucky within seconds of the passage of surface waves from the 23 January 1812 New Madrid mainshock. After the 1886 Charleston earthquake, accounts suggest that triggered events occurred near Moodus, Connecticut, and in southern Indiana. Notwithstanding the uncertainty associated with analysis of historical accounts, there is evidence that at least three out of the four known Mw 7 earthquakes in the central and eastern United States seem to have triggered earthquakes at distances beyond the typically assumed aftershock zone of 1-2 mainshock fault lengths. We explore the possibility that remotely triggered earthquakes might be common in low-strain-rate regions. We suggest that in a low-strain-rate environment, permanent, nonelastic deformation might play a more important role in stress accumulation than it does in interplate crust. Using a simple model incorporating elastic and anelastic strain release, we show that, for realistic parameter values, faults in intraplate crust remain close to their failure stress for a longer part of the earthquake cycle than do faults in high-strain-rate regions. Our results further suggest that remotely triggered earthquakes occur preferentially in regions of recent and/or future seismic activity, which suggests that faults are at a critical stress state in only some areas. 
Remotely triggered earthquakes may thus serve as beacons that identify regions of

  16. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  17. Modeling the behavior of an earthquake base-isolated building.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coveney, V. A.; Jamil, S.; Johnson, D. E.

    1997-11-26

    Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.

  18. A spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an operational earthquake forecast

    USGS Publications Warehouse

    Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximilian J.

    2017-01-01

    We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as

  19. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. 
For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability
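
    The likelihood comparison described above can be sketched as follows, assuming (as in RELM-style tests) that the earthquake count in each bin is an independent Poisson variable with the forecast rate; the four-bin forecasts and observed counts below are purely hypothetical:

```python
import math

def poisson_log_likelihood(rates, counts):
    """Joint log-likelihood of observed counts per bin, assuming
    independent Poisson occurrence at the forecast rates."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

# Two hypothetical forecasts over the same four space-magnitude bins;
# each entry is the expected number of events over the test period.
forecast_a = [0.5, 0.1, 0.2, 0.05]
forecast_b = [0.2, 0.2, 0.3, 0.15]
observed   = [1, 0, 0, 0]           # earthquakes actually recorded per bin

log_ratio = (poisson_log_likelihood(forecast_a, observed)
             - poisson_log_likelihood(forecast_b, observed))
# log_ratio > 0 favors forecast A; the sampling distribution of this
# statistic under each forecast yields the alpha and beta error rates.
```

    In a real test the bins span longitude, latitude, magnitude, and (optionally) focal mechanism, but the likelihood arithmetic is bin-by-bin identical to this sketch.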

  20. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    USGS Publications Warehouse

    Hayes, Gavin

    2017-01-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.
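
    The parenthetical definition of "moment deficit" above amounts to simple bookkeeping: slip deficit = plate rate × coupling × elapsed time, converted to potency by the fault area and to moment by the rigidity. A back-of-the-envelope sketch, with every number hypothetical (none comes from the database itself):

```python
import math

MU = 3.0e10                     # shear modulus, Pa (typical crustal value)
plate_rate = 0.05               # relative plate motion, m/yr
coupling = 0.8                  # geodetic coupling fraction (0..1)
years_since_last = 200.0        # time since the last large earthquake, yr
fault_area = 100.0e3 * 50.0e3   # rupture area, m^2 (100 km x 50 km)

slip_deficit = plate_rate * coupling * years_since_last   # accumulated slip, m
potency_deficit = slip_deficit * fault_area               # slip x area, m^3
moment_deficit = MU * potency_deficit                     # seismic moment, N m
# Hanks-Kanamori moment magnitude if the whole deficit were released.
mw_equiv = (2.0 / 3.0) * (math.log10(moment_deficit) - 9.05)
```

    The paper's central observation, that average potency scales with but falls below maximum potency, implies that a real event releases only part of such a deficit.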

  1. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    NASA Astrophysics Data System (ADS)

    Hayes, Gavin P.

    2017-06-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  2. Quasi-dynamic earthquake fault systems with rheological heterogeneity

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.

    2009-12-01

    Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the seismicity they describe. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning about, and interpretation of, the seismicity and system dynamics they produce. Recently, different fault-system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, and geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault-system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characterize the system dynamics in terms of the physical parameters of the two approaches.

  3. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    USGS Publications Warehouse

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.

  4. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only to represent real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even when we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal

  5. Modeling the Fluid Withdraw and Injection Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Meng, C.

    2016-12-01

    We present an open source numerical code, Defmod, that allows one to model induced seismicity in an efficient and standalone manner. Fluid-withdrawal- and injection-induced earthquakes have been a great concern to industries including oil and gas, wastewater disposal, and CO2 sequestration, and the ability to model induced seismicity numerically has long been desired. To do so, one has to consider at least two processes: a steady process that describes the inducing and aseismic stages before and between the seismic events, and an abrupt process that describes the dynamic fault rupture, accompanied by seismic energy radiation, during the events. The steady process can be adequately modeled by a quasi-static model, while the abrupt process has to be modeled by a dynamic model. In most published modeling work, only one of these processes is considered: geomechanicists and reservoir engineers focus more on quasi-static modeling, whereas geophysicists and seismologists focus more on dynamic modeling. The finite element code Defmod combines the two into a hybrid model that uses a failure criterion and frictional laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loading, e.g., due to reservoir fluid withdrawal and/or injection, and by dynamic loading, e.g., due to preceding earthquakes. We demonstrate a case study of the 2013 Azle earthquake.

  6. Foreshock and aftershocks in simple earthquake models.

    PubMed

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
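
    A minimal sketch of the kind of lattice model described above, an OFC-style cellular automaton with a fixed fraction of stronger "asperity" cells, follows. For brevity it uses nearest-neighbor stress transfer rather than the long-range interactions of the paper, and all parameter values are illustrative:

```python
import numpy as np

def ofc_with_asperities(size=32, steps=2000, alpha=0.2,
                        asperity_frac=0.05, asperity_strength=3.0, seed=0):
    """OFC-style stick-slip lattice with a fraction of stronger 'asperity'
    cells. Returns the size (number of cell failures) of the event
    triggered at each slow-loading step."""
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, 0.9, (size, size))    # initial stress field
    thresh = np.ones((size, size))                  # ordinary failure threshold
    asperity = rng.random((size, size)) < asperity_frac
    thresh[asperity] = asperity_strength            # asperity cells fail later
    event_sizes = []
    for _ in range(steps):
        # Uniform slow loading until the closest-to-failure cell ruptures.
        stress += (thresh - stress).min()
        n_failures = 0
        unstable = np.argwhere(stress >= thresh - 1e-12)
        while len(unstable):
            for i, j in unstable:
                s = stress[i, j]
                stress[i, j] = 0.0                  # cell fails, stress drops
                n_failures += 1
                # Non-conservative transfer: fraction alpha to each neighbor.
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < size and 0 <= nj < size:
                        stress[ni, nj] += alpha * s
            unstable = np.argwhere(stress >= thresh)
        event_sizes.append(n_failures)
    return event_sizes
```

    Because alpha < 0.25 the redistribution is dissipative, so every avalanche terminates; histogramming the returned event sizes gives the model's magnitude-frequency statistics.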

  7. Predictors of psychological resilience amongst medical students following major earthquakes.

    PubMed

    Carter, Frances; Bell, Caroline; Ali, Anthony; McKenzie, Janice; Boden, Joseph M; Wilkinson, Timothy

    2016-05-06

    To identify predictors of self-reported psychological resilience amongst medical students following major earthquakes in Canterbury in 2010 and 2011. Two hundred and fifty-three medical students from the Christchurch campus, University of Otago, were invited to participate in an electronic survey seven months following the most severe earthquake. Students completed the Connor-Davidson Resilience Scale, the Depression, Anxiety and Stress Scale, the Post-traumatic Disorder Checklist, the Work and Adjustment Scale, and the Eysenck Personality Questionnaire. Likert scales and other questions were also used to assess a range of variables including demographic and historical variables (eg, self-rated resilience prior to the earthquakes), plus the impacts of the earthquakes. The response rate was 78%. Univariate analyses identified multiple variables that were significantly associated with higher resilience. Multiple linear regression analyses produced a fitted model that was able to explain 35% of the variance in resilience scores. The best predictors of higher resilience were: retrospectively-rated personality prior to the earthquakes (higher extroversion and lower neuroticism); higher self-rated resilience prior to the earthquakes; not being exposed to the most severe earthquake; and less psychological distress following the earthquakes. Psychological resilience amongst medical students following major earthquakes was able to be predicted to a moderate extent.

  8. Source models of M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake by near-field tsunami modeling

    NASA Astrophysics Data System (ADS)

    Kubota, T.; Hino, R.; Inazu, D.; Saito, T.; Iinuma, T.; Suzuki, S.; Ito, Y.; Ohta, Y.; Suzuki, K.

    2012-12-01

    We estimated source models of the small-amplitude tsunamis associated with M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake, using near-field tsunami records from ocean bottom pressure gauges (OBPs). The largest (Mw=7.3) foreshock of the Tohoku-Oki earthquake occurred on 9 March, two days before the mainshock. The tsunami associated with the foreshock was clearly recorded by seven OBPs, as was the coseismic vertical deformation of the seafloor. Assuming a planar fault along the plate boundary as the source, the OBP records were inverted for slip distribution. As a result, most of the coseismic slip was found to be concentrated in an area of about 40 x 40 km located to the north-west of the epicenter, suggesting downdip rupture propagation. The seismic moment from our tsunami waveform inversion is 1.4 x 10^20 Nm, equivalent to Mw 7.3. On 10 July 2011, an earthquake of Mw 7.0 occurred near the hypocenter of the mainshock. Its relatively deep focus and strike-slip focal mechanism indicate that it was an intraslab earthquake, and it too was associated with a small-amplitude tsunami. Using the OBP records, we estimated a model of the initial sea-surface height distribution. Our tsunami inversion showed that a paired pattern of uplift and subsidence was required to explain the observed tsunami waveforms. The spatial pattern of the seafloor deformation is consistent with the oblique strike-slip solution obtained from the seismic data analyses. The location and strike of the hinge line separating the uplift and subsidence zones correspond well to the linear distribution of aftershocks determined using local OBS data (Obana et al., 2012).

  9. Reconciling postseismic and interseismic surface deformation around strike-slip faults: Earthquake-cycle models with finite ruptures and viscous shear zones

    NASA Astrophysics Data System (ADS)

    Hearn, E. H.

    2013-12-01

    Geodetic surface velocity data show that after an energetic but brief phase of postseismic deformation, surface deformation around most major strike-slip faults tends to be localized and stationary, and can be modeled with a buried elastic dislocation creeping at or near the Holocene slip rate. Earthquake-cycle models incorporating an elastic layer over a Maxwell viscoelastic halfspace cannot explain this, even when the earliest postseismic deformation is ignored or modeled (e.g., as frictional afterslip). Models with heterogeneously distributed low-viscosity materials or power-law rheologies perform better, but to explain all phases of earthquake-cycle deformation, Burgers viscoelastic materials with extreme differences between their Maxwell and Kelvin element viscosities seem to be required. I present a suite of earthquake-cycle models to show that postseismic and interseismic deformation may be reconciled for a range of lithosphere architectures and rheologies if finite rupture length is taken into account. These models incorporate high-viscosity lithosphere optionally cut by a viscous shear zone, and a lower-viscosity mantle asthenosphere (all with a range of viscoelastic rheologies and parameters). Characteristic earthquakes with Mw = 7.0 - 7.9 are investigated, with interseismic intervals adjusted to maintain the same slip rate (10, 20 or 40 mm/yr). I find that a high-viscosity lower crust/uppermost mantle (or a high viscosity per unit width viscous shear zone at these depths) is required for localized and stationary interseismic deformation. For Mw = 7.9 characteristic earthquakes, the shear zone viscosity per unit width in the lower crust and uppermost mantle must exceed about 10^16 Pa s /m. For a layered viscoelastic model the lower crust and uppermost mantle effective viscosity must exceed about 10^20 Pa s. The range of admissible shear zone and lower lithosphere rheologies broadens considerably for faults producing more frequent but smaller
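
    For orientation, viscosities like those quoted above map to Maxwell relaxation times via tau_M = eta/mu. A small sketch, assuming a typical crustal shear modulus of 3×10^10 Pa (an illustrative value, not one from the abstract):

```python
SECONDS_PER_YEAR = 3.156e7
MU = 3.0e10                     # shear modulus, Pa (assumed, typical crust)

def maxwell_relaxation_years(eta_pa_s, mu=MU):
    """Maxwell relaxation time tau_M = eta / mu, expressed in years."""
    return eta_pa_s / mu / SECONDS_PER_YEAR

tau_lower_crust = maxwell_relaxation_years(1.0e20)   # eta = 10^20 Pa s
```

    An effective viscosity of 10^20 Pa s gives a relaxation time of roughly a century, which is why such models relax vigorously for years after an earthquake yet look nearly steady over most of the interseismic period.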

  10. Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.

    2012-12-01

    Constructing source models of huge subduction earthquakes is a quite important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of several strong motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that SMGA size follows an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be estimated from that empirical relation, given the seismic moment of an anticipated earthquake. Concerning the positioning of the SMGAs, information on fault segmentation is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained, one on each of the Nojima, Suma, and Suwayama segments (e.g., Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which shows the applicability of the empirical scaling relationship for SMGAs. Two of the SMGAs lie in the Miyagi-Oki segment, and the other two in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all the SMGAs correspond to the historical source areas of the 1930s. The SMGAs do not overlap the huge-slip area in the shallower part of the source fault estimated from teleseismic data, long-period strong motion data, and/or geodetic data for the 2011 mainshock. This fact shows that the huge-slip area does not contribute to strong ground motion generation (10-0.1 s). The information of the fault segment in the subduction zone, or

  11. Short-term earthquake forecasting based on an epidemic clustering model

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2016-04-01

    The application of rigorous statistical tools, with the aim of verifying any prediction method, requires a univocal definition of the hypothesis, or model, characterizing the anomaly or precursor concerned, so that it can be objectively recognized in any circumstance and by any observer. This is mandatory in order to move beyond the old-fashioned approach consisting only of retrospective, anecdotal studies of past cases. A rigorous definition of an earthquake forecasting hypothesis should lead to the objective identification of particular sub-volumes (usually named alarm volumes) of the total space-time volume within which the probability of occurrence of strong earthquakes is higher than usual. Testing such a hypothesis requires the observation of a sufficient number of past cases on which a statistical analysis is possible. This analysis should determine the rate at which the precursor has been followed (success rate) or not followed (false alarm rate) by the target seismic event, and the rate at which a target event has been preceded (alarm rate) or not preceded (failure rate) by the precursor. The binary table obtained from this kind of analysis leads to the definition of the model parameters that achieve the maximum number of successes and the minimum number of false alarms for a specific class of precursors. The mathematical tools suitable for this purpose include the Probability Gain and the R-Score, as well as popular plots such as the Molchan error diagram and the ROC diagram. Another tool for evaluating the validity of a forecasting method is the likelihood ratio (also named performance factor) of occurrence and non-occurrence of seismic events under different hypotheses. Whatever method is chosen for building up a new hypothesis, usually based on retrospective data, the final assessment of its validity should be carried out by a test on a new and independent set of observations
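
    The binary success/false-alarm bookkeeping described above can be sketched as follows; the helper name and the ten-bin example are hypothetical. The returned tau (fraction of space-time on alarm) and nu (miss rate) are the two axes of the Molchan error diagram:

```python
def alarm_scores(alarms, events):
    """Skill scores for a binary alarm: each entry is one space-time bin.
    alarms[i] = True if the bin was covered by an alarm,
    events[i] = True if a target earthquake occurred in the bin."""
    n = len(alarms)
    hits = sum(1 for a, e in zip(alarms, events) if a and e)
    misses = sum(1 for a, e in zip(alarms, events) if e and not a)
    total_events = hits + misses
    tau = sum(1 for a in alarms if a) / n   # fraction of space-time on alarm
    nu = misses / total_events              # miss rate (Molchan y-axis)
    gain = (hits / total_events) / tau      # probability gain vs. random alarms
    return tau, nu, gain

# Hypothetical ten space-time bins, three of which hold target events.
alarms = [True, True, False, False, True, False, False, False, False, False]
events = [True, False, False, False, True, True, False, False, False, False]
tau, nu, gain = alarm_scores(alarms, events)
```

    A forecast with no skill plots on the diagonal nu = 1 - tau of the Molchan diagram (gain = 1); useful precursors fall below it.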

  12. Redefining Earthquakes and the Earthquake Machine

    ERIC Educational Resources Information Center

    Hubenthal, Michael; Braile, Larry; Taber, John

    2008-01-01

    The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…

  13. The influence of one earthquake on another

    NASA Astrophysics Data System (ADS)

    Kilb, Deborah Lyman

    1999-12-01

    caused by fault slip) or complete (including both static and dynamic). We examine theoretically calculated Coulomb failure stress changes for the static (DeltaCFS) and complete (DeltaCFS(t)) cases, and statistically test for a correlation with spatially varying post-Landers seismicity rate changes. We find that directivity, which was required to model waveforms of the 1992 Landers earthquake, creates an asymmetry in mapped peak DeltaCFS(t). A similar asymmetry is apparent in the seismicity rate change map but not in the DeltaCFS map. Statistical analyses show that peak DeltaCFS(t) correlates as well or better with seismicity rate change as DeltaCFS, and qualitatively peak DeltaCFS(t) is the preferred model. (Abstract shortened by UMI.)

  14. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

    Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
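
    The time-domain fitting the report describes can be illustrated with the purely autoregressive special case of an ARMA model, estimated via the Yule-Walker equations. The synthetic AR(2) series below stands in for a digitized accelerogram; the coefficients and series length are illustrative only:

```python
import numpy as np

def yule_walker_ar(x, order):
    """Estimate AR(order) coefficients of a time series from its sample
    autocovariances via the Yule-Walker equations (AR-only: no MA part)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R phi = r[1:], with R[i, j] = r[|i - j|].
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# Synthetic stationary AR(2) process with known coefficients.
rng = np.random.default_rng(1)
phi_true = (0.6, -0.3)
x = np.zeros(20000)
for t in range(2, len(x)):
    x[t] = phi_true[0] * x[t - 1] + phi_true[1] * x[t - 2] + rng.standard_normal()

phi_hat = yule_walker_ar(x, order=2)   # should recover roughly (0.6, -0.3)
```

    The same residual-whiteness checks the report applies would here amount to verifying that x minus its AR(2) prediction is serially uncorrelated.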

  15. Constant strain accumulation rate between major earthquakes on the North Anatolian Fault.

    PubMed

    Hussain, Ekbal; Wright, Tim J; Walters, Richard J; Bekaert, David P S; Lloyd, Ryan; Hooper, Andrew

    2018-04-11

    Earthquakes are caused by the release of tectonic strain accumulated between events. Recent advances in satellite geodesy mean we can now measure this interseismic strain accumulation with a high degree of accuracy. But it remains unclear how to interpret short-term geodetic observations, measured over decades, when estimating the seismic hazard of faults accumulating strain over centuries. Here, we show that strain accumulation rates calculated from geodetic measurements around a major transform fault are constant for its entire 250-year interseismic period, except in the ~10 years following an earthquake. The shear strain rate history requires a weak fault zone embedded within a strong lower crust with viscosity greater than ~10^20 Pa s. The results support the notion that short-term geodetic observations can directly contribute to long-term seismic hazard assessment and suggest that lower-crustal viscosities derived from postseismic studies are not representative of the lower crust at all spatial and temporal scales.

  16. Late Holocene slip rate and ages of prehistoric earthquakes along the Maacama Fault near Willits, Mendocino County, northern California

    USGS Publications Warehouse

    Prentice, Carol S.; Larsen, Martin C.; Kelsey, Harvey M.; Zachariasen, Judith

    2014-01-01

    The Maacama fault is the northward continuation of the Hayward–Rodgers Creek fault system and creeps at a rate of 5.7±0.1 mm/yr (averaged over the last 20 years) in Willits, California. Our paleoseismic studies at Haehl Creek suggest that the Maacama fault has produced infrequent large earthquakes in addition to creep. Fault terminations observed in several excavations provide evidence that a prehistoric surface‐rupturing earthquake occurred between 1060 and 1180 calibrated years (cal) B.P. at the Haehl Creek site. A folding event, which we attribute to a more recent large earthquake, occurred between 790 and 1060 cal B.P. In the last 560–690 years, a buried channel deposit has been offset 4.6±0.2 m, giving an average slip rate of 6.4–8.6 mm/yr, which is higher than the creep rate over the last 20 years. The difference between this slip rate and the creep rate suggests that coseismic slip up to 1.7 m could have occurred after the formation of the channel deposit and could be due to a paleoearthquake known from paleoseismic studies in the Ukiah Valley, about 25 km to the southeast. Therefore, we infer that at least two, and possibly three, large earthquakes have occurred at the Haehl Creek site since 1180 cal B.P. (770 C.E.), consistent with earlier studies suggesting infrequent, large earthquakes on the Maacama fault. The short‐term geodetic slip rate across the Maacama fault zone is approximately twice the slip rate that we have documented at the Haehl Creek site, which is averaged over the last approximately 600 years. If the geodetic rate represents the long‐term slip accumulation across the fault zone, then we infer that, in the last ∼1200 years, additional earthquakes may have occurred either on the Haehl Creek segment of the Maacama fault or on other active faults within the Maacama fault zone at this latitude.
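
    The quoted slip-rate range follows directly from the offset and age bounds given in the abstract; a quick arithmetic check:

```python
# 4.6 ± 0.2 m of offset accumulated over 560-690 years.
offset_lo, offset_hi = 4.6 - 0.2, 4.6 + 0.2     # metres
age_young, age_old = 560.0, 690.0               # years

rate_min_mm_yr = offset_lo / age_old * 1000.0   # slowest admissible rate
rate_max_mm_yr = offset_hi / age_young * 1000.0 # fastest admissible rate
# Pairing the smallest offset with the oldest age (and vice versa)
# reproduces the 6.4-8.6 mm/yr range quoted above.
```
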

  17. Modelling earthquake ruptures with dynamic off-fault damage

    NASA Astrophysics Data System (ADS)

    Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban

    2017-04-01

    Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as the fault geometry, the constitutive law of the medium, and the friction law operating on the fault. It is necessary to consider all of these complexities of a fault system to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared with a model of a simple planar fault surrounded by purely elastic media. Ideally, all of these complexities should be considered in earthquake modelling. State-of-the-art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry, such as fault branches and kinks, and can describe the coseismic off-fault damage generated during dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric, corresponding to the off-fault damage and the complex fault geometry, respectively. We used the FDEM-based software tool HOSSedu (Hybrid Optimization Software Suite - Educational Version), developed by Los Alamos National Laboratory, for the earthquake rupture modelling. We first conducted a cross-validation of this new methodology against other conventional numerical schemes, such as the finite difference method (FDM), the spectral element method (SEM), and the boundary integral equation method (BIEM), to evaluate its accuracy for various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for

  18. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

    Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior—an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  19. 2D Simulations of Earthquake Cycles at a Subduction Zone Based on a Rate and State Friction Law -Effects of Pore Fluid Pressure Changes-

    NASA Astrophysics Data System (ADS)

    Mitsui, Y.; Hirahara, K.

    2006-12-01

    There have been many studies that simulate large earthquakes occurring quasi-periodically at a subduction zone, based on the laboratory-derived rate-and-state friction law [e.g., Kato and Hirasawa (1997), Hirose and Hirahara (2002)]. All of them assume that pore fluid pressure in the fault zone is constant. However, in the fault zone, pore fluid pressure changes suddenly, due to coseismic pore dilatation [Marone (1990)] and thermal pressurization [Mase and Smith (1987)]. If pore fluid pressure drops and effective normal stress rises, fault slip is decelerated. Conversely, if pore fluid pressure rises and effective normal stress drops, fault slip is accelerated. The effect of pore fluid may cause slow slip events and low-frequency tremor [Kodaira et al. (2004), Shelly et al. (2006)]. How pore dilatation affects slip instability was investigated for a simple spring model [Segall and Rice (1995), Sleep (1995)]: when the slip rate becomes high, pore dilatation occurs, pore pressure drops, and the slip rate is restrained; the inflow of pore fluid then recovers the pore pressure. We execute 2D earthquake cycle simulations at a subduction zone, taking into account such changes of pore fluid pressure following Segall and Rice (1995), in addition to the numerical scheme of Kato and Hirasawa (1997). We adopt excess pore pressure rather than hydrostatic pore pressure for the initial condition, because upflow of dehydrated water seems to exist at a subduction zone. In our model, pore fluid is confined to the fault damage zone and flows along the plate interface. The smaller the flow rate is, the later pore pressure recovers. Since effective normal stress remains larger, the fault slip is decelerated and the stress drop becomes smaller. Therefore a smaller flow rate along the fault zone leads to a shorter earthquake recurrence time. Thus, not only the frictional parameters and the subduction rate but also the fault zone permeability affects the recurrence time of
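    The effective-stress coupling described above can be sketched with the standard rate-and-state shear resistance, τ = (σ − p)·(μ0 + a·ln(v/v0) + b·ln(v0·θ/dc)): a pore-pressure drop from dilatancy raises (σ − p) and resists slip, while a pressure rise does the opposite. The parameter values below are generic illustrations, not values from this study.

```python
import math

def shear_resistance(v, theta, sigma_pa, p_pa,
                     mu0=0.6, a=0.008, b=0.012, v0=1e-6, dc=1e-4):
    """Rate-and-state shear resistance with effective normal stress (sigma - p).
    v: slip rate (m/s), theta: state variable (s); parameters are illustrative."""
    effective_normal = sigma_pa - p_pa
    friction = mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)
    return effective_normal * friction

# At steady state (theta = dc/v) with v = v0, friction reduces to mu0, so the
# difference below is purely the effective-stress effect of pore pressure.
tau_low_p = shear_resistance(1e-6, 1e-4 / 1e-6, 100e6, 30e6)   # p = 30 MPa
tau_high_p = shear_resistance(1e-6, 1e-4 / 1e-6, 100e6, 60e6)  # p = 60 MPa
print(tau_low_p > tau_high_p)  # True: higher pore pressure weakens the fault
```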

  20. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  1. Using regional pore-fluid pressure response following the 3 Sep 2016 Mw 5.8 Pawnee, Oklahoma earthquake to constrain far-field seismicity rate forecasts

    NASA Astrophysics Data System (ADS)

    Kroll, K.; Murray, K. E.; Cochran, E. S.

    2016-12-01

    The 3 Sep 2016 Mw 5.8 Pawnee, Oklahoma earthquake was the largest event in the recorded history of the state. Widespread shaking from the event was felt in seven central U.S. states and caused damage as far away as Oklahoma City (~115 km SSW). The Pawnee earthquake occurred soon after the deployment of a subsurface pore-fluid pressure monitoring network in Aug 2016. Eight pressure transducers were installed downhole in inactive saltwater disposal wells that were completed in the basal sedimentary zone (the Arbuckle Group). The transducers are located in Alfalfa, Grant, and Payne Counties at distances of 48 to 140 km from the Pawnee earthquake. We observed coseismic fluid pressure changes in all monitoring wells, indicating a large-scale poroelastic response in the Arbuckle. Two wells in Payne County lie in a zone of volumetric compression 48-52 km SSE of the rupture and experienced a co-seismic rise in fluid pressures that we conclude was related to poroelastic rebound of the Arbuckle reservoir. We compare measurements of the pore-fluid pressure change to estimated values given by the product of the volumetric strain, a Skempton's coefficient of 0.33, and a bulk modulus of 25 GPa for fractured granitic basement rocks. We explore the possibility that the small increase in pore-fluid pressure may increase the rate of seismicity in regions outside of the mainshock region. We test this hypothesis by supplementing the Oklahoma Geological Survey earthquake catalog with semi-automated detection of smaller-magnitude (M < 2.6) earthquakes on seismic stations located in the vicinity of the wells. Using the events that occur in the week before the mainshock (27 Aug to 3 Sep 2016) as the background seismicity rate and the estimated pore-fluid pressure increase, we use a rate-state model to predict the seismicity rate change in the week following the event. We then compare the model predictions to the observed seismicity in the week following the Pawnee earthquake.
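    The pressure estimate described above is the product of volumetric strain, Skempton's coefficient, and bulk modulus. A minimal sketch, using the B = 0.33 and K = 25 GPa values quoted in the abstract; the strain input and the rate-state A·σ parameter of the Dieterich (1994)-style rate factor are hypothetical illustrations, not values from the study.

```python
import math

def pore_pressure_change(volumetric_strain, skempton_b=0.33, bulk_modulus_pa=25e9):
    """Undrained coseismic pore-pressure change: dp = B * K * eps_v
    (compressional strain positive -> pressure rise)."""
    return skempton_b * bulk_modulus_pa * volumetric_strain

def rate_state_rate_factor(dp_pa, a_sigma_pa=1e5):
    """Instantaneous seismicity-rate multiplier for a pressure step that lowers
    effective normal stress, in the spirit of Dieterich (1994): R/r = exp(dp/(A*sigma)).
    The A*sigma value here is an arbitrary placeholder."""
    return math.exp(dp_pa / a_sigma_pa)

dp = pore_pressure_change(1e-7)      # hypothetical compressional strain of 1e-7
factor = rate_state_rate_factor(dp)  # modest predicted rate increase
print(dp, factor)
```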

  2. Dynamic Simulation of the 2011 M9.0 Tohoku Earthquake with Geometric Complexity on a Rate- and State-dependent Subduction Plane

    NASA Astrophysics Data System (ADS)

    Luo, B.; Duan, B.

    2015-12-01

    The Mw 9.0 Tohoku megathrust earthquake on 11 March 2011 was a great surprise to the scientific community due to its unexpected occurrence on the subduction zone of the Japan Trench, where earthquakes of magnitude ~7 to 8 were expected based on historical records. The slip distribution and kinematic slip history inverted from seismic data, GPS and tsunami recordings reveal two major aspects of this big event: a strong asperity near the hypocenter and large slip near the trench. To investigate the physical conditions behind these two aspects, we perform dynamic rupture simulations on a shallow-dipping rate- and state-dependent subduction plane with topographic relief. Although the existence of a subducted seamount just up-dip of the hypocenter is still an open question, high Vp anomalies [Zhao et al., 2011] and low Vp/Vs anomalies [Yamamoto et al., 2014] there strongly suggest some kind of topographic relief. We explicitly incorporate a subducted seamount on the subduction surface into our models. Our preliminary results show that the subducted seamount plays a significant role in dynamic rupture propagation due to the alteration of the stress state around it. We find that a subducted seamount can act as a strong barrier to many earthquakes, but its ultimate failure after some earthquake cycles results in giant earthquakes. Its failure gives rise to a large stress drop, producing a strong asperity in the slip distribution as revealed by kinematic inversions. Our preliminary results also suggest that the rate- and state-dependent friction law plays an important role in rupture propagation on geometrically complex faults. Although rate-strengthening behavior near the trench impedes rupture propagation, an energetic rupture can break such a barrier and manage to reach the trench, resulting in significant uplift of the seafloor and hence a devastating tsunami.

  3. Earthquake Hazard and Risk in Sub-Saharan Africa: current status of the Global Earthquake model (GEM) initiative in the region

    NASA Astrophysics Data System (ADS)

    Ayele, Atalay; Midzi, Vunganai; Ateba, Bekoa; Mulabisana, Thifhelimbilu; Marimira, Kwangwari; Hlatywayo, Dumisani J.; Akpan, Ofonime; Amponsah, Paulina; Georges, Tuluka M.; Durrheim, Ray

    2013-04-01

    Large magnitude earthquakes have been observed in Sub-Saharan Africa in the recent past, such as the Machaze event of 2006 (Mw 7.0) in Mozambique and the 2009 Karonga earthquake (Mw 6.2) in Malawi. The December 13, 1910 earthquake (Ms = 7.3) in the Rukwa rift (Tanzania) is the largest of all instrumentally recorded events known to have occurred in East Africa. The overall earthquake hazard in the region is low compared to other earthquake-prone areas of the globe, but the risk level is high enough to warrant the attention of African governments and the donor community. The latest earthquake hazard map for sub-Saharan Africa was produced in 1999, and an update is long overdue as construction activity is booming all over sub-Saharan Africa. To this effect, regional seismologists are working together under the GEM (Global Earthquake Model) framework to improve incomplete, inhomogeneous and uncertain catalogues. The working group is also contributing to the UNESCO-IGCP (SIDA) 601 project and assessing all possible sources of data for the catalogue, as well as for the seismotectonic characteristics that will help to develop a reasonable hazard model for the region. Progress to date indicates that the region is more seismically active than previously thought, which demands a coordinated effort by regional experts to systematically compile all available information so as to mitigate earthquake risk in sub-Saharan Africa.

  4. Security Implications of Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Jha, B.; Rao, A.

    2016-12-01

    The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing and magnitude of an earthquake by activating a fault via injection and production of fluids into the subsurface. Here, we investigate the relationship of the magnitude and trigger time of an induced earthquake to the well-to-fault distance. The relationship between magnitude and distance is important for determining the farthest striking distance from which one could intentionally activate a fault to cause a certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and the evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. Evolution of the maximum Coulomb stress on the fault provides insights into the effect of distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to assess the feasibility of weaponizing earthquakes and to develop effective defense mechanisms against such attacks.
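    The Coulomb destabilization criterion invoked above can be sketched in a few lines. This is the standard textbook form, not the study's own framework, and the friction coefficient and stress values are illustrative.

```python
def coulomb_stress_change(d_tau_pa, d_sigma_n_pa, d_pore_pressure_pa, friction=0.6):
    """Change in Coulomb failure stress on a fault:
        dCFS = d_tau + mu * (d_sigma_n + dp),
    with shear stress d_tau positive in the slip direction and normal stress
    d_sigma_n positive in tension, so a pore-pressure rise dp unclamps the
    fault. Positive dCFS moves the fault toward failure."""
    return d_tau_pa + friction * (d_sigma_n_pa + d_pore_pressure_pa)

# Pure pressurization with no elastic stress change: dCFS = mu * dp.
print(coulomb_stress_change(0.0, 0.0, 1e5))  # 0.6e5 Pa closer to failure
```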

  5. On near-source earthquake triggering

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2009-01-01

    When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≲ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar-magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to the depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of pre-surface-wave arrivals shows that M ≥ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.

  6. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI unit (16% in peak velocity). Discrepancies with observations arise from errors in the source models and geologic structure. The consistency of the synthetic waveforms across the wave-propagation codes for a given source model suggests that the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
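    The quoted equivalences between MMI bias and peak-velocity percentage follow from a log-linear intensity relation, MMI = c0 + c1·log10(PGV). A sketch assuming the Wald et al. (1999) PGV slope c1 ≈ 3.78, which is an assumption here, not a value stated in the abstract:

```python
def mmi_bias_to_pgv_percent(d_mmi, c1=3.78):
    """Peak-ground-velocity change (%) equivalent to an intensity bias d_mmi,
    under MMI = c0 + c1*log10(PGV); c1 = 3.78 follows Wald et al. (1999)."""
    return (10 ** (d_mmi / c1) - 1.0) * 100.0

print(mmi_bias_to_pgv_percent(0.5))   # roughly 36%, near the quoted 25-35%
print(mmi_bias_to_pgv_percent(0.25))  # roughly 16%, matching the quoted value
```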

  7. Seismicity rate increases associated with slow slip episodes prior to the 2012 Mw 7.4 Ometepec earthquake

    NASA Astrophysics Data System (ADS)

    Colella, Harmony V.; Sit, Stefany M.; Brudzinski, Michael R.; Graham, Shannon E.; DeMets, Charles; Holtkamp, Stephen G.; Skoumal, Robert J.; Ghouse, Noorulann; Cabral-Cano, Enrique; Kostoglodov, Vladimir; Arciniega-Ceballos, Alejandra

    2017-04-01

    The March 20, 2012 Mw 7.4 Ometepec earthquake in the Oaxaca region of Southern Mexico provides a unique opportunity to examine whether subtle changes in seismicity, tectonic tremor, or slow slip can be observed prior to a large earthquake that may illuminate changes in stress or background slip rate. Continuous Global Positioning System (cGPS) data reveal a 5-month-long slow slip event (SSE) between ∼20 and 35 km depth that migrated toward and reached the vicinity of the mainshock a few weeks prior to the earthquake. Seismicity in Oaxaca is examined using single-station tectonic tremor detection and multi-station waveform template matching of earthquake families. An increase in seismic activity, detected with template matching using aftershock waveforms, is only observed in the weeks prior to the mainshock in the region between the SSE and mainshock. In contrast, an SSE ∼15 months earlier occurred at ∼25-40 km depth and was primarily associated with an increase in tectonic tremor. Together, these observations indicate that in the Oaxaca region of Mexico shallower slow slip promotes elevated seismicity rates, while deeper slow slip promotes tectonic tremor. Results from this study add to a growing number of published accounts indicating that slow slip may be a common pre-earthquake signature.

  8. Active accommodation of plate convergence in Southern Iran: Earthquake locations, triggered aseismic slip, and regional strain rates

    NASA Astrophysics Data System (ADS)

    Barnhart, William D.; Lohman, Rowena B.; Mellors, Robert J.

    2013-10-01

    We present a catalog of interferometric synthetic aperture radar (InSAR) constraints on deformation that occurred during earthquake sequences in southern Iran between 1992 and 2011, and explore the implications for the accommodation of large-scale continental convergence between Saudi Arabia and Eurasia within the Zagros Mountains. The Zagros Mountains, a salt-laden fold-and-thrust belt involving ~10 km of sedimentary rocks overlying Precambrian basement rocks, have formed as a result of ongoing continental collision since 10-20 Ma that is currently occurring at a rate of ~3 cm/yr. We first demonstrate that there is a biased misfit in earthquake locations in global catalogs that likely results from neglect of 3-D velocity structure. Previous work involving two M ~ 6 earthquakes with well-recorded aftershocks has shown that the deformation observed with InSAR may represent triggered slip on faults much shallower than the primary earthquake, which likely occurred within the basement rocks (>10 km depth). We explore the hypothesis that most of the deformation observed with InSAR spanning earthquake sequences is also due to shallow, triggered slip above a deeper earthquake, effectively doubling the moment release for each event. We quantify the effects that this extra moment release would have on the discrepancy between seismically and geodetically constrained moment rates in the region, finding that even with the extra triggered fault slip, significant aseismic deformation during the interseismic period is necessary to fully explain the convergence between Eurasia and Saudi Arabia.
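    The effect of "doubling the moment release" on event size can be quantified with the standard moment magnitude definition Mw = (2/3)·(log10 M0 − 9.1), M0 in N·m (Hanks and Kanamori, 1979); this is a generic illustration, not a calculation from the study.

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

m0 = 10 ** (1.5 * 6.0 + 9.1)  # seismic moment of an Mw 6.0 event
# Doubling the moment adds (2/3)*log10(2) ~ 0.2 magnitude units.
print(moment_magnitude(2 * m0) - moment_magnitude(m0))
```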

  9. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress has remained elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rates increasing exponentially with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra-Andaman earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is consistent with the well-known relationship between stress and the b-value, and suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
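    The link between b-value and the fraction of large earthquakes follows directly from the Gutenberg-Richter relation, log10 N(≥M) = a − b·M: among events above a completeness magnitude, the fraction exceeding a large magnitude is 10^(−b·(Mbig − Mmin)), so a lower b raises that fraction. The magnitudes and b-values below are illustrative, not from the study.

```python
def fraction_at_least(m_big, m_min=2.0, b=1.0):
    """Fraction of Gutenberg-Richter events with M >= m_min that also reach
    M >= m_big: 10**(-b * (m_big - m_min))."""
    return 10 ** (-b * (m_big - m_min))

print(fraction_at_least(6.0, b=1.0))  # ~1e-4 of M>=2 events reach M>=6
print(fraction_at_least(6.0, b=0.8))  # ~6.3e-4: lower b, more large events
```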

  10. Earthquake Hazard and Risk in Alaska

    NASA Astrophysics Data System (ADS)

    Black Porto, N.; Nyst, M.

    2014-12-01

    Alaska is one of the most seismically active and tectonically diverse regions in the United States. To examine risk, we have updated the seismic hazard model in Alaska. The current RMS Alaska hazard model is based on the 2007 probabilistic seismic hazard maps for Alaska (Wesson et al., 2007; Boyd et al., 2007). The 2015 RMS model updates several key source parameters, including extending the earthquake catalog, implementing a new set of crustal faults, and updating the subduction zone geometry and recurrence rate. First, we extend the earthquake catalog to 2013, decluster the catalog, and compute new background rates. We then create a crustal fault model based on the Alaska 2012 fault and fold database. This new model increases the number of crustal faults from ten in 2007 to 91 in the 2015 model, including the addition of the western Denali fault, Cook Inlet folds near Anchorage, and thrust faults near Fairbanks. Previously the subduction zone was modeled at a uniform depth; in this update, we model the intraslab as a series of deep stepping events. We also use the best available data, such as Slab 1.0, to update the geometry of the subduction zone. The city of Anchorage represents 80% of the risk exposure in Alaska. In the 2007 model, the hazard in Alaska was dominated by the frequent rate of magnitude 7 to 8 events (Gutenberg-Richter distribution), while large magnitude 8+ events had a low recurrence rate (characteristic) and therefore did not contribute as much to the overall risk. We will review these recurrence rates and present the results and impact for Anchorage. We will compare our hazard update to the 2007 USGS hazard map, and discuss the changes and their drivers. Finally, we will examine the impact model changes have on Alaska earthquake risk. Risk metrics considered include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the

  11. How fault geometry controls earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw 9.3 Sumatra-Andaman earthquake in 2004 and the Mw 9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  12. Earthquake source properties from pseudotachylite

    USGS Publications Warehouse

    Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan

    2016-01-01

    The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength nor attributed to a particular physical mechanism. In contrast, paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and the much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006], as observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of the roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate-sized earthquakes of the Gole Larghe fault zone in the Italian Alps, where the dynamic shear strength is well constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa.
More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large, unless fracture energy is routinely greater than existing models allow
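    The corner-frequency-to-stress-drop conversion assumed in the analysis can be sketched as follows, under the Madariaga (1976) circular source model: source radius r = k·β/fc (k ≈ 0.21 for S-wave corner frequencies) and Δσ = 7·M0/(16·r³). The moment, corner frequency, and shear-wave speed below are illustrative, not measurements from the Gole Larghe fault zone.

```python
def madariaga_stress_drop(m0_newton_meters, corner_frequency_hz,
                          shear_velocity_ms=3500.0, k=0.21):
    """Stress drop (Pa) from seismic moment and S-wave corner frequency,
    assuming the Madariaga (1976) circular crack model:
        r = k * beta / fc,   dsigma = 7 * M0 / (16 * r**3)."""
    radius = k * shear_velocity_ms / corner_frequency_hz
    return 7.0 * m0_newton_meters / (16.0 * radius ** 3)

# A hypothetical Mw ~4 event (M0 ~ 1.3e15 N*m) with a 1.5 Hz corner frequency
# yields a stress drop of a few MPa, in the typical seismological range.
print(madariaga_stress_drop(1.3e15, 1.5) / 1e6, "MPa")
```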

  13. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, the validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, large-magnitude earthquakes will sometimes exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  14. ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1986-01-01

    A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
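    The "filtered white noise multiplied by an envelope function" process mentioned above can be sketched in a few lines. The Saragoni-Hart-style envelope parameters and the one-pole low-pass filter are illustrative choices, not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, duration = 0.01, 20.0
t = np.arange(0.0, duration, dt)

# Envelope that rises, peaks (here near t = 4 s), then decays; the shape
# parameters are hypothetical.
envelope = (t / 2.0) ** 2 * np.exp(-t / 2.0)

# White noise passed through a simple one-pole recursive low-pass filter,
# standing in for the more elaborate source/medium filter of the model.
white = rng.standard_normal(t.size)
alpha = 0.1  # smoothing factor of the recursive filter
filtered = np.empty_like(white)
filtered[0] = white[0]
for i in range(1, white.size):
    filtered[i] = (1 - alpha) * filtered[i - 1] + alpha * white[i]

# Nonstationary ground-acceleration sample: envelope-modulated filtered noise.
acceleration = envelope * filtered
print(acceleration.shape)
```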

  15. Toward a physics-based rate and state friction law for earthquake nucleation processes in fault zones with granular gouge

    NASA Astrophysics Data System (ADS)

    Ferdowsi, B.; Rubin, A. M.

    2017-12-01

    Numerical simulations of earthquake nucleation rely on constitutive rate and state evolution laws to model earthquake initiation and propagation processes. The response of different state evolution laws to large velocity increases is an important feature of these constitutive relations that can significantly change the style of earthquake nucleation in numerical models. However, there is currently no rigorous understanding of the physical origins of the response of bare rock or gouge-filled fault zones to large velocity increases. This in turn hinders our ability to design physics-based friction laws that can appropriately describe those responses. Here we argue that most fault zones form a granular gouge after an initial shearing phase and that it is the behavior of the gouge layer that controls the fault friction. We perform numerical experiments of a confined sheared granular gouge under a range of confining stresses and driving velocities relevant to fault zones and apply velocity steps of 1-3 orders of magnitude to explore the dynamical behavior of the system from grain to macro scales. We compare our numerical observations with experimental data from biaxial double-direct-shear fault gouge experiments under equivalent loading and driving conditions. Our intention is first to investigate the degree to which these numerical experiments, with Hertzian normal and Coulomb friction laws at the grain-grain contact scale and without any time-dependent plasticity, can reproduce experimental fault gouge behavior. We next compare the behavior observed in numerical experiments with predictions of the Dieterich (Aging) and Ruina (Slip) friction laws. Finally, the numerical observations at the grain and meso scales will be used to design a rate and state evolution law that takes into account recent advances in the rheology of granular systems, including local and non-local effects, for a wide range of shear rates and the slow and fast deformation regimes of the fault gouge.
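The two state evolution laws compared in the study have simple standard forms; a sketch, with generic lab-like values for the critical slip distance and velocity (assumptions, not values from the paper):

```python
import math

def dtheta_dt_aging(theta, v, d_c):
    """Dieterich (aging) law: state grows even at v = 0 (time-dependent healing)."""
    return 1.0 - v * theta / d_c

def dtheta_dt_slip(theta, v, d_c):
    """Ruina (slip) law: state changes only when the fault slips."""
    x = v * theta / d_c
    return -x * math.log(x)

# both laws share the steady state theta_ss = d_c / v, but they respond very
# differently to the large velocity steps applied in the simulations
d_c, v = 1e-5, 1e-3            # m, m/s: generic lab-like values (assumed)
theta_ss = d_c / v
assert abs(dtheta_dt_aging(theta_ss, v, d_c)) < 1e-9
assert abs(dtheta_dt_slip(theta_ss, v, d_c)) < 1e-9
```

After a step to a higher velocity, both derivatives become strongly negative, but the slip law's response scales with x·ln(x) rather than linearly in x, which is the behavioral difference the study probes.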

  16. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Rizza, M.; Ritz, J.-F.; Braucher, R.; Vassallo, R.; Prentice, C.; Mahan, S.; McGill, S.; Chauvet, A.; Marco, S.; Todbileg, M.; Demberel, S.; Bourles, D.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. From west to east these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB) and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans - particularly well preserved in the arid environment of the Gobi region - allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ~1 mm yr-1 along the WIB and EIB segments and ~0.5 mm yr-1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change of kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with a characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78-7.95. Combining our slip rate estimates and the slip distribution per event we also determined a mean recurrence interval of ~2500-5200 yr for past
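The recurrence estimate combines slip per event with slip rate, and the moment magnitude follows from the standard seismic-moment definition; a hedged sketch in which the rigidity, rupture area, and mean slip are generic illustrative assumptions, not the paper's values:

```python
import math

MU = 3.0e10  # Pa, generic crustal rigidity (assumed)

def moment_magnitude(area_m2, mean_slip_m, mu=MU):
    """M0 = mu * A * D; Hanks & Kanamori: Mw = (2/3) * (log10 M0 - 9.05)."""
    m0 = mu * area_m2 * mean_slip_m          # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

def recurrence_interval_yr(slip_per_event_m, slip_rate_mm_yr):
    """Characteristic-slip recurrence: years to accumulate one event's slip."""
    return slip_per_event_m / (slip_rate_mm_yr * 1e-3)

# illustrative 1957-style rupture: ~260 km long, ~15 km deep, ~3.5 m mean slip,
# which lands near the re-estimated Mw 7.78-7.95 range
mw = moment_magnitude(260e3 * 15e3, 3.5)
# offsets of ~2.5-5.2 m at ~1 mm/yr span the ~2500-5200 yr range quoted
t_max = recurrence_interval_yr(5.2, 1.0)
```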

  17. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    NASA Astrophysics Data System (ADS)

    McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.

    2017-09-01

    The Gorkha earthquake on April 25th, 2015 was a long-anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.

  18. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    USGS Publications Warehouse

    McNamara, Daniel E.; Yeck, William; Barnhart, William D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, Amod; Hough, S.E.; Benz, Harley M.; Earle, Paul

    2017-01-01

    The Gorkha earthquake on April 25th, 2015 was a long-anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a ~150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10–15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.

  19. Interactions and triggering in a 3D rate and state asperity model

    NASA Astrophysics Data System (ADS)

    Dublanchet, P.; Bernard, P.

    2012-12-01

    Precise relocation of micro-seismicity and careful analysis of seismic source parameters have progressively imposed the concept of seismic asperities embedded in a creeping fault segment as one of the most important aspects that should appear in a realistic representation of micro-seismic sources. Another important issue concerning micro-seismic activity is the existence of robust empirical laws describing the temporal and magnitude distribution of earthquakes, such as the Omori law, the distribution of inter-event times and the Gutenberg-Richter law. In this framework, this study aims at understanding the statistical properties of earthquakes by generating synthetic catalogs with a 3D, quasi-dynamic, continuous rate and state asperity model that takes into account a realistic geometry of asperities. Our approach contrasts with ETAS models (Kagan and Knopoff, 1981) usually implemented to produce earthquake catalogs, in the sense that the non-linearity observed in rock friction experiments (Dieterich, 1979) is fully taken into account by the use of the rate and state friction law. Furthermore, our model differs from discrete models of faults (Ziv and Cochard, 2006) because its continuity allows us to define realistic geometries and distributions of asperities by assembling sub-critical computational cells that always fail in a single event. Moreover, this model allows us to address the question of the influence of barriers and the distribution of asperities on event statistics. After recalling the main observations of asperities in the specific case of the Parkfield segment of the San Andreas Fault, we analyse earthquake statistical properties computed for this area. Then, we present synthetic statistics obtained by our model that allow us to discuss the role of barriers on clustering and triggering phenomena among a population of sources.
It appears that an effective size of barrier, that depends on its frictional strength, controls the presence or the absence, in the

  20. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.

    2008-12-01

    Empirically, explosions have been discriminated from natural earthquakes using regional amplitude-ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction; Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
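A P/S amplitude-ratio discriminant of the kind described can be sketched as a toy computation (window selection, band choice, and the MDAC path/site corrections are all omitted here; this is only the core ratio):

```python
import numpy as np

def log_ps_ratio(p_window, s_window):
    """log10 of the P/S RMS-amplitude ratio in one band; explosions tend
    toward higher values (S-wave deficient) than earthquakes. Windows are
    assumed to be already band-pass filtered (e.g., 6-8 Hz)."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    return float(np.log10(rms(p_window) / rms(s_window)))

# synthetic check: an S wave with twice the P amplitude gives a negative value,
# the earthquake-like side of the discriminant
p = np.sin(np.linspace(0.0, 10.0 * np.pi, 500))
s = 2.0 * p
assert log_ps_ratio(p, s) < 0.0
```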

  1. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modelled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
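One member of the generalized Weibull family with a bathtub-shaped hazard is the additive Weibull model, shown here as an illustrative example (not necessarily the distribution the authors fit; all parameter values are arbitrary):

```python
def additive_weibull_hazard(t, a=1.0, b=0.5, c=0.02, d=3.0):
    """Additive Weibull hazard h(t) = a*b*t^(b-1) + c*d*t^(d-1): the b < 1
    term decays with time (aftershock-like), the d > 1 term grows
    (foreshock-like), producing the bathtub shape used for subordinate
    events between two leader events."""
    return a * b * t ** (b - 1) + c * d * t ** (d - 1)

# high early, low in the middle, high again late in the inter-leader interval
early, middle, late = (additive_weibull_hazard(t) for t in (0.05, 2.0, 6.0))
assert early > middle < late
```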

  2. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (the ratio of radiated energy to seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
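A double-corner spectral shape of the kind described (flat below f1, f^-1 between the corners, f^-2 above f2) can be written compactly; the smooth interpolation below is one common choice and an assumption, not necessarily the paper's exact parameterization:

```python
import numpy as np

def double_corner_spectrum(f, m0, f1, f2):
    """Moment-rate spectrum: flat below f1, ~f^-1 between f1 and f2,
    ~f^-2 above f2 (requires f1 < f2)."""
    return m0 / (np.sqrt(1.0 + (f / f1) ** 2) * np.sqrt(1.0 + (f / f2) ** 2))

f = np.logspace(-4, 1, 200)                      # Hz
s = double_corner_spectrum(f, m0=1e20, f1=0.02, f2=0.2)
```

At frequencies well below f1 the spectrum approaches the seismic moment m0; between the corners the denominator is dominated by the (f/f1) term, giving the intermediate f^-1 decay the study reports.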

  3. Near-fault peak ground velocity from earthquake and laboratory data

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.

    2007-01-01

    We test the hypothesis that peak ground velocity (PGV) has an upper bound independent of earthquake magnitude and that this bound is controlled primarily by the strength of the seismogenic crust. The highest PGVs, ranging up to several meters per second, have been measured at sites within a few kilometers of the causative faults. Because the database for near-fault PGV is small, we use earthquake slip models, laboratory experiments, and evidence from a mining-induced earthquake to investigate the factors influencing near-fault PGV and the nature of its scaling. For each earthquake slip model we have calculated the peak slip rates for all subfaults and then chosen the maximum of these rates as an estimate of twice the largest near-fault PGV. Nine slip models for eight earthquakes, with magnitudes ranging from 6.5 to 7.6, yielded maximum peak slip rates ranging from 2.3 to 12 m/sec with a median of 5.9 m/sec. By making several adjustments, PGVs for small earthquakes can be simulated from peak slip rates measured during laboratory stick-slip experiments. First, we adjust the PGV for differences in the state of stress (i.e., the difference between the laboratory loading stresses and those appropriate for faults at seismogenic depths). To do this, we multiply both the slip and the peak slip rate by the ratio of the effective normal stresses acting on fault planes measured at 6.8 km depth at the KTB site, Germany (deepest available in situ stress measurements), to those acting on the laboratory faults. We also adjust the seismic moment by replacing the laboratory fault with a buried circular shear crack whose radius is chosen to match the experimental unloading stiffness. An additional, less important adjustment is needed for experiments run in triaxial loading conditions. With these adjustments, peak slip rates for 10 stick-slip events, with scaled moment magnitudes from -2.9 to 1.0, range from 3.3 to 10.3 m/sec, with a median of 5.4 m/sec. Both the earthquake and
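The paper's estimate that the maximum peak slip rate is about twice the largest near-fault PGV follows from the relative slip being shared between the two sides of the fault; as a quick sketch using the slip rates quoted above:

```python
# peak slip rates (m/s) spanning the nine slip models: min, median, max
peak_slip_rates = [2.3, 5.9, 12.0]

# slip is partitioned between the two sides of the fault, so near-fault PGV
# is roughly half of the peak relative slip rate on the fault plane
pgv_estimates = [v / 2.0 for v in peak_slip_rates]   # m/s
```

This puts the implied near-fault PGV bound at a few meters per second, consistent with the highest measured values the abstract cites.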

  4. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require
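The "normal behavior" baseline invoked here, power-law size distributions with estimable rates, is usually expressed through the Gutenberg-Richter relation; a minimal sketch with illustrative a and b values (not fitted to any real catalog):

```python
import math

def gr_annual_rate(m, a=4.0, b=1.0):
    """Gutenberg-Richter: log10 N(>=m) = a - b*m, N an annual rate.
    a and b here are illustrative only."""
    return 10.0 ** (a - b * m)

# the power-law size distribution: tenfold fewer events per unit magnitude,
# which is why tiny events are frequent almost everywhere
assert math.isclose(gr_annual_rate(5.0) / gr_annual_rate(6.0), 10.0)
```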

  5. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467

  6. The August 2011 Virginia and Colorado Earthquake Sequences: Does Stress Drop Depend on Strain Rate?

    NASA Astrophysics Data System (ADS)

    Abercrombie, R. E.; Viegas, G.

    2011-12-01

    Our preliminary analysis of the August 2011 Virginia earthquake sequence finds the earthquakes to have high stress drops, similar to those of recent earthquakes in the NE USA, while those of the August 2011 Trinidad, Colorado, earthquakes are moderate, intermediate between those typical of interplate settings (California) and the east coast. These earthquakes provide an unprecedented opportunity to study such source differences in detail, and hence improve our estimates of seismic hazard. Previously, the lack of well-recorded earthquakes in the eastern USA severely limited our resolution of the source processes and hence the expected ground accelerations. Our preliminary findings are consistent with the idea that earthquake faults strengthen during longer recurrence times and that intraplate faults fail at higher stress (and produce higher ground accelerations) than their interplate counterparts. We use the empirical Green's function (EGF) method to calculate source parameters for the Virginia mainshock and three larger aftershocks, and for the Trinidad mainshock and two larger foreshocks, using IRIS-available stations. We select time windows around the direct P and S waves at the closest stations and calculate spectral ratios and source time functions using the multi-taper spectral approach (e.g., Viegas et al., JGR 2010). Our preliminary results show that the Virginia sequence has high stress drops (~100-200 MPa, using the Madariaga (1976) model), and the Colorado sequence has moderate stress drops (~20 MPa). These numbers are consistent with previous work in the regions, for example the Au Sable Forks (2002) earthquake and the 2010 Germantown (MD) earthquake. We also calculate the radiated seismic energy and find the energy/moment ratio to be high for the Virginia earthquakes, and moderate for the Colorado sequence. We observe no evidence of a breakdown in constant stress drop scaling in this limited number of earthquakes. We extend our analysis to a larger number of earthquakes and stations
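Stress drops like those quoted follow from corner frequency and seismic moment via a circular-crack model; a sketch using the Madariaga (1976) S-wave constant (the shear velocity and the example moment/corner-frequency values are assumptions, not the study's measurements):

```python
def source_radius_m(fc_hz, beta_m_s=3500.0, k=0.21):
    """Madariaga (1976) S-wave relation: source radius r = k * beta / fc."""
    return k * beta_m_s / fc_hz

def stress_drop_pa(m0_nm, fc_hz, beta_m_s=3500.0, k=0.21):
    """Circular-crack (Eshelby) stress drop: (7/16) * M0 / r^3."""
    r = source_radius_m(fc_hz, beta_m_s, k)
    return (7.0 / 16.0) * m0_nm / r**3

# e.g. an Mw ~5.8 event (M0 ~ 5.6e17 N*m) with fc ~ 0.6 Hz lands in the
# ~100-200 MPa range reported for the Virginia sequence
dsig_mpa = stress_drop_pa(5.6e17, 0.6) / 1e6
```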

  7. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
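The analytical posterior underlying such a real-time scheme can be sketched in a few lines as a generic linear-Gaussian inversion (this is not the authors' code, and the toy Green's matrix below is invented for illustration):

```python
import numpy as np

def gaussian_linear_posterior(G, d, Cd_inv, Cm_inv, m_prior):
    """Analytical posterior for a linear-Gaussian slip inversion:
    Cm_post = (G^T Cd^-1 G + Cm^-1)^-1
    m_post  = Cm_post @ (G^T Cd^-1 d + Cm^-1 m_prior)
    Cheap enough to re-solve every time new high-rate GPS offsets arrive."""
    Cm_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    m_post = Cm_post @ (G.T @ Cd_inv @ d + Cm_inv @ m_prior)
    return m_post, Cm_post

# toy problem: two slip parameters seen through an invented Green's matrix
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
m_true = np.array([1.0, 2.0])
d = G @ m_true                                   # noise-free static offsets
m_post, Cm_post = gaussian_linear_posterior(
    G, d, np.eye(3) * 1e4, np.eye(2) * 1e-2, np.zeros(2))
assert np.allclose(m_post, m_true, atol=0.01)
```

In the study's full scheme the fault geometry is also searched over, with each candidate geometry scored by its Bayesian evidence; the closed-form posterior above is what makes that search feasible in real time.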

  8. Earthquake nucleation by transient deformations caused by the M = 7.9 Denali, Alaska, earthquake

    USGS Publications Warehouse

    Gomberg, J.; Bodin, P.; Larson, K.; Dragert, H.

    2004-01-01

    The permanent and dynamic (transient) stress changes inferred to trigger earthquakes are usually orders of magnitude smaller than the stresses relaxed by the earthquakes themselves, implying that triggering occurs on critically stressed faults. Triggered seismicity rate increases may therefore be most likely to occur in areas where loading rates are highest and elevated pore pressures, perhaps facilitated by high-temperature fluids, reduce frictional stresses and promote failure. Here we show that the 2002 magnitude M = 7.9 Denali, Alaska, earthquake triggered widespread seismicity rate increases throughout British Columbia and into the western United States. Dynamic triggering by seismic waves should be enhanced in directions where rupture directivity focuses radiated energy, and we verify this using seismic and new high-sample-rate GPS recordings of the Denali mainshock. These observations are comparable in scale only to the triggering caused by the 1992 M = 7.4 Landers, California, earthquake, and demonstrate that Landers triggering did not reflect some peculiarity of the region or the earthquake. However, the rate increases triggered by the Denali earthquake occurred in areas not obviously tectonically active, implying that even in areas of low ambient stressing rates, faults may still be critically stressed and that dynamic triggering may be ubiquitous and unpredictable.

  9. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of

  10. PAGER-CAT: A composite earthquake catalog for calibrating global fatality models

    USGS Publications Warehouse

    Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.

    2009-01-01

    We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy to use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of Shake Maps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are

  11. Modelling low-frequency volcanic earthquakes in a viscoelastic medium with topography

    NASA Astrophysics Data System (ADS)

    Jousset, Philippe; Neuberg, Jürgen; Jolly, Arthur

    2004-11-01

Magma properties are fundamental to explaining the volcanic eruption style as well as the generation and propagation of seismic waves. This study focuses on magma properties and rheology and their impact on low-frequency volcanic earthquakes. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency earthquakes. Using a 2-D finite-difference scheme, we model the propagation of seismic energy initiated in a fluid-filled conduit embedded in a homogeneous viscoelastic medium with topography. We model intrinsic attenuation by linear viscoelastic theory and show that volcanic media can be approximated by a standard linear solid (SLS) for seismic frequencies above 2 Hz. Results demonstrate that attenuation modifies both amplitudes and dispersive characteristics of low-frequency earthquakes. Low-frequency volcanic earthquakes are dispersive by nature; if attenuation is introduced, their dispersion characteristics are altered. The topography modifies the amplitudes, depending on the position of the seismographs at the surface. This study shows that attenuation and topography must be taken into account to correctly interpret observed low-frequency volcanic earthquakes. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.

  12. Earthquake triggering by seismic waves following the Landers and Hector Mine earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.A.; Bodin, P.; Harris, R.A.

    2001-01-01

    The proximity and similarity of the 1992, magnitude 7.3 Landers and 1999, magnitude 7.1 Hector Mine earthquakes in California permit testing of earthquake triggering hypotheses not previously possible. The Hector Mine earthquake confirmed inferences that transient, oscillatory 'dynamic' deformations radiated as seismic waves can trigger seismicity rate increases, as proposed for the Landers earthquake [1-6]. Here we quantify the spatial and temporal patterns of the seismicity rate changes [7]. The seismicity rate increase was to the north for the Landers earthquake and primarily to the south for the Hector Mine earthquake. We suggest that rupture directivity results in elevated dynamic deformations north and south of the Landers and Hector Mine faults, respectively, as evident in the asymmetry of the recorded seismic velocity fields. Both dynamic and static stress changes seem important for triggering in the near field with dynamic stress changes dominating at greater distances. Peak seismic velocities recorded for each earthquake suggest the existence of, and place bounds on, dynamic triggering thresholds. These thresholds vary from a few tenths to a few MPa in most places, depend on local conditions, and exceed inferred static thresholds by more than an order of magnitude. At some sites, the onset of triggering was delayed until after the dynamic deformations subsided. Physical mechanisms consistent with all these observations may be similar to those that give rise to liquefaction or cyclic fatigue.

  13. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
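    The probability bookkeeping involved can be sketched with a lognormal point-process renewal model, in which a static stress step advances the renewal clock by the stress change divided by the tectonic stressing rate. All numbers below are illustrative assumptions, not values from the study:

    ```python
    import math

    def lognorm_cdf(x, median, sigma):
        """CDF of a lognormal recurrence-interval distribution."""
        return 0.5 * (1.0 + math.erf(math.log(x / median) / (sigma * math.sqrt(2.0))))

    def cond_prob(elapsed, duration, median, sigma):
        """P(rupture within `duration` yr | fault quiet for `elapsed` yr)."""
        F = lambda t: lognorm_cdf(t, median, sigma)
        return (F(elapsed + duration) - F(elapsed)) / (1.0 - F(elapsed))

    # Hypothetical fault: ~150-yr median recurrence, 100 yr elapsed, 30-yr forecast.
    median, sigma = 150.0, 0.5
    p_before = cond_prob(100.0, 30.0, median, sigma)

    # A 0.5 MPa static stress step on a fault stressed at 0.05 MPa/yr advances
    # the clock by 10 yr (a 10:1 stress-change-to-stressing-rate ratio).
    p_after = cond_prob(100.0 + 0.5 / 0.05, 30.0, median, sigma)
    ```

    With the hazard rate still rising at this point in the cycle, the clock advance raises the 30-year probability; whether such a change is statistically significant, given the spread of admissible recurrence-aperiodicity pairs, is the question the paper's sampling procedure addresses.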

  14. Numerical Modeling and Forecasting of Strong Sumatra Earthquakes

    NASA Astrophysics Data System (ADS)

    Xing, H. L.; Yin, C.

    2007-12-01

    ESyS-Crustal, a finite-element-based computational model and software package, has been developed and applied to simulate complex nonlinear interacting fault systems, with the goal of accurately predicting earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, the simulation results using the developed software clearly indicate that the shallow part of the subduction zone in the Sumatra region between latitude 6S and 2N has been locked for a long time, and remained locked even after the northern part of the zone underwent a major slip event resulting in the infamous Boxing Day tsunami. Two strong earthquakes that occurred in the distant past in this region (between 6S and 1S), in 1797 (M8.2) and 1833 (M9.0) respectively, are indicative of the high potential for very large destructive earthquakes to occur in this region with relatively long periods of quiescence in between. The results were presented at the 5th ACES International Workshop in 2006, before the recent 2007 Sumatra earthquakes occurred, which fell exactly within the predicted zone (see the ACES2006 web site and the detailed presentation file through the workshop agenda). The preliminary simulation results obtained so far show that there seem to be a few obvious events around the previously locked zone before it is totally ruptured, but apparently no indication of a giant earthquake similar to the 2004 M9 event in the near future, which several earthquake scientists believe will happen. Further detailed simulations will be carried out and presented at the meeting.

  15. Earthquake triggering in the peri-adriatic regions induced by stress diffusion: insights from numerical modelling

    NASA Astrophysics Data System (ADS)

    D'Onza, F.; Viti, M.; Mantovani, E.; Albarello, D.

    2003-04-01

    earthquakes (roughly 1.5 years) can be explained by assuming that earthquake triggering is most probable when the maximum value of the strain rate reaches southern Italy and a value of 300-400 m^2 s^-1 is assumed for the diffusivity of the model. This result implies that the possibility of explaining the observed correlation as a consequence of stress diffusion depends on the reliability of the above choices. A discussion of this problem is reported. The time evolution of postseismic effects suggests that a significant far-field perturbation of velocity may persist for tens of years after the occurrence of the triggering event. For instance, the present velocity induced by the 1979 Montenegro event is comparable with the geodetic velocities observed in southern Italy.

  16. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity: large variations of Dc represent immature faults and lower variations of Dc represent mature faults, where fault maturity is related to the variability of Dc on a microscopic scale. Moreover we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip, and combined area of asperities versus moment magnitude. Finally, the

  17. Finite element models of earthquake cycles in mature strike-slip fault zones

    NASA Astrophysics Data System (ADS)

    Lynch, John Charles

    The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, there are two main questions posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a

  18. Modeling temporal changes of low-frequency earthquake bursts near Parkfield, CA

    NASA Astrophysics Data System (ADS)

    Wu, C.; Daub, E. G.

    2016-12-01

    Tectonic tremor and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments over the last decade. LFEs are presumed to be caused by failure of deep fault patches during a slow slip event, and the long-term variation in LFE recurrence could provide crucial insight into the deep fault zone processes that may lead to future large earthquakes. However, the physical mechanisms causing the temporal changes of LFE recurrence are still under debate. In this study, we combine observations of long-term changes in LFE burst activity near Parkfield, CA with a brittle and ductile friction (BDF) model, and use the model to constrain the possible physical mechanisms causing the observed long-term changes in LFE burst activity after the 2004 M6 Parkfield earthquake. The BDF model mimics the slipping of deep fault patches with a spring-dragged block slider having both brittle and ductile friction components. We use the BDF model to test possible mechanisms including static stress imposed by the Parkfield earthquake, changes in pore pressure, tectonic force, afterslip, brittle friction strength, and brittle contact failure distance. The simulation results suggest that changes in brittle friction strength and failure distance are more likely to cause the observed changes in LFE bursts than the other mechanisms.

  19. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests.
We show how the Reliability and

  20. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
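    The interevent-time comparison made in the lab can be mimicked numerically: for a Poisson process the interevent times are exponential with mean 1/λ, so the fraction of intervals exceeding a given t should decay as exp(-λt). A minimal sketch (the rate λ and sample size are arbitrary illustrative values):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    lam = 2.0  # event rate (events per unit time), illustrative
    # Interevent times of a Poisson process are exponential with mean 1/lam.
    intervals = rng.exponential(1.0 / lam, size=5000)

    # Empirical survival fraction at t versus the Poisson prediction exp(-lam * t);
    # a histogram of `intervals` is the classroom version of the same comparison.
    t = 0.5
    empirical = (intervals > t).mean()
    predicted = np.exp(-lam * t)
    ```

    Plotting a histogram of `intervals` against the exponential density is the quantitative counterpart of comparing earthquake and blockquake interevent-time histograms in the exercise.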

  1. Finite-fault slip model of the 2016 Mw 7.5 Chiloé earthquake, southern Chile, estimated from Sentinel-1 data

    NASA Astrophysics Data System (ADS)

    Xu, Wenbin

    2017-05-01

    Subduction earthquakes have been widely studied in the Chilean subduction zone, but earthquakes occurring in its southern part have attracted less research interest primarily due to its lower rate of seismic activity. Here I use Sentinel-1 interferometric synthetic aperture radar (InSAR) data and range offset measurements to generate coseismic crustal deformation maps of the 2016 Mw 7.5 Chiloé earthquake in southern Chile. I find concentrated crustal deformation with ground displacement of approximately 50 cm in the southern part of the Chiloé island. The best fitting fault model shows a pure thrust-fault motion on a shallow dipping plane oriented 4° NNE. The InSAR-determined moment is 2.4 × 10^20 N m with a shear modulus of 30 GPa, equivalent to Mw 7.56, which is slightly lower than the seismic moment. The model shows that the slip did not reach the trench, and it reruptured part of the fault that ruptured in the 1960 Mw 9.5 earthquake. The 2016 event has only released a small portion of the accumulated strain energy on the 1960 rupture zone, suggesting that the seismic hazard of future great earthquakes in southern Chile is high.
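    The moment-to-magnitude conversion used in such studies follows the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1) for M0 in N m; a quick check with the InSAR-determined moment quoted above:

    ```python
    import math

    m0 = 2.4e20                                # seismic moment, N m
    mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)  # Hanks & Kanamori moment magnitude
    # Evaluates to roughly 7.5, in the neighborhood of the Mw 7.56 quoted in
    # the abstract (small differences arise from rounding conventions).
    ```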

  2. The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.

    2011-12-01

    Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, understanding the activity and hazard of active faults is an important subject. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults on the island of Taiwan, some of which are related to earthquakes. Many researchers have investigated these active faults and continuously update new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field work results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or earthquake relocations of a fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect studies of fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the active fault earthquake recurrence interval. For the other parameters, we collect previous studies or historical references to complete our parameter table of active faults in Taiwan. The WG08 performed the time-dependent earthquake hazard assessment of active faults in California: they established fault models, deformation models, earthquake rate models, and probability models, and then computed the probability of faults in California.
Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards in certain
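    The parameter chain described above (maximum magnitude from fault dimensions, then a characteristic-earthquake recurrence interval from the slip budget) can be sketched as follows. The fault dimensions, slip rate, and rigidity are hypothetical, and the area-magnitude regression is the all-slip-type relation of Wells and Coppersmith (1994), one common choice for such scaling:

    ```python
    import math

    # Hypothetical fault: 80 km long, 15 km seismogenic depth, 10 mm/yr slip rate.
    area_km2 = 80.0 * 15.0
    slip_rate_m_per_yr = 0.010
    rigidity_pa = 3.0e10

    # Maximum magnitude from rupture area (Wells & Coppersmith 1994, all types).
    mw_max = 4.07 + 0.98 * math.log10(area_km2)

    # Characteristic model: recurrence = slip per event / long-term slip rate,
    # with slip per event recovered from the seismic moment M0 = mu * A * D.
    m0 = 10.0 ** (1.5 * mw_max + 9.1)                        # N m
    slip_per_event_m = m0 / (rigidity_pa * area_km2 * 1.0e6)  # average slip, m
    recurrence_yr = slip_per_event_m / slip_rate_m_per_yr
    ```

    For these illustrative inputs the chain yields a characteristic magnitude near Mw 7.1 and a recurrence interval on the order of 150 years; a real parameter table would replace each input with the fault-specific values gathered from the literature.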

  3. Evaluation of CAMEL - comprehensive areal model of earthquake-induced landslides

    USGS Publications Warehouse

    Miles, S.B.; Keefer, D.K.

    2009-01-01

    A new comprehensive areal model of earthquake-induced landslides (CAMEL) has been developed to assist in planning decisions related to disaster risk reduction. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using fuzzy logic systems and geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about the effects of these conditions on the likely areal concentration of each landslide type. CAMEL has been empirically evaluated with respect to disrupted landslides (Category I) using a case study of the 1989 M = 6.9 Loma Prieta, CA earthquake. In this case, CAMEL performs best in comparison to disrupted slides and falls in soil. For disrupted rock falls and slides, CAMEL's performance was slightly poorer. The model predicted a low occurrence of rock avalanches, when none in fact occurred. A similar comparison with the Loma Prieta case study was also conducted using a simplified Newmark displacement model. The area-under-the-curve method of evaluation was used in order to draw comparisons between the two models, revealing improved performance with CAMEL. CAMEL should not, however, be viewed as a strict alternative to Newmark displacement models. CAMEL can be used to integrate Newmark displacements with other, previously incompatible, types of knowledge. © 2008 Elsevier B.V.

  4. GIS-based support vector machine modeling of earthquake-triggered landslide susceptibility in the Jianjiang River watershed, China

    NASA Astrophysics Data System (ADS)

    Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi

    2012-04-01

    Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake-triggered landslide susceptibility mapping for a section of the Jianjiang River watershed using Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative, though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis.
Group 3 with
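    The workflow (terrain attributes in, landslide/stable labels out, four kernels compared) can be sketched with scikit-learn. The six features and the synthetic training points below are stand-ins for the study's real GIS layers and sample groups:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)

    # Six stand-in features per cell (elevation, slope angle, slope aspect,
    # distance to seismogenic faults, distance to drainages, lithology code).
    n = 300
    X_slide = rng.normal(1.0, 1.0, size=(n, 6))    # cells with landslides
    X_stable = rng.normal(-1.0, 1.0, size=(n, 6))  # cells that stayed stable
    X = np.vstack([X_slide, X_stable])
    y = np.concatenate([np.ones(n), np.zeros(n)])

    # Train one SVM per kernel, mirroring the study's four-kernel comparison.
    scores = {}
    for kernel in ("linear", "poly", "rbf", "sigmoid"):
        clf = SVC(kernel=kernel, random_state=0).fit(X, y)
        scores[kernel] = clf.score(X, y)
    ```

    In practice each trained classifier would be applied cell-by-cell across the watershed to produce a susceptibility map, and the maps validated against held-out landslide locations with area-under-curve analysis as the study describes.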

  5. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  6. Crustal deformation, the earthquake cycle, and models of viscoelastic flow in the asthenosphere

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.; Kramer, M. J.

    1983-01-01

    The crustal deformation patterns associated with the earthquake cycle can depend strongly on the rheological properties of subcrustal material. Substantial deviations from the simple patterns for a uniformly elastic earth are expected when viscoelastic flow of subcrustal material is considered. The detailed description of the deformation pattern and in particular the surface displacements, displacement rates, strains, and strain rates depend on the structure and geometry of the material near the seismogenic zone. The origins of some of these differences are resolved by analyzing several different linear viscoelastic models with a common finite element computational technique. The models involve strike-slip faulting and include a thin channel asthenosphere model, a model with a varying thickness lithosphere, and a model with a viscoelastic inclusion below the brittle slip plane. The calculations reveal that the surface deformation pattern is most sensitive to the rheology of the material that lies below the slip plane in a volume whose extent is a few times the fault depth. If this material is viscoelastic, the surface deformation pattern resembles that of an elastic layer lying over a viscoelastic half-space. When the thickness or breadth of the viscoelastic material is less than a few times the fault depth, then the surface deformation pattern is altered and geodetic measurements are potentially useful for studying the details of subsurface geometry and structure. Distinguishing among the various models is best accomplished by making geodetic measurements not only near the fault but out to distances equal to several times the fault depth. This is where the model differences are greatest; these differences will be most readily detected shortly after an earthquake when viscoelastic effects are most pronounced.

  7. Crustal deformation in Great California Earthquake cycles

    NASA Technical Reports Server (NTRS)

    Li, Victor C.; Rice, James R.

    1987-01-01

    A model in which coupling is described approximately through a generalized Elsasser model is proposed for computation of the periodic crustal deformation associated with repeated strike-slip earthquakes. The model is found to provide a more realistic physical description of tectonic loading than do simpler kinematic models. Parameters are chosen to model the 1857 and 1906 San Andreas ruptures, and predictions are found to be consistent with data on variations of contemporary surface strain and displacement rates as a function of distance from the 1857 and 1906 rupture traces. Results indicate that the asthenosphere appropriate to describe crustal deformation on the earthquake cycle time scale lies in the lower crust and perhaps the crust-mantle transition zone.

  8. PAGER—Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  9. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
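    When the date of the last event is unknown, averaging the conditional renewal probability over the stationary distribution of elapsed time reduces to E[min(T, ΔT)]/μ, where T is a recurrence-interval sample, μ its mean, and ΔT the forecast duration. A Monte Carlo sketch using a lognormal renewal distribution (one of several distributions such an approach admits; the parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def poisson_prob(duration, mean_ri):
        """Time-independent Poisson probability of at least one event."""
        return 1.0 - np.exp(-duration / mean_ri)

    def renewal_prob_unknown_last(duration, mean_ri, aperiodicity, n=200_000):
        """Renewal probability with unknown date of last event:
        P = E[min(T, duration)] / mean_ri over the recurrence distribution."""
        sigma = np.sqrt(np.log(1.0 + aperiodicity**2))  # lognormal shape from CoV
        mu = np.log(mean_ri) - 0.5 * sigma**2           # ensures E[T] = mean_ri
        t = rng.lognormal(mu, sigma, n)
        return np.minimum(t, duration).mean() / mean_ri

    # 30-yr forecast on a fault with 100-yr mean recurrence, aperiodicity 0.5.
    p_pois = poisson_prob(30.0, 100.0)
    p_renew = renewal_prob_unknown_last(30.0, 100.0, 0.5)
    ```

    For these illustrative numbers the renewal probability exceeds the Poisson value by roughly 15%, in line with the abstract's observation that the excess passes 10% once the forecast duration exceeds about a fifth of the mean recurrence interval.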

  10. Combination of High Rate, Real-Time GNSS and Accelerometer Observations and Rapid Seismic Event Notification for Earthquake Early Warning and Volcano Monitoring with a Focus on the Pacific Rim.

    NASA Astrophysics Data System (ADS)

    Zimakov, L. G.; Passmore, P.; Raczka, J.; Alvarez, M.; Jackson, M.

    2014-12-01

    Scientific GNSS networks are moving towards a model of real-time data acquisition, epoch-by-epoch storage integrity, and on-board real-time position and displacement calculations. This new paradigm allows the integration of real-time, high-rate GNSS displacement information with acceleration and velocity data to create very high-rate displacement records. The mating of these two instruments allows the creation of a new, very high-rate (200 sps) displacement observable that has the full-scale displacement characteristics of GNSS and high-precision dynamic motions of seismic technologies. It is envisioned that these new observables can be used for earthquake early warning studies, volcano monitoring, and critical infrastructure monitoring applications. Our presentation will focus on the characteristics of GNSS, seismic, and strong motion sensors in high dynamic environments, including historic earthquakes in Southern California and the Pacific Rim, replicated on a shake table, over a range of displacements and frequencies. We will explore the optimum integration of these sensors from a filtering perspective including simple harmonic impulses over varying frequencies and amplitudes and under the dynamic conditions of various earthquake scenarios. In addition we will discuss implementation of a Rapid Seismic Event Notification System that provides quick delivery of digital data from seismic stations to the acquisition and processing center and a full data integrity model for real-time earthquake notification that provides warning prior to significant ground shaking.

  11. Understanding earthquake from the granular physics point of view — Causes of earthquake, earthquake precursors and predictions

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    We treat the Earth's crust and mantle as large-scale discrete matter, based on the principles of granular physics and existing experimental observations. The main outcomes are: a granular model of the structure and movement of the crust and mantle is established; the formation mechanism of the tectonic forces that cause earthquakes, and a model for the propagation of precursory information, are proposed; the properties of seismic precursory information and its relevance to earthquake occurrence are illustrated, and principles for detecting effective seismic precursors are elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena that were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Because of the discrete nature of the crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and prediction, and obtain a new understanding that differs from the traditional seismological viewpoint.

  12. Combination of High Rate, Real-time GNSS and Accelerometer Observations - Preliminary Results Using a Shake Table and Historic Earthquake Events.

    NASA Astrophysics Data System (ADS)

    Jackson, Michael; Passmore, Paul; Zimakov, Leonid; Raczka, Jared

    2014-05-01

    One of the fundamental requirements of an Earthquake Early Warning (EEW) system (and other mission-critical applications) is to quickly detect and process information from a strong-motion event, i.e. event detection and location, magnitude estimation, and estimation of peak ground motion at the defined target site, allowing the civil protection authorities to trigger pre-programmed emergency response actions: slow down or stop rapid transit and high-speed trains; shut off gas pipelines and chemical facilities; stop elevators at the nearest floor; and send alarms to hospitals, schools, and other civil institutions. An important question associated with an EEW system is: can we measure displacements in real time with sufficient accuracy? Scientific GNSS networks are moving towards a model of real-time data acquisition, storage integrity, and real-time position and displacement calculations. This new paradigm allows the integration of real-time, high-rate GNSS displacement information with acceleration and velocity data to create very high-rate displacement records. The mating of these two instruments allows the creation of a new, very high-rate (200 Hz) displacement observable that has the full-scale displacement characteristics of GNSS and the high-precision dynamic motions of seismic technologies. It is envisioned that these new observables can be used for earthquake early warning studies and other mission-critical applications, such as volcano monitoring and building, bridge, and dam monitoring systems. REF TEK, a Division of Trimble, has developed the integrated GNSS/Accelerograph system, model 160-09SG, which consists of REF TEK's fourth-generation electronics, a 147-01 high-resolution ANSS Class A accelerometer, and a Trimble GNSS receiver and antenna capable of real-time, on-board Precise Point Positioning (PPP) with satellite clock and orbit corrections delivered to the receiver directly via L-band satellite communications. The test we

  13. Slip rate and slip magnitudes of past earthquakes along the Bogd left-lateral strike-slip fault (Mongolia)

    USGS Publications Warehouse

    Prentice, Carol S.; Rizza, M.; Ritz, J.F.; Baucher, R.; Vassallo, R.; Mahan, S.

    2011-01-01

    We carried out morphotectonic studies along the left-lateral strike-slip Bogd Fault, the principal structure involved in the Gobi-Altay earthquake of 1957 December 4 (published magnitudes range from 7.8 to 8.3). The Bogd Fault is 260 km long and can be subdivided into five main geometric segments, based on variation in strike direction. From west to east these segments are, respectively: the West Ih Bogd (WIB), the North Ih Bogd (NIB), the East Ih Bogd (EIB), the West Baga Bogd (WBB), and the East Baga Bogd (EBB) segments. Morphological analysis of offset streams, ridges and alluvial fans—particularly well preserved in the arid environment of the Gobi region—allows evaluation of late Quaternary slip rates along the different fault segments. In this paper, we measure slip rates over the past 200 ka at four sites distributed across the three western segments of the Bogd Fault. Our results show that the left-lateral slip rate is ∼1 mm yr–1 along the WIB and EIB segments and ∼0.5 mm yr–1 along the NIB segment. These variations are consistent with the restraining bend geometry of the Bogd Fault. Our study also provides additional estimates of the horizontal offset associated with the 1957 earthquake along the western part of the Bogd rupture, complementing previously published studies. We show that the mean horizontal offset associated with the 1957 earthquake decreases progressively from 5.2 m in the west to 2.0 m in the east, reflecting the progressive change in kinematic style from pure left-lateral strike-slip faulting to left-lateral-reverse faulting. Along the three western segments, we measure cumulative displacements that are multiples of the 1957 coseismic offset, which may be consistent with characteristic slip. Moreover, using these data, we re-estimate the moment magnitude of the Gobi-Altay earthquake at Mw 7.78–7.95. Combining our slip rate estimates and the slip distribution per event, we also determined a mean recurrence interval of ∼2500

  14. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point-source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed a simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment, corner frequency, and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small- to moderate-sized earthquake (M < 4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.

  15. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using computed values of the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.

  16. Large Occurrence Patterns of New Zealand Deep Earthquakes: Characterization by Use of a Switching Poisson Model

    NASA Astrophysics Data System (ADS)

    Shaochuan, Lu; Vere-Jones, David

    2011-10-01

    The paper studies the statistical properties of deep earthquakes around North Island, New Zealand. We first evaluate the catalogue coverage and completeness of deep events according to cusum (cumulative sum) statistics and earlier literature. The epicentral, depth, and magnitude distributions of deep earthquakes are then discussed. It is worth noting that strong grouping effects are observed in the epicentral distribution of these deep earthquakes. Also, although the spatial distribution of deep earthquakes does not change, their occurrence frequencies vary from time to time, active in one period, relatively quiescent in another. The depth distribution of deep earthquakes also hardly changes except for events with focal depth less than 100 km. On the basis of spatial concentration we partition deep earthquakes into several groups—the Taupo-Bay of Plenty group, the Taranaki group, and the Cook Strait group. Second-order moment analysis via the two-point correlation function reveals only very small-scale clustering of deep earthquakes, presumably limited to some hot spots only. We also suggest that some models usually used for shallow earthquakes fit deep earthquakes unsatisfactorily. Instead, we propose a switching Poisson model for the occurrence patterns of deep earthquakes. The goodness-of-fit test suggests that the time-varying activity is well characterized by a switching Poisson model. Furthermore, detailed analysis carried out on each deep group by use of switching Poisson models reveals similar time-varying behavior in occurrence frequencies in each group.
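The switching Poisson model described in this abstract alternates between regimes of different occurrence rates while event timing within each regime stays Poissonian. A minimal simulation sketch, with purely illustrative rates and state-holding times (none taken from the paper):

```python
import random

def simulate_switching_poisson(rate_active=2.0, rate_quiet=0.5,
                               mean_stay=5.0, t_end=100.0, seed=1):
    """Two-state switching Poisson process: the event rate alternates
    between an active and a quiescent level; the time spent in each
    state is exponentially distributed (all parameters hypothetical)."""
    rng = random.Random(seed)
    rates = (rate_active, rate_quiet)
    t, state, events = 0.0, 0, []
    while t < t_end:
        stay = rng.expovariate(1.0 / mean_stay)   # holding time in this state
        seg_end = min(t + stay, t_end)
        # generate Poisson events within the segment at the state's rate
        u = t
        while True:
            u += rng.expovariate(rates[state])
            if u >= seg_end:
                break
            events.append(u)
        t = seg_end
        state = 1 - state                         # switch regime
    return events
```

Fitting such a model to a deep-earthquake catalog would additionally require estimating the two rates and the switching epochs (e.g., by maximum likelihood), which this sketch does not attempt.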

  17. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes.

    PubMed

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability, and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found to be more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a decrease in the expected number of casualties.
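Folding human-related risk factors into a baseline casualty probability via logistic regression, as this abstract describes, amounts to shifting the log-odds by the log odds ratio of each factor present. A minimal sketch; the factor names and odds-ratio values below are hypothetical placeholders, not the paper's estimates:

```python
import math

def adjusted_casualty_probability(p_base, odds_ratios, present):
    """Adjust a baseline casualty probability p_base by multiplying its
    odds by the odds ratio of each risk factor that is present:
    logit(p_adj) = logit(p_base) + sum_i log(OR_i) * x_i."""
    logit = math.log(p_base / (1.0 - p_base))
    for name, x in present.items():           # x is a 0/1 indicator
        logit += math.log(odds_ratios[name]) * x
    return 1.0 / (1.0 + math.exp(-logit))
```

For example, with a hypothetical baseline injury probability of 1% and an odds ratio of 1.8 for one factor, the adjusted probability rises to about 1.8% (exactly 1.8/100.8 in odds form).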

  18. A fault slip model of the 2016 Meinong, Taiwan, earthquake from near-source strong motion and high-rate GPS waveforms

    NASA Astrophysics Data System (ADS)

    Rau, Ruey-Juin; Wen, Yi-Ying; Tseng, Po-Ching; Chen, Wei-Cheng; Cheu, Chi-Yu; Hsieh, Min-Che; Ching, Kuo-En

    2017-04-01

    The 6 February 2016 MW 6.5 Meinong earthquake (03:57:26.1 local time) occurred about 30 km ESE of Tainan City at a focal depth of 14.6 km. Although a mid-crust, moderate-sized event, it produced widespread strong shaking in Tainan City, 30 km away, causing the collapse of about 10 buildings and 117 deaths. Furthermore, the earthquake created a 20 x 10 km2 dome-shaped structure with a maximum uplift of 13 cm between the epicenter and Tainan City. We collected 81 50-Hz GPS and 130 strong motion records within 60 km epicentral distance. High-rate GPS data are processed with GIPSY 6.4, and the calculated GPS displacement wavefield record section shows 40-60 cm Peak Ground Displacement (PGD) concentrated 25-30 km WNW of the epicenter. The large PGDs correspond to 65-85 cm/sec PGV, significantly larger than near-fault ground motions collected from moderate-sized earthquakes worldwide. To investigate the source properties of the causative fault, considering azimuthal coverage and data quality, we selected waveform data from 10 50-Hz GPS stations and 10 free-field 200-Hz strong motion stations to invert for the finite source parameters using a non-negative least squares approach. A bandpass filter of 0.05-0.5 Hz is applied to both the high-rate GPS and strong motion data, with a sampling interval of 0.1 sec. The fault plane parameters (strike 281 degrees, dip 24 degrees) derived from the Global Centroid Moment Tensor (CMT) are used in the finite fault inversion. The results of our joint GPS and strong motion data inversion indicate two major slip patches. The first large-slip patch occurred just below the hypocenter, propagating westward in a 15-25 km depth range. The second high-slip patch appeared at 5-10 km depth, slipping westward under the western side of the uplifted structure shown in the InSAR image. These two large-slip patches appear to be devoid of aftershock seismicity, which concentrated mainly in the low-slip zones.

  19. Near-field postseismic deformation associated with the 1992 Landers and 1999 Hector Mine, California, earthquakes

    USGS Publications Warehouse

    Savage, J.C.; Svarc, J.L.; Prescott, W.H.

    2003-01-01

    After the Landers earthquake (Mw = 7.3, 1992.489) a linear array of 10 monuments extending about 30 km N50°E on either side of the earthquake rupture, plus a nearby off-trend reference monument, were surveyed frequently by GPS until 2003.2. The array also spans the rupture of the subsequent Hector Mine earthquake (Mw = 7.1, 1999.792). The pre-Landers velocities of monuments in the array relative to interior North America were estimated from earlier trilateration and very long baseline interferometry measurements. Except at the reference monument, the post-Landers velocities of the individual monuments in the array relaxed to their preseismic values within 4 years. Following the Hector Mine earthquake the velocities of the monuments relaxed to steady rates within 1 year. Those steady rates for the east components are about equal to the pre-Landers rates, as is the steady rate for the north component of the one monument east of the Hector Mine rupture. However, the steady rates for the north components of the 10 monuments west of the rupture are systematically ∼10 mm yr−1 larger than the pre-Landers rates. The relaxation to a steady rate is approximately exponential, with decay times of 0.50 ± 0.10 yr following the Landers earthquake and 0.32 ± 0.18 yr following the Hector Mine earthquake. The postearthquake motions of the Landers array following the Landers earthquake are not well approximated by the viscoelastic-coupling model of Pollitz et al. [2000]. A similar viscoelastic-coupling model [Pollitz et al., 2001] is more successful in representing the deformation after the Hector Mine earthquake.
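The exponential relaxation of postseismic velocities described in this abstract, v(t) = v_steady + A·exp(−t/τ), can be fit with a simple grid search over the decay time τ, solving for the best amplitude in closed form at each candidate τ. A sketch under assumed synthetic data; the function and variable names are illustrative, not from the paper:

```python
import math

def fit_decay_time(times, rates, v_steady, tau_grid):
    """Fit v(t) = v_steady + A * exp(-t / tau) to velocity estimates by
    grid search over tau, with the least-squares amplitude A computed
    analytically for each candidate tau."""
    best = None
    for tau in tau_grid:
        e = [math.exp(-t / tau) for t in times]
        w = [v - v_steady for v in rates]
        a = sum(x * y for x, y in zip(e, w)) / sum(x * x for x in e)
        rss = sum((y - a * x) ** 2 for x, y in zip(e, w))
        if best is None or rss < best[0]:
            best = (rss, tau, a)
    return best[1], best[2]   # (decay time, amplitude)
```

On noiseless synthetic velocities the grid search recovers the generating decay time exactly when it lies on the grid; with real geodetic data the quoted uncertainties (e.g., ±0.10 yr) would come from the residual statistics.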

  20. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within this framework we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to both simplification (e.g. simple two-fault cases) and complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  1. Modeling Crustal Deformation Due to the Landers, Hector Mine Earthquakes Using the SCEC Community Fault Model

    NASA Astrophysics Data System (ADS)

    Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.

    2006-12-01

    More realistic models of crustal deformation are possible due to advances in measurement and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model incorporates the Southern California Earthquake Center (SCEC) Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical, and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and en echelon offsets of the fault. A three-dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on the faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code PyLith. As a first step, the methodology of incorporating all these data is described. Results for the time history of stress and strain transfer between 1992 and 1999 are analyzed, as well as the time history of deformation from 1999 to the present.

  2. Global earthquake fatalities and population

    USGS Publications Warehouse

    Holzer, Thomas L.; Savage, James C.

    2013-01-01

    Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
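The catastrophic-earthquake component described in this abstract is a nonstationary Poisson process whose rate is proportional to world population. A minimal sketch of the bookkeeping: integrate the rate over a population trajectory to get an expected 21st-century count, then evaluate Poisson probabilities. All numbers below (the linear population curve and the calibration constant) are illustrative placeholders, not the paper's values:

```python
import math

def expected_catastrophic_quakes(pop_curve, k):
    """Expected event count for a Poisson process with rate k * population(t),
    integrating population over time by the trapezoid rule.
    pop_curve: list of (year, population) samples; k: events per
    (population unit x year), calibrated from a historical count."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(pop_curve, pop_curve[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)
    return k * total

def poisson_pmf(n, lam):
    """P(N = n) for a Poisson count; evaluated in log space via lgamma
    so large n does not overflow."""
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))
```

For instance, with a hypothetical linear population rise from 6.1 to 10.1 billion over 2000-2100 and a rate constant calibrated as 4 events per 250 billion person-years, the expected century count is k × 810 billion person-years.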

  3. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked, west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington City to become more resilient, through an encompassing study of the likelihood of large earthquakes and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then, we consider a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions, and rupture propagation directions. Our comprehensive study also includes simulations of historical large subduction events worldwide translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan earthquake and the 2010 M8.8 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground

  4. Combined GPS and InSAR models of postseismic deformation from the Northridge Earthquake

    NASA Technical Reports Server (NTRS)

    Donnellan, A.; Parker, J. W.; Peltzer, G.

    2002-01-01

    Models of combined Global Positioning System and Interferometric Synthetic Aperture Radar data collected in the region of the Northridge earthquake indicate that significant afterslip on the main fault occurred following the earthquake.

  5. Spatial modeling for estimation of earthquakes economic loss in West Java

    NASA Astrophysics Data System (ADS)

    Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma

    2017-07-01

    Indonesia has a high vulnerability to earthquakes, and low adaptive capacity can turn an earthquake into a disaster of serious concern. This is why risk management should be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and also the Megathrust subduction zone. This research estimates the economic loss from several earthquake sources in West Java. The economic loss is calculated using the HAZUS method, whose components are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to access by presenting a distribution map rather than only tabular data. As a result, West Java could suffer economic losses of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703 IDR, estimated from six earthquake sources with the maximum possible magnitudes. However, this estimate represents a worst-case earthquake occurrence and is probably an overestimate.
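The hazard-exposure-vulnerability combination used in a HAZUS-style calculation reduces, for direct building losses, to replacement cost times a mean damage ratio looked up from the shaking intensity at each exposed asset. A minimal aggregation sketch; the damage-ratio table and inventory below are hypothetical, not from this study:

```python
def economic_loss(buildings, damage_ratio):
    """HAZUS-style direct economic loss: sum over the building inventory of
    replacement cost times the mean damage ratio for the local intensity.
    buildings: list of (replacement_cost, intensity_bin) tuples;
    damage_ratio: maps an intensity bin to a mean damage ratio in [0, 1]."""
    return sum(cost * damage_ratio[mmi] for cost, mmi in buildings)
```

In a spatial implementation, each tuple would come from intersecting the building exposure layer with a ground motion map, which is what the distribution-map modeling in the study automates.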

  6. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    USGS Publications Warehouse

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time of day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time of day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global

  7. Viscoelastic-coupling model for the earthquake cycle driven from below

    USGS Publications Warehouse

    Savage, J.C.

    2000-01-01

    In a linear system the earthquake cycle can be represented as the sum of a solution which reproduces the earthquake cycle itself (viscoelastic-coupling model) and a solution that provides the driving force. We consider two cases, one in which the earthquake cycle is driven by stresses transmitted along the schizosphere and a second in which the cycle is driven from below by stresses transmitted along the upper mantle (i.e., the schizosphere and upper mantle, respectively, act as stress guides in the lithosphere). In both cases the driving stress is attributed to steady motion of the stress guide, and the upper crust is assumed to be elastic. The surface deformation that accumulates during the interseismic interval depends solely upon the earthquake-cycle solution (viscoelastic-coupling model) not upon the driving source solution. Thus geodetic observations of interseismic deformation are insensitive to the source of the driving forces in a linear system. In particular, the suggestion of Bourne et al. [1998] that the deformation that accumulates across a transform fault system in the interseismic interval is a replica of the deformation that accumulates in the upper mantle during the same interval does not appear to be correct for linear systems.

  8. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. 
In some portions of the simulated earthquake history, events would
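The LTFM mechanism described above can be sketched in a few lines: strain accumulates at a constant loading rate, the rupture probability grows with stored strain, and each event releases only a fraction of that strain, so clusters emerge naturally. All parameter values below are hypothetical, chosen only to illustrate the behavior; they are not taken from the study.

```python
import random

def simulate_ltfm(steps=10_000, load_rate=1.0, release_frac=0.7,
                  prob_scale=1e-4, seed=42):
    """Toy Long-Term Fault Memory simulation: rupture probability grows
    with stored strain; each event releases only part of that strain,
    so the fault 'remembers' more than just the last earthquake."""
    rng = random.Random(seed)
    strain = 0.0
    event_times = []
    for t in range(steps):
        strain += load_rate
        if rng.random() < prob_scale * strain:   # hazard grows with strain
            event_times.append(t)
            strain *= (1.0 - release_frac)       # partial strain release
    return event_times

events = simulate_ltfm()
```

Because strain is only partially released, a recent event leaves the fault closer to failure than a full stress-drop model would, which is what produces temporal clustering in the simulated histories.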

  9. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.

    2016-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC on 1 September 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods.
We describe the open-source CSEP software that is available to researchers as

  10. Optimized volume models of earthquake-triggered landslides

    PubMed Central

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-01-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the 3 optimized models we proposed, respectively. Two data-fitting methods, log-transformed linear and original-data nonlinear least squares, were applied to the 4 models. Results show that original-data nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect performs best. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The result from the relationship between quake magnitude and entire landslide volume for individual earthquakes is much less than that from this study, which underscores the need to update the power-law relationship. PMID:27404212

  11. Optimized volume models of earthquake-triggered landslides.

    PubMed

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-07-12

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data-fitting methods, log-transformed linear and original-data nonlinear least squares, were applied to the 4 models. Results show that original-data nonlinear least squares combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect performs best. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas, respectively. The result from the relationship between quake magnitude and entire landslide volume for individual earthquakes is much less than that from this study, which underscores the need to update the power-law relationship.
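The log-transformed linear fit mentioned above can be illustrated on synthetic data. The power-law coefficients and noise level below are hypothetical (the abstract does not give the fitted values), and only the log-linear variant is sketched; the authors found nonlinear least squares on the original data preferable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic landslide areas (m^2) and volumes from an assumed
# power law V = a * A**gamma with multiplicative lognormal noise.
a_true, gamma_true = 0.05, 1.35          # hypothetical coefficients
A = rng.uniform(1e2, 1e5, size=500)
V = a_true * A**gamma_true * rng.lognormal(0.0, 0.3, size=500)

# Log-transformed linear least squares: log V = log a + gamma * log A
gamma_fit, log_a_fit = np.polyfit(np.log(A), np.log(V), 1)
a_fit = np.exp(log_a_fit)

# Estimated total volume from the fitted relationship
total_est = float(np.sum(a_fit * A**gamma_fit))
```

Fitting in log space weights small and large landslides equally, which is one reason a fit in the original data space can give a noticeably different total-volume estimate.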

  12. Inferring rupture characteristics using new databases for 3D slab geometry and earthquake rupture models

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Plescia, S. M.; Moore, G.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center has recently published a database of finite fault models for globally distributed M7.5+ earthquakes since 1990. Concurrently, we have also compiled a database of three-dimensional slab geometry models for all global subduction zones, to update and replace Slab1.0. Here, we use these two new and valuable resources to infer characteristics of earthquake rupture and propagation in subduction zones, where the vast majority of large-to-great-sized earthquakes occur. For example, we can test questions that are fairly prevalent in seismological literature. Do large ruptures preferentially occur where subduction zones are flat (e.g., Bletery et al., 2016)? Can 'flatness' be mapped to understand and quantify earthquake potential? Do the ends of ruptures correlate with significant changes in slab geometry, and/or bathymetric features entering the subduction zone? Do local subduction zone geometry changes spatially correlate with areas of low slip in rupture models (e.g., Moreno et al., 2012)? Is there a correlation between average seismogenic zone dip, and/or seismogenic zone width, and earthquake size (e.g., Hayes et al., 2012; Heuret et al., 2011)? These issues are fundamental to the understanding of earthquake rupture dynamics and subduction zone seismogenesis, and yet many are poorly understood or are still debated in scientific literature. We attempt to address these questions and similar issues in this presentation, and show how these models can be used to improve our understanding of earthquake hazard in subduction zones.

  13. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843 Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that, without incorporation of the best available catalogs of historical earthquakes, seismic hazard and/or the maximum possible magnitude will likely be significantly underestimated in many regions, including parts of the Caribbean.

  14. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extended > 150 km NW from the hypocenter, longer than slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  15. Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2011-04-01

    Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and the tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.

  16. Earthquake recurrence models and occurrence probabilities of strong earthquakes in the North Aegean Trough (Greece)

    NASA Astrophysics Data System (ADS)

    Christos, Kourouklas; Eleftheria, Papadimitriou; George, Tsaklidis; Vassilios, Karakostas

    2018-06-01

    The determination of strong earthquakes' recurrence time above a predefined magnitude, associated with specific fault segments, is an important component of seismic hazard assessment. The occurrence of these earthquakes is neither periodic nor completely random but often clustered in time. This fact, in connection with their limited number due to the short time span of available catalogs, inhibits a deterministic approach to recurrence time calculation, and for this reason application of stochastic processes is required. In this study, recurrence time determination in the area of the North Aegean Trough (NAT) is developed by the application of time-dependent stochastic models, introducing an elastic rebound motivated concept for individual fault segments located in the study area. For this purpose, all the available information on strong earthquakes (historical and instrumental) with Mw ≥ 6.5 is compiled and examined for magnitude completeness. Two possible starting dates of the catalog are assumed with the same magnitude threshold, Mw ≥ 6.5, and divided into five data sets according to a new segmentation model for the study area. Three Brownian Passage Time (BPT) models with different levels of aperiodicity are applied and evaluated with the Anderson-Darling test for each segment in both catalog versions where possible. The preferred models are then used to estimate the occurrence probabilities of Mw ≥ 6.5 shocks on each segment of the NAT for the next 10, 20, and 30 years since 01/01/2016. Uncertainties in the probability calculations are also estimated using a Monte Carlo procedure. It must be mentioned that the provided results should be treated carefully because of their dependence on the initial assumptions, which exhibit large variability; alternative choices may yield different final results.
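As a sketch of how a BPT (inverse-Gaussian) recurrence model turns an elapsed quiet period into a conditional occurrence probability, using illustrative values for the mean recurrence time and aperiodicity rather than the paper's estimates:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time density with mean recurrence mu and
    aperiodicity alpha (standard inverse-Gaussian parameterization)."""
    if t <= 0:
        return 0.0
    return math.sqrt(mu / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha, n=20_000):
    # simple trapezoidal integration of the density on (0, t]
    if t <= 0:
        return 0.0
    h = t / n
    s = 0.5 * (bpt_pdf(0.0, mu, alpha) + bpt_pdf(t, mu, alpha))
    for k in range(1, n):
        s += bpt_pdf(k * h, mu, alpha)
    return s * h

def conditional_prob(elapsed, window, mu, alpha):
    """P(event in next `window` years | quiet for `elapsed` years)."""
    F_e = bpt_cdf(elapsed, mu, alpha)
    F_w = bpt_cdf(elapsed + window, mu, alpha)
    return (F_w - F_e) / (1.0 - F_e)

# e.g. mean recurrence 150 yr, aperiodicity 0.5, quiet for 100 yr:
p30 = conditional_prob(100.0, 30.0, mu=150.0, alpha=0.5)
```

A closed-form CDF exists via the standard normal CDF, but numerical integration of the density keeps the sketch self-contained; the aperiodicity parameter controls how strongly the hazard concentrates around the mean recurrence time.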

  17. A century of induced earthquakes in Oklahoma?

    USGS Publications Warehouse

    Hough, Susan E.; Page, Morgan T.

    2015-01-01

    Seismicity rates have increased sharply since 2009 in the central and eastern United States, with especially high rates of activity in the state of Oklahoma. Growing evidence indicates that many of these events are induced, primarily by injection of wastewater in deep disposal wells. The upsurge in activity has raised two questions: What is the background rate of tectonic earthquakes in Oklahoma? How much has the rate varied throughout historical and early instrumental times? In this article, we show that (1) seismicity rates since 2009 surpass previously observed rates throughout the twentieth century; (2) several lines of evidence suggest that most of the significant earthquakes in Oklahoma during the twentieth century were likely induced by oil production activities, as they exhibit statistically significant temporal and spatial correspondence with disposal wells, and intensity measurements for the 1952 El Reno earthquake and possibly the 1956 Tulsa County earthquake follow the pattern observed in other induced earthquakes; and (3) there is evidence for a low level of tectonic seismicity in southeastern Oklahoma associated with the Ouachita structural belt. The 22 October 1882 Choctaw Nation earthquake, for which we estimate Mw 4.8, occurred in this zone.

  18. Analysis of the burns profile and the admission rate of severely burned adult patients to the National Burn Center of Chile after the 2010 earthquake.

    PubMed

    Albornoz, Claudia; Villegas, Jorge; Sylvester, Marilu; Peña, Veronica; Bravo, Iside

    2011-06-01

    Chile is located in the Ring of Fire, in South America. A magnitude 8.8 earthquake affected 80% of the population on February 27th, 2010. This study was conducted to assess any change in the burns profile caused by the earthquake. This was an ecologic study: we compared the 4 months following the earthquake date in 2009 and 2010 with respect to age, TBSA, deep TBSA, agent, specific mortality rate, and rate of admissions to the National Burn Center of Chile. The Mann-Whitney test and a Poisson regression were performed. Age, agent, TBSA and deep TBSA percentages did not show any difference. The mortality rate was lower in 2010 (0.52 versus 1.22 per 1,000,000 inhabitants), but the difference was not significant (Poisson regression, p = 0.06). The admission rate was also lower in 2010 (4.6 versus 5.6 per 1,000,000 inhabitants), but again no difference was found (p = 0.26). There were no admissions directly related to the earthquake. As we do not have incidence registries in Chile, we propose using the rate of admission to the National Burn Reference Center as an incidence estimator. There was no significant difference in the burn profile, probably because of the time of the earthquake (3 am). We conclude the earthquake did not affect the way the Chilean people get burned. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.

  19. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in the format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, use more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.
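The epidemic-type model family that ETES belongs to computes the seismicity rate as a background term plus an Omori-law contribution from every past event. The sketch below shows that general form with made-up parameter values; the calibrated ETES parameters are not given here.

```python
import math

def etas_rate(t, events, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.2, m0=4.0):
    """Epidemic-type seismicity rate: background rate mu plus Omori-law
    triggering from each past event (t_i, m_i), with productivity
    growing exponentially with magnitude above the reference m0."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

past = [(0.0, 5.5), (1.0, 4.2)]       # (time in days, magnitude)
r = etas_rate(2.0, past)              # elevated rate shortly after events
```

The triggered contribution decays as a power law in time since each event, which is why next-day forecasts are dominated by the most recent seismicity.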

  20. Modeling of sub-ionospheric VLF signal anomalies associated with precursory effects of the latest earthquakes in Nepal

    NASA Astrophysics Data System (ADS)

    Sasmal, Sudipta; Chakrabarti, Sandip Kumar; Palit, Sourav; Chakraborty, Suman; Ghosh, Soujan; Ray, Suman

    2016-07-01

    We present the nature of perturbations in the propagation characteristics of Very Low Frequency (VLF) signals received at the Ionospheric & Earthquake Research Centre (IERC) (Lat. 22.50°N, Long. 87.48°E) during and prior to the latest strong earthquakes in Nepal on 12 May 2015 at 12:50 pm local time (07:05 UTC), with a magnitude of 7.3 and depth of 18 km, southeast of Kodari. The VLF signal emitted from the JJI transmitter (22.2 kHz) in Japan (Lat. 32.08°N, Long. 130.83°E) shows strong shifts in sunrise and sunset terminator times towards nighttime beginning three to four days prior to the earthquake. The shift in terminator times is numerically simulated using the Long Wavelength Propagation Capability (LWPC) code. Electron density variation as a function of height is calculated for seismically quiet days using Wait's exponential profile, and it matches the IRI model. The perturbed electron density is calculated using the effective reflection height (h') and sharpness parameter (β), and the rate of ionization due to the earthquake is obtained from the equation of continuity for the ionospheric D-layer. We compute the ion production and recombination profiles during seismic and non-seismic conditions incorporating D-region ion chemistry processes and calculate the unperturbed and perturbed electron density profiles and ionization rates at different heights, which match the exponential profile. During the seismic condition, in both cases, the rate of ionization and the electron density profile differ significantly from the normal values. We interpret this to be due to seismo-ionospheric coupling processes.
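Wait's two-parameter profile referred to above is conventionally written Ne(h) = 1.43×10¹³ exp(−0.15 h′) exp[(β − 0.15)(h − h′)] m⁻³, with h and h′ in km and β in km⁻¹. A minimal sketch follows; the quiet-day and perturbed (h′, β) values are chosen purely for illustration and are not the paper's fitted parameters.

```python
import math

def wait_electron_density(h, h_prime=74.0, beta=0.3):
    """Wait two-parameter D-region profile: electron density (m^-3)
    at altitude h (km), given effective reflection height h' (km)
    and sharpness beta (km^-1)."""
    return 1.43e13 * math.exp(-0.15 * h_prime) * \
        math.exp((beta - 0.15) * (h - h_prime))

# quiet-day vs. hypothetical seismically perturbed parameters at 80 km
ne_quiet = wait_electron_density(80.0, h_prime=74.0, beta=0.30)
ne_perturbed = wait_electron_density(80.0, h_prime=72.0, beta=0.35)
```

Lowering h′ and sharpening β, as seismo-ionospheric perturbations are argued to do, raises the modeled electron density at a fixed altitude, which is what shifts the simulated terminator times.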

  1. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes

    PubMed Central

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities’ preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model’s algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to an occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. PMID:26959647
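One plausible way to fold odds-ratio effect measures into an existing casualty probability, in the spirit of the logistic-regression integration described above, is to multiply the baseline odds by each factor's odds ratio. The baseline probability and odds ratios below are hypothetical, and this is a sketch of the general technique rather than the study's actual algorithm.

```python
def adjusted_probability(p_base, odds_ratios):
    """Adjust a baseline casualty probability by multiplying its odds
    by the odds ratios of the risk factors present (logistic form)."""
    odds = p_base / (1.0 - p_base)
    for o in odds_ratios:
        odds *= o
    return odds / (1.0 + odds)

# hypothetical: 2% baseline risk, elderly (OR 1.8), low SES (OR 1.4)
p_adj = adjusted_probability(0.02, [1.8, 1.4])
```

Working on the odds scale keeps the adjusted value a valid probability no matter how many risk factors are stacked, which a naive multiplication of probabilities would not.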

  2. Mechanical deformation model of the western United States instantaneous strain-rate field

    USGS Publications Warehouse

    Pollitz, F.F.; Vergnolle, M.

    2006-01-01

    We present a relationship between the long-term fault slip rates and instantaneous velocities as measured by Global Positioning System (GPS) or other geodetic measurements over a short time span. The main elements are the secularly increasing forces imposed by the bounding Pacific and Juan de Fuca (JdF) plates on the North American plate, viscoelastic relaxation following selected large earthquakes occurring on faults that are locked during their respective interseismic periods, and steady slip along creeping portions of faults in the context of a thin-plate system. In detail, the physical model allows separate treatments of faults with known geometry and slip history, faults with incomplete characterization (i.e. fault geometry but not necessarily slip history is available), creeping faults, and dislocation sources distributed between the faults. We model the western United States strain-rate field, derived from 746 GPS velocity vectors, in order to test the importance of the relaxation from historic events and characterize the tectonic forces imposed by the bounding Pacific and JdF plates. Relaxation following major earthquakes (M ≥ 8.0) strongly shapes the present strain-rate field over most of the plate boundary zone. Equally important are lateral shear transmitted across the Pacific-North America plate boundary along ∼1000 km of the continental shelf, downdip forces distributed along the Cascadia subduction interface, and distributed slip in the lower lithosphere. Post-earthquake relaxation and tectonic forcing, combined with distributed deep slip, constructively interfere near the western margin of the plate boundary zone, producing locally large strain accumulation along the San Andreas fault (SAF) system. However, they destructively interfere further into the plate interior, resulting in smaller and more variable strain accumulation patterns in the eastern part of the plate boundary zone. Much of the right-lateral strain accumulation along the SAF

  3. A model of return intervals between earthquake events

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger

    2016-06-01

    Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of the southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical for anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to the correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
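The simple model in the last two sentences can be sketched directly: mainshocks from a renewal process with Pareto (power-law) waiting times, and aftershocks from a nonhomogeneous Poisson process with an Omori-law rate. The p = 1 Omori case is used because its inverse CDF has a closed form; all parameter values are invented for illustration.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler (adequate for small rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

def simulate_sequence(n_main=50, tail=1.5, t_scale=10.0,
                      k_aft=5.0, c=0.1, t_aft=30.0, seed=1):
    """Mainshocks: renewal process with Pareto waiting times.
    Aftershocks: nonhomogeneous Poisson process, Omori rate ~ 1/(t + c),
    sampled on [0, t_aft) by inverse-CDF."""
    rng = random.Random(seed)
    mains, afters = [], []
    t = 0.0
    for _ in range(n_main):
        u = 1.0 - rng.random()                        # u in (0, 1]
        t += t_scale * u ** (-1.0 / tail)             # Pareto waiting time
        mains.append(t)
        for _ in range(poisson_sample(k_aft, rng)):
            v = rng.random()
            dt = c * ((1.0 + t_aft / c) ** v - 1.0)   # Omori-law delay
            afters.append(t + dt)
    return mains, afters

mains, afters = simulate_sequence()
```

The heavy-tailed mainshock gaps and the tightly clustered aftershock delays are the two ingredients whose interplay, per the abstract, produces the anomalous-diffusion scaling.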

  4. Applying the natural disasters vulnerability evaluation model to the March 2011 north-east Japan earthquake and tsunami.

    PubMed

    Ruiz Estrada, Mario Arturo; Yap, Su Fei; Park, Donghyun

    2014-07-01

    Natural hazards have a potentially large impact on economic growth, but measuring their economic impact is subject to a great deal of uncertainty. The central objective of this paper is to demonstrate a model, the natural disasters vulnerability evaluation (NDVE) model, that can be used to evaluate the impact of natural hazards on gross national product growth. The model is based on five basic indicators: natural hazards growth rates (αi), the national natural hazards vulnerability rate (ΩT), the natural disaster devastation magnitude rate (Π), the economic desgrowth rate (i.e. shrinkage of the economy) (δ), and the NHV surface. In addition, we apply the NDVE model to the north-east Japan earthquake and tsunami of March 2011 to evaluate its impact on the Japanese economy. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.

  5. Stress triggering of the 1999 Hector Mine earthquake by transient deformation following the 1992 Landers earthquake

    USGS Publications Warehouse

    Pollitz, F.F.; Sacks, I.S.

    2002-01-01

    The M 7.3 June 28, 1992 Landers and M 7.1 October 16, 1999 Hector Mine earthquakes, California, both right lateral strike-slip events on NNW-trending subvertical faults, occurred in close proximity in space and time in a region where recurrence times for surface-rupturing earthquakes are thousands of years. This suggests a causal role for the Landers earthquake in triggering the Hector Mine earthquake. Previous modeling of the static stress change associated with the Landers earthquake shows that the area of peak Hector Mine slip lies where the Coulomb failure stress promoting right-lateral strike-slip failure was high, but the nucleation point of the Hector Mine rupture was neutrally to weakly promoted, depending on the assumed coefficient of friction. Possible explanations that could account for the 7-year delay between the two ruptures include background tectonic stressing, dissipation of fluid pressure gradients, rate- and state-dependent friction effects, and post-Landers viscoelastic relaxation of the lower crust and upper mantle. By employing a viscoelastic model calibrated by geodetic data collected during the time period between the Landers and Hector Mine events, we calculate that postseismic relaxation produced a transient increase in Coulomb failure stress of about 0.7 bars on the impending Hector Mine rupture surface. The increase is greatest over the broad surface that includes the 1999 nucleation point and the site of peak slip further north. Since stress changes of magnitude greater than or equal to 0.1 bar are associated with documented causal fault interactions elsewhere, viscoelastic relaxation likely contributed to the triggering of the Hector Mine earthquake. This interpretation relies on the assumption that the faults occupying the central Mojave Desert (i.e., both the Landers and Hector Mine rupturing faults) were critically stressed just prior to the Landers earthquake.
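The Coulomb failure stress change invoked above is conventionally ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the slip direction, Δσn the normal stress change (positive for unclamping), and μ′ the effective friction coefficient. A minimal sketch with hypothetical stress values:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change (bars): d_shear is the shear stress
    change in the slip direction, d_normal the normal stress change
    (positive = unclamping), mu_eff the effective friction coefficient."""
    return d_shear + mu_eff * d_normal

# hypothetical decomposition of a ~0.7 bar transient increase
dcfs = coulomb_stress_change(0.5, 0.5)   # 0.5 + 0.4 * 0.5 = 0.7 bars
significant = dcfs >= 0.1                # common triggering threshold
```

A positive ΔCFS moves a receiver fault toward failure, which is why the computed 0.7 bar transient, well above the ~0.1 bar threshold cited for documented fault interactions, supports a triggering role for postseismic relaxation.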

  6. Oklahoma experiences largest earthquake during ongoing regional wastewater injection hazard mitigation efforts

    USGS Publications Warehouse

    Yeck, William; Hayes, Gavin; McNamara, Daniel E.; Rubinstein, Justin L.; Barnhart, William; Earle, Paul; Benz, Harley M.

    2017-01-01

    The 3 September 2016, Mw 5.8 Pawnee earthquake was the largest recorded earthquake in the state of Oklahoma. Seismic and geodetic observations of the Pawnee sequence, including precise hypocenter locations and moment tensor modeling, show that the Pawnee earthquake occurred on a previously unknown left-lateral strike-slip basement fault that intersects the mapped right-lateral Labette fault zone. The Pawnee earthquake is part of an unprecedented increase in the earthquake rate in Oklahoma that is largely considered the result of the deep injection of waste fluids from oil and gas production. If this is, indeed, the case for the M5.8 Pawnee earthquake, then this would be the largest event to have been induced by fluid injection. Since 2015, Oklahoma has undergone wide-scale mitigation efforts primarily aimed at reducing injection volumes. Thus far in 2016, the rate of M3 and greater earthquakes has decreased as compared to 2015, while the cumulative moment—or energy released from earthquakes—has increased. This highlights the difficulty in earthquake hazard mitigation efforts given the poorly understood long-term diffusive effects of wastewater injection and their connection to seismicity.

  7. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

    The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
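As an illustration of Bayesian combination of independent earthquake reports (a simplified stand-in for the CDM, not its actual algorithm), Gaussian magnitude estimates from several algorithms can be fused by precision weighting:

```python
def fuse_estimates(estimates):
    """Combine independent Gaussian magnitude estimates (mean, sigma)
    by precision weighting; returns the fused mean and its sigma.
    This is the conjugate-Gaussian posterior under a flat prior."""
    weights = [1.0 / s**2 for _, s in estimates]
    mean = sum(m * w for (m, _), w in zip(estimates, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# hypothetical reports from three contributing algorithms:
reports = [(6.2, 0.4), (6.5, 0.3), (6.0, 0.5)]
m_fused, s_fused = fuse_estimates(reports)
```

The fused uncertainty is smaller than any single report's, which is the basic payoff of combining algorithms; a full CDM-style treatment would additionally weigh each report's plausibility against observed shaking.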

  8. REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, W R; Pasyanos, M E; Matzel, E

    2008-07-08

We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically, we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g., Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining whether there is any relationship between the observed P/S and the point source variability revealed by longer-period full waveform modeling (e.g., Ford et al., 2008). For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction; Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing
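A band-limited P/S amplitude-ratio discriminant of the kind described above can be sketched as follows. This is a toy illustration, not the MDAC implementation: the RMS amplitude measure, the log10 convention, and the zero threshold are our assumptions.

```python
import numpy as np

def ps_ratio_log10(p_window, s_window):
    """log10 ratio of RMS amplitudes in the P and S time windows,
    computed for one (already band-pass-filtered) frequency band."""
    p_rms = np.sqrt(np.mean(np.square(p_window)))
    s_rms = np.sqrt(np.mean(np.square(s_window)))
    return np.log10(p_rms / s_rms)

def classify(log_ratio, threshold=0.0):
    # Explosions are empirically P-rich relative to earthquakes;
    # the threshold here is a hypothetical placeholder.
    return "explosion-like" if log_ratio > threshold else "earthquake-like"
```

In practice the ratio is measured after path corrections (1-D MDAC or the 2-D tomography described above), since uncorrected path effects can push earthquakes across any fixed threshold.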

  9. Dynamic models of an earthquake and tsunami offshore Ventura, California

    USGS Publications Warehouse

    Kenny J. Ryan,; Geist, Eric L.; Barall, Michael; David D. Oglesby,

    2015-01-01

The Ventura basin in Southern California includes coastal dip-slip faults that can likely produce earthquakes of magnitude 7 or greater and significant local tsunamis. We construct a 3-D dynamic rupture model of an earthquake on the Pitas Point and Lower Red Mountain faults to model low-frequency ground motion and the resulting tsunami, with a goal of elucidating the seismic and tsunami hazard in this area. Our model results in an average stress drop of 6 MPa, an average fault slip of 7.4 m, and a moment magnitude of 7.7, consistent with regional paleoseismic data. Our corresponding tsunami model uses final seafloor displacement from the rupture model as initial conditions to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. Modeled inundation in the Ventura area is significantly greater than that indicated by the State of California's current reference inundation line.
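The reported moment magnitude of 7.7 for 7.4 m of average slip is consistent with the standard seismic-moment relation M0 = μAD; a sketch follows, where the rupture area and shear modulus are illustrative assumptions, not values from the paper.

```python
import math

MU = 3.0e10  # shear modulus in Pa (typical crustal value; an assumption)

def moment_magnitude(area_m2, avg_slip_m, mu=MU):
    """Seismic moment M0 = mu * A * D (N*m), converted to moment
    magnitude via Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = mu * area_m2 * avg_slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

With a hypothetical 100 km x 20 km rupture and 7.4 m of slip, `moment_magnitude(100e3 * 20e3, 7.4)` gives roughly Mw 7.7, matching the abstract's value.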

  10. Very shallow source of the October 2010 Mentawai tsunami earthquake from tsunami field data and high-rate GPS

    NASA Astrophysics Data System (ADS)

    Hill, E. M.; Qiu, Q.; Borrero, J. C.; Huang, Z.; Banerjee, P.; Elosegui, P.; Fritz, H. M.; Macpherson, K. A.; Li, L.; Sieh, K. E.

    2011-12-01

    "Tsunami earthquakes," which produce very large tsunamis compared to those expected from their magnitude, have long puzzled geoscientists, in part because only a handful have occurred within the time of modern instrumentation. The Mw 7.8 Mentawai earthquake of 25 October 2010, which occurred seaward of the southern Mentawai islands of Sumatra, was such an event. This earthquake triggered a very large tsunami, causing substantial damage and 509 casualties. Detailed field surveys we conducted immediately after the earthquake reveal maximum runup in excess of 16 m. The Sumatra GPS Array (SuGAr) recorded beautiful 1-sec data for this event at sites on the nearby islands, making this the first tsunami earthquake to be recorded by a dense, high-rate, and proximal GPS network, and giving us a unique opportunity to study these rare events from a new perspective. We estimate a maximum horizontal coseismic GPS displacement of 22 cm, at a site ~50 km from the epicenter. Vertical displacements show subsidence of the islands, but are on the order of only a few cm. Comparison of coseismic offsets from 1-sec and 24-hr GPS solutions indicates that rapid afterslip following the earthquake amounts to ~30% of the displacement estimated by the 24-hr solutions. The coseismic displacements are smaller than expected, and an unconstrained inversion of the GPS displacements indicates maximum fault slip of ~90 cm. Slip of this magnitude will produce maximum seafloor uplift of <15 cm, which is clearly not enough to produce tsunami runup of 16 m. However, investigation of the model resolution from GPS indicates that we are limited in our ability to resolve slip very close to the trench. We therefore deduce that to obtain the adequate level of slip and seafloor uplift to trigger the tsunami, the rupture must have occurred outside the resolution of the GPS network, i.e., at very shallow depths close to the trench. We therefore place prior slip constraints on the GPS inversion, based on
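A GPS slip inversion with prior (damping) constraints, of the general kind described above, reduces to regularized least squares; the following sketch uses a generic damping form, not the authors' exact parameterization, and the Green's function matrix G is assumed to be given.

```python
import numpy as np

def damped_lsq(G, d, damping=0.1):
    """Regularized slip inversion: m = argmin ||G m - d||^2 + damping^2 ||m||^2,
    solved via the normal equations (G^T G + damping^2 I) m = G^T d."""
    n = G.shape[1]
    A = G.T @ G + damping**2 * np.eye(n)
    return np.linalg.solve(A, G.T @ d)
```

Stronger damping biases slip toward zero where the data provide no resolution, which is why near-trench slip can be invisible to an island-based network and must instead be imposed as a prior constraint.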

  11. Modeling of the Nano- and Picoseismicity Rate Changes Resulting from Static Stress Triggering due to Small (MW2.2) Event Recorded at Mponeng Deep Gold Mine, South Africa

    NASA Astrophysics Data System (ADS)

    Kozlowska, M.; Orlecka-Sikora, B.; Kwiatek, G.; Boettcher, M. S.; Dresen, G. H.

    2014-12-01

Static stress changes following large earthquakes are known to affect the rate and spatio-temporal distribution of aftershocks. Here we utilize a unique dataset of M ≥ -3.4 earthquakes following a MW 2.2 earthquake in Mponeng gold mine, South Africa, to investigate this process for nano- and picoscale seismicity at centimeter length scales under shallow mining conditions. The aftershock sequence was recorded during a quiet interval in the mine and thus enabled us to perform the analysis using Dieterich's (1994) rate- and state-dependent friction law. The formulation for earthquake productivity requires estimation of the Coulomb stress changes due to the mainshock, the reference seismicity rate, the frictional resistance parameter, and the duration of the aftershock relaxation time. We divided the area into six depth intervals and for each we estimated the parameters and modeled the spatio-temporal patterns of seismicity rates after the stress perturbation. Comparing the modeled patterns of seismicity with the observed distribution, we found that while the spatial patterns match well, the rate of modeled aftershocks is lower than the observed rate. To test our model, we used four goodness-of-fit metrics. The testing procedure allowed us to reject the null hypothesis of no significant difference between seismicity rates only for the one depth interval containing the mainshock; for the others, no significant differences were found. Results show that mining-induced earthquakes may be followed by a stress relaxation expressed through aftershocks located on the rupture plane and in regions of positive Coulomb stress change. Furthermore, we demonstrate that the main features of the temporal and spatial distribution of very small, mining-induced earthquakes at shallow depths can be successfully determined using rate- and state-based stress modeling.
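The Dieterich (1994) formulation used above gives the seismicity-rate response to a step Coulomb stress change in closed form; a sketch follows, assuming a constant background stressing rate (symbol names are ours).

```python
import math

def seismicity_rate(t, r, dtau, a_sigma, t_a):
    """Dieterich (1994) rate response to a step stress change dtau:
    R(t) = r / ((exp(-dtau / a_sigma) - 1) * exp(-t / t_a) + 1),
    where r is the reference rate, a_sigma the frictional resistance
    parameter A*sigma, and t_a the aftershock relaxation time."""
    gamma = (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r / (gamma + 1.0)
```

At t = 0 the rate jumps by a factor exp(dtau / a_sigma) above the reference rate r, then decays back to r over the relaxation time t_a, which is the behavior fitted in each depth interval above.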

  12. The Global Earthquake Model and Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Smolka, A. J.

    2015-12-01

Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples of how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Current case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito, Ecuador. In agreement with GEM's collaborative approach, all

  13. Dynamics of folding: Impact of fault bend folds on earthquake cycles

    NASA Astrophysics Data System (ADS)

    Sathiakumar, S.; Barbot, S.; Hubbard, J.

    2017-12-01

Earthquakes in subduction zones and subaerial convergent margins are some of the largest in the world. So far, forecasts of future earthquakes have primarily relied on assessing past earthquakes to look for seismic gaps and slip deficits. However, the roles of fault geometry and off-fault plasticity are typically overlooked. We use structural geology (fault-bend folding theory) to inform fault modeling in order to better understand how deformation is accommodated on the geological time scale and through the earthquake cycle. Fault bends in megathrusts, like those proposed for the Nepal Himalaya, will induce folding of the upper plate. This introduces changes in the slip rate on different fault segments, and therefore in the loading rate at the plate interface, profoundly affecting the pattern of earthquake cycles. We develop numerical simulations of slip evolution under rate-and-state friction and show that this effect introduces segmentation of the earthquake cycle. In crustal dynamics, it is challenging to describe the dynamics of fault-bend folds, because the deformation is accommodated by small amounts of slip parallel to bedding planes ("flexural slip"), localized on axial surfaces, i.e., folding axes pinned to fault bends. We use dislocation theory to describe the dynamics of folding along these axial surfaces, using analytic solutions that provide displacement and stress kernels to simulate the temporal evolution of folding and assess the effects of folding on earthquake cycles. Studies of the 2015 Gorkha earthquake, Nepal, have shown that fault geometry can affect earthquake segmentation. Here, we show that in addition to the fault geometry, the actual geology of the rocks in the hanging wall of the fault also affects critical parameters, including the loading rate on parts of the fault, based on fault-bend folding theory.
Because loading velocity controls the recurrence time of earthquakes, these two effects together are likely to have a strong impact on the

  14. A Comparison of Moment Rates for the Eastern Mediterranean Region from Competitive Kinematic Models

    NASA Astrophysics Data System (ADS)

    Klein, E. C.; Ozeren, M. S.; Shen-Tu, B.; Galgana, G. A.

    2017-12-01

Relatively continuous, complex, and long-lived episodes of tectonic deformation gradually shaped the lithosphere of the eastern Mediterranean region into its present state. This large, geodynamically interconnected and seismically active region absorbs, accumulates and transmits strains arising from stresses associated with: (1) steady northward convergence of the Arabian and African plates; (2) differences in lithospheric gravitational potential energy; and (3) basal tractions exerted by subduction along the Hellenic and Cyprus Arcs. Over the last twenty years, numerous kinematic models have been built using a variety of assumptions to take advantage of the extensive and dense GPS observations made across the entire region, resulting in a far better characterization of the neotectonic deformation field than ever previously achieved. In this study, three separate horizontal strain rate field solutions obtained from three region-wide, GPS-based kinematic models (i.e., a regional block model, a regional continuum model, and a global continuum model) are utilized to estimate the distribution and uncertainty of geodetic moment rates within the eastern Mediterranean region. The geodetic moment rates from each model are also compared with seismic moment release rates gleaned from historic earthquake data. Moreover, kinematic styles of deformation derived from each of the modeled horizontal strain rate fields are examined for their degree of correlation with earthquake rupture styles defined by proximal centroid moment tensor solutions. This study suggests that significant differences in geodetically obtained moment rates from competitive kinematic models may introduce unforeseen bias into regularly updated, geodetically constrained, regional seismic hazard assessments.
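A Kostrov-style conversion from a horizontal strain-rate field to a scalar geodetic moment rate, as compared across models above, can be sketched as follows. The shear modulus, seismogenic thickness, and max-eigenvalue convention here are illustrative assumptions; several conventions exist and the choice itself contributes to the inter-model differences discussed.

```python
import numpy as np

MU = 3.0e10   # shear modulus, Pa (typical crustal value; assumption)
H = 15.0e3    # seismogenic thickness, m (assumption)

def geodetic_moment_rate(strain_rate_tensor, area_m2, mu=MU, h=H):
    """Scalar moment rate M0_dot = 2 * mu * H * A * max|eigenvalue| of the
    2x2 horizontal strain-rate tensor (one common Kostrov-style convention)."""
    eigvals = np.linalg.eigvalsh(strain_rate_tensor)
    return 2.0 * mu * h * area_m2 * np.max(np.abs(eigvals))
```

Summing this cell-by-cell over a gridded strain-rate field gives the regional geodetic moment rate that is then compared against the seismic moment release rate from the historical catalog.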

  15. Depth variations of friction rate parameter derived from dynamic modeling of GPS afterslip associated with the 2003 Mw 6.5 Chengkung earthquake in eastern Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, J. C.; Liu, Z. Y. C.; Shirzaei, M.

    2016-12-01

The Chihshang fault lies at the plate suture between the Eurasian and Philippine Sea plates along the Longitudinal Valley in eastern Taiwan. Here we investigate the depth variation of fault frictional parameters derived from the post-seismic slip model of the 2003 Mw 6.5 Chengkung earthquake. Assuming rate-strengthening friction, we implement an inverse dynamic modeling scheme to estimate the frictional parameter (a-b) and reference friction coefficient (μ*) with depth, taking into account pre-seismic stress as well as co-seismic and post-seismic Coulomb stress changes associated with the 2003 Chengkung earthquake. We investigate two coseismic models, by Hsu et al. (2009) and Thomas et al. (2014). Model parameters, including the stress gradient and depth-dependent a-b and μ*, are determined by fitting the transient post-seismic geodetic signal measured at 12 continuous GPS stations. In our inversion scheme, we apply a non-linear optimization algorithm, a Genetic Algorithm (GA), to search for the optimum frictional parameters. For the velocity-strengthening zone along the Chihshang fault, the optimum a-b is 7-8 × 10-3 along the shallow part of the fault (0-10 km depth) and 1-2 × 10-2 at 22-28 km depth. The optimum μ* is 0.3-0.4 at 0-10 km depth and reaches 0.8 at 22-28 km depth. The optimized stress gradient is 54 MPa/km. The inferred frictional parameters are consistent with laboratory measurements on clay-rich fault zone gouges comparable to the Lichi Melange, the main rock composition of at least the upper few kilometers of the Chihshang fault, which is thrust over Holocene alluvial deposits. Our results can facilitate further studies, in particular on the seismic cycle and hazard assessment of active faults.
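For a rate-strengthening fault like the one modeled above, the steady-state slip-velocity response to a coseismic stress step has a simple exponential form; the sketch below uses that steady-state approximation with an assumed reference velocity, not the authors' full inverse scheme.

```python
import math

def afterslip_velocity(dtau, a_minus_b, sigma_eff, v_ref=1e-9):
    """Steady-state rate-strengthening response to a stress step dtau (Pa):
    v = v_ref * exp(dtau / ((a-b) * sigma_eff)), with (a-b) dimensionless,
    sigma_eff the effective normal stress (Pa), and v_ref an assumed
    reference (interseismic) slip velocity in m/s."""
    return v_ref * math.exp(dtau / (a_minus_b * sigma_eff))
```

With the paper's shallow value a-b ≈ 7 × 10-3 and an assumed effective normal stress of 100 MPa, a 0.7 MPa coseismic stress step raises the slip rate by a factor of e, which is the kind of sensitivity the GA inversion exploits when fitting the GPS afterslip transients.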

  16. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2011-12-01

The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for the sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, had been submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity changed dramatically after the 2011 event, model performance was strongly affected. In addition, because of problems with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  17. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2013-12-01

The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for the sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, had been submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity changed dramatically after the 2011 event, model performance was strongly affected. In addition, because of problems with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  18. Improving vulnerability models: lessons learned from a comparison between flood and earthquake assessments

    NASA Astrophysics Data System (ADS)

    de Ruiter, Marleen; Ward, Philip; Daniell, James; Aerts, Jeroen

    2017-04-01

In a cross-discipline study, an extensive literature review was conducted to increase the understanding of vulnerability indicators used in both earthquake and flood vulnerability assessments, and to provide insights into potential improvements of both. It identifies and compares indicators used to quantitatively assess earthquake and flood vulnerability, and discusses their respective differences and similarities. Indicators have been categorized into physical and social categories, and further subdivided (where possible) into measurable and comparable indicators. Physical vulnerability indicators have been differentiated according to exposed assets such as buildings and infrastructure. Social indicators are grouped into subcategories such as demographics, economics and awareness. Next, two different vulnerability model types that use these indicators are described: index-based and curve-based vulnerability models. A selection of these models (e.g. HAZUS) is described and compared on several characteristics, such as temporal and spatial aspects. It appears that earthquake vulnerability methods are traditionally strongly developed towards physical attributes at an object scale and used in vulnerability curve models, whereas flood vulnerability studies focus more on indicators applied at aggregated land-use scales. Flood risk studies could be improved using approaches from earthquake studies, such as incorporating more detailed lifeline and building indicators, and developing object-based vulnerability curve assessments of physical vulnerability, for example by defining building-material-based flood vulnerability curves. Related to this is the incorporation of time-of-day-based building occupation patterns (at 2 a.m. most people will be at home, while at 2 p.m. most people will be in the office).
Earthquake assessments could learn from flood studies when it comes to the refined selection of social vulnerability indicators

  19. Sensitivity analysis of the FEMA HAZUS-MH MR4 Earthquake Model using seismic events affecting King County Washington

    NASA Astrophysics Data System (ADS)

    Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.

    2010-12-01

HAZUS-MH MR4 (HAZards U. S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma and 2001 Mw 6.8 Nisqually normal fault intraslab events and scenario large-magnitude Seattle reverse fault crustal events are modeled. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships developed from source parameters of both regional and global historical earthquakes to estimate strong ground motion. Ground motion and resulting ground failure due to earthquakes are then used to calculate direct physical damage for general building stock, essential facilities, and lifelines, including transportation systems and utility systems. Earthquake losses are expressed in structural, economic and social terms.
Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region

  20. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    NASA Astrophysics Data System (ADS)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from data of the telemetry network in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimum-velocity models from the travel times of seismic waves.
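The forward problem behind VELEST-style 1-D model estimation is travel-time computation through a stack of constant-velocity layers; the simplest (vertical-incidence) piece can be sketched as follows. This is a toy illustration: VELEST itself traces direct and refracted rays and jointly relocates hypocenters.

```python
def vertical_travel_time(thicknesses_km, velocities_km_s):
    """One-way vertical P travel time (s) through a stack of
    constant-velocity layers, t = sum(h_i / v_i)."""
    return sum(h / v for h, v in zip(thicknesses_km, velocities_km_s))
```

Inverting observed travel times for the layer velocities (and station corrections) is what turns this forward model into the "minimum 1-D velocity model" the abstract describes.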

  1. Transportations Systems Modeling and Applications in Earthquake Engineering

    DTIC Science & Technology

    2010-07-01

Figure 6: PGA map of a M7.7 earthquake on all three New Madrid fault segments (g). … Memphis, Tennessee. The NMSZ was responsible for the devastating 1811-1812 New Madrid earthquakes, the largest earthquakes ever recorded in the … Table 1: Fragility parameters for MSC steel bridge (Padgett 2007)

  2. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and a simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insights into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly guided by the approach "the more physics, the better," pushing seismologists toward ever more Earth-like simulators. However, despite the immediate attractiveness, we argue that this kind of approach makes it increasingly difficult to understand which physical parameters are really relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
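A minimal simulator in the spirit described, with only linear tectonic loading, a failure threshold, and nearest-neighbor stress transfer standing in for the Coulomb Failure Function, might look like this. All parameter values and the interaction kernel are illustrative assumptions, not the authors' model.

```python
def run_simulator(n_faults, steps, load_rate=1.0, strength=10.0,
                  drop=10.0, coupling=0.5, init=None):
    """Toy elastic-rebound simulator: each fault loads linearly, fails when
    its stress reaches `strength`, drops by `drop`, and transfers
    coupling * drop to its nearest neighbors (a crude stand-in for a
    Coulomb interaction kernel). Returns a catalog of (time, fault) events."""
    stress = list(init) if init else [0.0] * n_faults
    catalog = []
    for t in range(steps):
        stress = [s + load_rate for s in stress]          # tectonic loading
        for i in range(n_faults):
            if stress[i] >= strength:                      # elastic rebound
                stress[i] -= drop
                catalog.append((t, i))
                for j in (i - 1, i + 1):                   # stress transfer
                    if 0 <= j < n_faults:
                        stress[j] += coupling * drop
    return catalog
```

Even this caricature produces clustering: a failure advances its neighbors toward failure, so events bunch in time, which is the kind of catalog feature the minimal approach tries to isolate.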

  3. Changes in suicide rates in disaster-stricken areas following the Great East Japan Earthquake and their effect on economic factors: an ecological study.

    PubMed

    Orui, Masatsugu; Harada, Shuichiro; Hayashi, Mizuho

    2014-11-01

Devastating disasters may increase suicide rates due to mental distress. Previous domestic studies have reported decreased suicide rates among men following disasters. Few reports are available regarding factors associated with disasters, making it difficult to discuss how these events affect suicide rates. This study aimed to observe changes in suicide rates in disaster-stricken and neighboring areas following the Great East Japan Earthquake, and to examine associations between suicide rates and economic factors. Monthly suicide rates were observed from March 2009 to February 2013; the earthquake occurred in March 2011. Data were included from disaster-stricken (Iwate, Miyagi, and Fukushima Prefectures) and neighboring (control: Aomori, Akita, and Yamagata Prefectures) areas. The association between changes in suicide rates and economic variables was evaluated based on the number of bankruptcy cases and the ratio of effective job offers. In disaster-stricken areas, post-disaster male suicide rates decreased during the 24 months following the earthquake. This trend differed relative to control areas. Female suicide rates increased during the first seven months. Multiple regression analysis showed that bankruptcy cases (β = 0.386, p = 0.038) and the ratio of effective job offers (β = -0.445, p = 0.018) were significantly associated with male post-disaster suicide rates only in control areas. Post-disaster suicide rates differed by gender following the earthquake. Our findings suggest that considering gender differences might be important for developing future post-disaster suicide prevention measures. This ecological study revealed that increasing effective job offers and decreasing bankruptcy cases can have a protective effect on male suicide rates in control areas.
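The multiple-regression step described above (monthly suicide rate regressed on bankruptcy cases and the effective job-offer ratio) is ordinary least squares; a generic sketch follows. Variable names are ours and the data in the usage note are synthetic, not the study's.

```python
import numpy as np

def ols_coefficients(X, y):
    """Ordinary least squares for y ~ intercept + X @ beta.
    X: (n_samples, n_predictors) design matrix without the intercept column.
    Returns [intercept, beta_1, ..., beta_k]."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

For the study's model, X would hold the two monthly economic predictors (bankruptcy cases, job-offer ratio) and y the monthly post-disaster suicide rate; the reported standardized betas additionally require standardizing each column first.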

  4. Aseismic and seismic slip induced by fluid injection from poroelastic and rate-state friction modeling

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Deng, K.; Harrington, R. M.; Clerc, F.

    2016-12-01

Solid matrix stress changes and pore pressure diffusion caused by fluid injection have been postulated as key factors for inducing earthquakes and aseismic slip on pre-existing faults. In this study, we have developed a numerical model that simulates aseismic and seismic slip in a rate-and-state friction framework with poroelastic stress perturbations from multi-stage hydraulic fracturing scenarios. We apply the physics-based model to the 2013-2015 earthquake sequences near Fox Creek, Alberta, Canada, where three magnitude 4.5 earthquakes were potentially induced by nearby hydraulic fracturing activity. In particular, we use the relocated December 2013 seismicity sequence to approximate the fault orientation, and find that the seismicity migration correlates spatiotemporally with the positive Coulomb stress changes calculated from the poroelastic model. When the poroelastic stress changes are introduced into the rate-state friction model, we find that slip on the fault evolves from aseismic to seismic in a manner similar to the onset of seismicity. For a 15-stage hydraulic fracturing operation that lasted 10 days, the modeled fault slip rate starts to accelerate after 3 days of fracturing and rapidly develops into a seismic event, which also temporally coincides with the onset of induced seismicity. The poroelastic stress perturbation, and consequently the fault slip rate, continues to evolve and remains high for several weeks after hydraulic fracturing has stopped, which may explain the continued seismicity after shut-in. In a comparison numerical experiment, the fault slip rate quickly decreases to the interseismic level when stress perturbations are instantaneously returned to zero at shut-in. Furthermore, when stress perturbations are removed just a few hours after the fault slip rate starts to accelerate (that is, hydraulic fracturing is shut down prematurely), only aseismic slip is observed in the model. Our preliminary results thus suggest the design of fracturing duration and flow

  5. Holocene slip rates along the San Andreas Fault System in the San Gorgonio Pass and implications for large earthquakes in southern California

    NASA Astrophysics Data System (ADS)

    Heermance, Richard V.; Yule, Doug

    2017-06-01

    The San Gorgonio Pass (SGP) in southern California contains a 40 km long region of structural complexity where the San Andreas Fault (SAF) bifurcates into a series of oblique-slip faults with unknown slip history. We combine new 10Be exposure ages (Qt4: 8600 (+2100, -2200) and Qt3: 5700 (+1400, -1900) years B.P.) and a radiocarbon age (1260 ± 60 years B.P.) from late Holocene terraces with scarp displacements of these surfaces to document a Holocene slip rate of 5.7 (+2.7, -1.5) mm/yr combined across two faults. Our preferred slip rate is 37-49% of the average slip rates along the SAF outside the SGP (i.e., the Coachella Valley and San Bernardino sections) and implies that strain is transferred off the SAF in this area. Earthquakes here most likely occur as very large, throughgoing SAF events at a lower recurrence rate than elsewhere on the SAF, so that only approximately one third of SAF ruptures penetrate or originate in the pass.

    Plain Language Summary: How large are earthquakes on the southern San Andreas Fault? The answer depends on whether the earthquake is contained along individual fault sections, such as the Coachella Valley section north of Palm Springs, or the rupture crosses multiple sections, including the area through the San Gorgonio Pass. We have determined the age and offset of faulted stream deposits within the San Gorgonio Pass to document slip rates of these faults over the last 10,000 years. Our results indicate a long-term slip rate of 6 mm/yr, which is almost half of the rates east and west of this area. These new rates, combined with faulted geomorphic surfaces, imply that large magnitude earthquakes must occasionally rupture a 300 km length of the San Andreas Fault from the Salton Sea to the Mojave Desert.
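The slip-rate arithmetic behind such estimates is simply dated offset divided by surface age. A sketch using the quoted Qt4 age and a hypothetical total offset chosen only to reproduce the quoted 5.7 mm/yr (the actual offset measurement is not given in this record):

```python
# Back-of-the-envelope terrace slip rate: rate = displacement / age.
# offset_m is an illustrative value, not a measurement reported here.
offset_m = 49.0        # hypothetical total scarp offset (m)
age_yr = 8600.0        # 10Be exposure age of terrace Qt4 (years B.P.)
rate_mm_yr = offset_m * 1000.0 / age_yr
print(round(rate_mm_yr, 1))
```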
    Although many (~65%) earthquakes along the southern San Andreas Fault likely do not rupture through the pass, our new results suggest that large (>Mw 7.5) earthquakes are possible.

  6. Geodetic slip rate for the eastern California shear zone and the recurrence time of Mojave desert earthquakes

    USGS Publications Warehouse

    Sauber, J.; Thatcher, W.; Solomon, S.C.; Lisowski, M.

    1994-01-01

    Where the San Andreas fault passes along the southwestern margin of the Mojave desert, it exhibits a large change in trend, and the deformation associated with the Pacific/North American plate boundary is distributed broadly over a complex shear zone. The importance of understanding the partitioning of strain across this region, especially to the east of the Mojave segment of the San Andreas in a region known as the eastern California shear zone (ECSZ), was highlighted by the occurrence (on 28 June 1992) of the magnitude 7.3 Landers earthquake in this zone. Here we use geodetic observations in the central Mojave desert to obtain new estimates for the rate and distribution of strain across a segment of the ECSZ, and to determine a coseismic strain drop of ~770 µrad for the Landers earthquake. From these results we infer a strain energy recharge time of 3,500-5,000 yr for a Landers-type earthquake and a slip rate of ~12 mm/yr across the faults of the central Mojave.
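The recharge-time logic above is the time needed to reaccumulate the coseismic strain drop at the interseismic straining rate, T = Δγ / (dγ/dt). A sketch with the quoted ~770 µrad strain drop and an assumed strain rate chosen to fall inside the quoted 3,500-5,000 yr bracket (the paper's actual rate estimate is not reproduced here):

```python
# Strain-energy recharge time for a Landers-type event.
# strain_rate is a hypothetical value, not a number from the paper.
strain_drop = 770e-6        # coseismic strain drop (~770 microradians)
strain_rate = 0.18e-6       # assumed interseismic shear strain rate (rad/yr)
recharge_yr = strain_drop / strain_rate
print(round(recharge_yr))   # years to reaccumulate the released strain
```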
    The latter estimate implies that a greater fraction of plate motion than heretofore inferred from geodetic data is accommodated across the ECSZ.

  7. Aseismic Slip Throughout the Earthquake Cycle in Nicoya Peninsula, Costa Rica

    NASA Astrophysics Data System (ADS)

    Voss, N. K.; Liu, Z.; Hobbs, T. E.; Schwartz, S. Y.; Malservisi, R.; Dixon, T. H.; Protti, M.

    2017-12-01

    Geodetically resolved slow slip events (SSEs), a large M7.6 earthquake, and afterslip have all been documented in the last 16 years of observation in Nicoya, Costa Rica. We present a synthesis of the observed aseismic slip behavior. SSEs in Nicoya are observed both during the late interseismic period and during the post-seismic period, despite ongoing post-seismic phenomena. While recurrence rates appear unchanged by position within the earthquake cycle, SSE behavior does vary before and after the event. We discuss how afterslip may be responsible for this change in behavior. We also present observations of a pre-earthquake transient that started 6 months prior to the M7.6 megathrust earthquake. This earthquake took place within an asperity surrounded by regions that previously underwent slow slip. We compare how this pre-earthquake transient, modeled as aseismic slip, differs from observations of typical Nicoya SSEs.
    Finally, we attempt to explain the segmentation of behaviors in Costa Rica with a simple frictional model.

  8. A New Global Geodetic Strain Rate Model

    NASA Astrophysics Data System (ADS)

    Kreemer, C. W.; Klein, E. C.; Blewitt, G.; Shen, Z.; Wang, M.; Chamot-Rooke, N. R.; Rabaute, A.

    2012-12-01

    As part of the Global Earthquake Model (GEM) effort to improve global seismic hazard models, we present a new global geodetic strain rate model. This model (GSRM v. 2) is a vast improvement over the previous model from 2004 (v. 1.2). The model is still based on a finite-element-type approach and has deforming cells in between the assumed rigid plates. While v. 1.2 contained ~25,000 deforming cells of 0.6° by 0.5° dimension, the new model contains >136,000 cells of 0.25° by 0.2° dimension. We redefined the geometries of the deforming zones based on the definitions of Bird (2003) and Chamot-Rooke and Rabaute (2006). We made some adjustments to the grid geometry at places where seismicity and/or GPS velocities suggested the presence of deforming areas where those previous studies did not. As a result, some plates/blocks identified by Bird (2003) are assumed here to deform, and the total number of plates and blocks in GSRM v. 2 is 38 (including the Bering block, which Bird (2003) did not consider). GSRM v. 1.2 was based on ~5,200 GPS velocities, taken from 86 studies.
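Models like GSRM estimate rigid-plate angular velocities from GPS site velocities through the linear relation v = ω × r, which can be solved by least squares. A toy sketch with synthetic stations and an invented rotation vector (not GSRM data or code):

```python
import numpy as np

# Estimate a rigid-plate angular velocity ω from site velocities v = ω × r
# (geocentric Cartesian coordinates). Stations and "true" ω are synthetic.
R = 6.371e6                                      # Earth radius, m
rng = np.random.default_rng(1)
lat = np.radians(rng.uniform(-60, 60, 10))
lon = np.radians(rng.uniform(0, 360, 10))
r = R * np.column_stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])
omega_true = np.array([0.1e-9, -0.3e-9, 0.5e-9])  # rad/yr
v = np.cross(omega_true, r)                       # noise-free velocities, m/yr

# v = ω × r is linear in ω: each site contributes a 3x3 block M with M ω = ω × r.
def cross_matrix(p):
    x, y, z = p
    return np.array([[0, z, -y], [-z, 0, x], [y, -x, 0]])

A = np.vstack([cross_matrix(p) for p in r])
omega_est, *_ = np.linalg.lstsq(A, v.ravel(), rcond=None)
print(np.allclose(omega_est, omega_true, rtol=1e-6, atol=1e-15))
```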
    The new model is based on ~17,000 GPS velocities, taken from 170 studies. The GPS velocity field consists of 1) ~4,900 velocities derived by us for GPS stations with publicly available RINEX data and >3.5 years of data, 2) ~1,200 velocities for China from a new analysis of all CMONOC data, and 3) velocities published in the literature or made otherwise available to us. All studies were combined into the same reference frame by a 6-parameter transformation using velocities at collocated stations. Because the goal of the project is to model the interseismic strain rate field, we model co-seismic jumps while estimating velocities, ignore periods of post-seismic deformation, and exclude time series that reflect magmatic and anthropogenic activity. GPS velocities were used to estimate angular velocities for most of the 38 rigid plates and blocks (the rest being taken from the literature), and these were used as boundary

  9. Coseismic source model of the 2003 Mw 6.8 Chengkung earthquake, Taiwan, determined from GPS measurements

    USGS Publications Warehouse

    Ching, K.-E.; Rau, R.-J.; Zeng, Y.

    2007-01-01

    A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace.
    Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting the GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth, with a peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture areas and average slips of slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower-than-average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment.
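As a rough consistency check, the reported slip patch implies a moment magnitude near Mw 6.8 via M0 = μAD and Mw = (2/3)(log10 M0 − 9.1). The rigidity and average slip below are assumed round numbers, not values inverted in the study:

```python
import math

# Seismic moment and moment magnitude implied by the reported slip patch.
mu = 30e9                   # rigidity, Pa (assumed)
area = 44e3 * 14e3          # slip patch area, m^2 (44 km x 14 km)
avg_slip = 1.0              # assumed average slip, m (reported peak: 1.266 m)
M0 = mu * area * avg_slip                      # seismic moment, N m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)      # moment magnitude
print(round(Mw, 2))
```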
    Copyright 2007 by the American Geophysical Union.

  10. A Fluid-driven Earthquake Cycle, Omori's Law, and Fluid-driven Aftershocks

    NASA Astrophysics Data System (ADS)

    Miller, S. A.

    2015-12-01

    Few models exist that predict the Omori's Law decay of aftershock rates, with rate-and-state friction being the only physically based model. ETAS is a probabilistic model of cascading failures, and is sometimes used to infer rate-and-state frictional properties. However, the (perhaps dominant) role of fluids in the earthquake process is being increasingly realised, so a fluid-based physical model for Omori's Law should be available. In this talk, I present a hypothesis for a fluid-driven earthquake cycle in which dehydration and decarbonation at depth provide continuous sources of buoyant high-pressure fluids that must eventually make their way back to the surface. The natural pathway for fluid escape is along plate boundaries, where in the ductile regime high-pressure fluids likely play an integral role in episodic tremor and slow slip earthquakes. At shallower levels, high-pressure fluids pool at the base of seismogenic zones, with the reservoir expanding in scale through the earthquake cycle. Late in the cycle, these fluids can invade and degrade the strength of the brittle crust and contribute to earthquake nucleation.
    The mainshock opens permeable networks that provide escape pathways for high-pressure fluids and generate aftershocks along these flow paths, while the aftershocks themselves create new pathways. Thermally activated precipitation then seals these pathways, returning the system to a low-permeability environment and an effective seal during the subsequent tectonic stress buildup. I find that the multiplicative effect of an exponential dependence of permeability on effective normal stress, coupled with an Arrhenius-type, thermally activated exponential reduction in permeability, results in Omori's Law. I simulate this scenario using a very simple model that combines non-linear diffusion with a step-wise increase in permeability when a Mohr-Coulomb failure condition is met, and allow permeability to decrease as an exponential function of time. I show very

  11. Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults

    PubMed

    McGuire, Jeffrey J.; Boettcher, Margaret S.; Jordan, Thomas H.

    2005-03-24

    East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip, and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults.
    The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion.

  12. Models of recurrent strike-slip earthquake cycles and the state of crustal stress

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.

    1991-01-01

    Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent.
    Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.

  13. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km.
    Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back-projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back-projection method to teleseismic P waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes.
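The back-projection idea, stacking station waveforms at travel-time-shifted samples over a grid of candidate source points, can be sketched on synthetic data. Everything below (station geometry, velocity model, waveforms) is invented for illustration, not the study's teleseismic setup:

```python
import numpy as np

# Toy 2-D back-projection: the stack of travel-time-aligned records
# peaks at the true source location.
v = 6.0                                          # km/s, uniform velocity
rng = np.random.default_rng(0)
stations = rng.uniform(-100, 100, size=(8, 2))   # 8 stations, x-y in km
src = np.array([20.0, -30.0])                    # true source (on the grid)
t = np.arange(0, 60, 0.05)                       # time axis, s
t0 = 5.0                                         # origin time, s

def travel_time(p, s):
    return np.linalg.norm(p - s) / v

# Each station records a Gaussian pulse at its predicted arrival time.
records = [np.exp(-0.5 * ((t - (t0 + travel_time(src, s))) / 0.2) ** 2)
           for s in stations]

# Back-project: shift each record back by the node's travel time and stack.
xs = np.arange(-50.0, 51.0, 2.0)
ys = np.arange(-50.0, 51.0, 2.0)
power = np.zeros((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        node = np.array([x, y])
        stack = np.zeros_like(t)
        for s, rec in zip(stations, records):
            shift = travel_time(node, s)
            stack += np.interp(t + shift, t, rec, left=0.0, right=0.0)
        power[i, j] = stack.max()

best = np.unravel_index(power.argmax(), power.shape)
print(xs[best[0]], ys[best[1]])   # should land near the true source
```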
    By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth

  14. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal, as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having a shear stress change |Δτ| ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ~7 and 11 years after the main shock.
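The Omori-law decay referred to here is the rate form n(t) = K / (t + c)^p; the decay "lasts" until the triggered rate falls back to background. A sketch with invented K, c, p, and background values (chosen only so the crossing time lands inside the quoted ~7-11 yr range):

```python
# Omori-law aftershock rate and the time at which it reaches background.
# All parameter values are hypothetical, not fitted to the CMT catalog.
def omori_rate(t, K=10.0, c=0.01, p=1.0):
    """Triggered-event rate (events/yr) t years after the mainshock."""
    return K / (t + c) ** p

background = 1.2    # background rate, events/yr (hypothetical)
t = 0.0
while omori_rate(t) > background:
    t += 0.01
print(round(t, 2))  # years until the decay reaches background
```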
    Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake, where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  15. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute, the University of Tokyo, joined CSEP and started the Japanese testing center known as CSEP-Japan.
    This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions: an area of Japan including the sea, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented in total. These results provide new knowledge concerning statistical forecasting models. We have started a study on constructing a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments, under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity of the area ranges from the shallow part down to a depth of 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop models for forecasting based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HISTETAS models (Ogata, 2011) to determine whether those models perform as well as they did in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation).
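CSEP-style gridded tests score a forecast by the joint Poisson log-likelihood of the observed event counts under the forecast rates (higher is better when comparing models). A minimal sketch with invented rates and counts, not an actual CSEP test suite:

```python
import math

# Joint Poisson log-likelihood of observed counts given forecast rates,
# the core score behind CSEP-style L-tests. Numbers are invented.
forecast = [0.5, 1.2, 0.1, 2.0]   # expected counts per space-magnitude cell
observed = [1, 1, 0, 3]           # observed earthquakes per cell

def poisson_loglik(rates, counts):
    ll = 0.0
    for lam, n in zip(rates, counts):
        # log P(N = n | lam) = -lam + n log(lam) - log(n!)
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll

print(round(poisson_loglik(forecast, observed), 3))
```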
    We use CSEP

  16. Very High-rate (50 Hz) GPS for Detection of Earthquake Ground Motions: How High Do We Need to Go?

    NASA Astrophysics Data System (ADS)

    Fang, R.

    2017-12-01

    The GPS variometric approach can measure displacements using broadcast ephemeris and a single receiver, with precision comparable to relative positioning and PPP within a short period of time. We evaluate the performance of the variometric approach in measuring displacements using very high-rate (50 Hz) GPS data recorded during the 2013 Mw 6.6 Lushan earthquake and the 2011 Mw 9.0 Tohoku-Oki earthquake. To remove the nonlinear drift due to the integration process, we apply a high-pass filter to the displacements reconstructed with the variometric approach. Comparison between 50 Hz and 1 Hz coseismic displacements demonstrates that 1 Hz solutions often fail to faithfully capture seismic waves containing high-frequency (>0.5 Hz) signals, which is common for near-field stations during a moderate-magnitude earthquake. Therefore, in order to reconstruct near-field seismic waves caused by moderate or large earthquakes, it is helpful to equip monitoring stations with very high-rate GPS receivers. Results derived using the variometric approach are compared with PPP results. They display very good consistency, within only a few millimeters, in both static and seismic periods.
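The drift problem described above is easy to reproduce: integrating epoch-to-epoch velocities accumulates any small velocity bias into a growing displacement error, which a high-pass filter suppresses. A toy numerical sketch (the signal, bias, and filter corner are invented; this is not the study's processing chain):

```python
import numpy as np

# Integrating biased velocities drifts; a first-order high-pass filter
# removes the drift while passing the ~1 Hz seismic signal.
fs = 50.0                                   # 50 Hz sampling
t = np.arange(0, 60, 1 / fs)
true_disp = 0.05 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-t / 20)   # m
velocity = np.gradient(true_disp, 1 / fs) + 2e-4   # small bias, m/s
raw = np.cumsum(velocity) / fs              # integrated displacement drifts

def highpass(x, fc, fs):
    """First-order RC high-pass filter."""
    dt = 1 / fs
    rc = 1 / (2 * np.pi * fc)
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

filtered = highpass(raw, 0.02, fs)          # 0.02 Hz corner (assumed)
drift_raw = abs(raw[-1] - true_disp[-1])
drift_filt = abs(filtered[-1] - true_disp[-1])
print(drift_raw > drift_filt)
```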
    High-frequency (above 10 Hz) noise in the displacements derived using the variometric approach is smaller than in the PPP displacements in all three components.

  17. Kinetic effect of heating rate on the thermal maturity of carbonaceous material as an indicator of frictional heat during earthquakes

    NASA Astrophysics Data System (ADS)

    Kaneki, Shunya; Hirono, Tetsuro

    2018-06-01

    Because the maximum temperature reached in the slip zone is significant information for understanding slip behaviors during an earthquake, the maturity of carbonaceous material (CM) is widely used as a proxy for detecting frictional heat recorded by fault rocks. The degree of maturation of CM is controlled not only by the maximum temperature but also by the heating rate. Nevertheless, maximum slip zone temperature has previously been estimated by comparing the maturity of CM in natural fault rocks with that of synthetic products heated at rates of about 1 °C s⁻¹, even though this rate is much lower than the actual heating rate during an earthquake. In this study, we investigated the kinetic effect of heating rate on the CM maturation process by performing organochemical analyses of CM heated at slow (1 °C s⁻¹) and fast (100 °C s⁻¹) rates.
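The kinetic competition described above can be illustrated with a first-order Arrhenius reaction under a linear temperature ramp: dα/dT = (A/β) exp(−E/RT)(1 − α), so a faster heating rate β pushes the same conversion to a higher temperature. The A and E below are invented, not fitted to any carbonaceous material:

```python
import math

# Temperature at which a first-order Arrhenius reaction reaches 50%
# conversion under a linear ramp T(t) = 300 K + beta * t.
A, E, R = 1e10, 200e3, 8.314     # 1/s, J/mol, J/(mol K)  (assumed values)

def temp_at_half_conversion(beta):
    """beta: heating rate in K/s. Returns T (K) where conversion hits 0.5."""
    alpha, T, dT = 0.0, 300.0, 0.1
    while alpha < 0.5 and T < 2000.0:
        k = A * math.exp(-E / (R * T))           # Arrhenius rate constant
        alpha += (1 - alpha) * k * (dT / beta)   # dt = dT / beta
        T += dT
    return T

print(temp_at_half_conversion(1.0), temp_at_half_conversion(100.0))
```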
    The results clearly showed that a higher heating rate can inhibit the maturation reactions of CM; for example, extinction of aliphatic hydrocarbon chains occurred at 600 °C at a heating rate of 1 °C s⁻¹ but at 900 °C at a heating rate of 100 °C s⁻¹. However, shear-enhanced mechanochemical effects can also promote CM maturation reactions and may offset the effect of a high heating rate. We should thus consider the effects of both heating rate and mechanochemistry on CM maturation simultaneously to establish CM as a more rigorous proxy for the frictional heat recorded by fault rocks and for estimating slip behaviors during earthquakes.

  18. Modified two-layer social force model for emergency earthquake evacuation

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Hong; Qin, Xin; Liu, Baoxi

    2018-02-01

    Studies of crowd behavior, with related research on computer simulation, provide an effective basis for architectural design and effective crowd management. Based on low-density group organization patterns, a modified two-layer social force model is proposed in this paper to simulate and reproduce a group gathering process. First, this paper studies evacuation videos from the Luan'xian earthquake in 2012 and extends the study of group organization patterns to higher densities.
Furthermore, taking full advantage of the model's strength in crowd-gathering simulations, a new method for grouping and guidance based on crowd dynamics is proposed. Second, a real-life grouping situation in <span class="hlt">earthquake</span> evacuation is simulated and reproduced. Compared with the fundamental social force <span class="hlt">model</span> and an existing guided-crowd <span class="hlt">model</span>, the modified <span class="hlt">model</span> reduces congestion time and more faithfully reflects group behaviors. The experimental results also show that a stable group pattern and a suitable leader can decrease collisions and allow a safer evacuation process.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1816873S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1816873S"><span>Rheological behavior of the crust and mantle in subduction zones in the time-scale range from <span class="hlt">earthquake</span> (minute) to mln years inferred from thermomechanical <span class="hlt">model</span> and geodetic observations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sobolev, Stephan; Muldashev, Iskander</p> <p>2016-04-01</p> <p>The key achievement of the geodynamic <span class="hlt">modelling</span> community, greatly advanced by the work of Evgenii Burov and his students, is the application of "realistic", mineral-physics-based non-linear rheological <span class="hlt">models</span> to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon, with time-scales spanning from the geological scale to the <span class="hlt">earthquake</span> scale, with the seismic cycle in-between.
In this study we test the possibility of simulating the entire subduction process, from rupture (1 min) to geological time (Mln yr), with a single cross-scale thermomechanical <span class="hlt">model</span> that employs elasticity, a mineral-physics-constrained non-linear transient viscous rheology and <span class="hlt">rate</span>-and-state friction plasticity. First we generate a thermo-mechanical <span class="hlt">model</span> of a subduction zone at the geological time-scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce into the same <span class="hlt">model</span> the classic <span class="hlt">rate</span>-and-state friction law in the subduction channel, leading to stick-slip instability. This <span class="hlt">model</span> generates spontaneous <span class="hlt">earthquake</span> sequences. To follow the deformation process in detail through the entire seismic cycle, and through multiple seismic cycles, we use an adaptive time-step algorithm that changes the step from 40 s during the <span class="hlt">earthquake</span> to minutes-to-5 years during postseismic and interseismic periods. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this <span class="hlt">model</span> predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge already from hours to a day after great (M>9) <span class="hlt">earthquakes</span>.
We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku <span class="hlt">Earthquake</span> over the day-to-4-year time range.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1911966S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1911966S"><span>Three dimensional <span class="hlt">modelling</span> of <span class="hlt">earthquake</span> rupture cycles on frictional faults</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Simpson, Guy; May, Dave</p> <p>2017-04-01</p> <p>We are developing an efficient MPI-parallel numerical method to simulate <span class="hlt">earthquake</span> sequences on preexisting faults embedded within a three-dimensional viscoelastic half-space. We solve the velocity form of the elasto(visco)dynamic equations using a continuous Galerkin Finite Element Method on an unstructured pentahedral mesh, which thus permits local spatial refinement in the vicinity of the fault. Frictional sliding is coupled to the viscoelastic solid via <span class="hlt">rate</span>- and state-dependent friction laws using the split-node technique. Our coupled formulation employs a Picard-type non-linear solver with a fully implicit, first order accurate time integrator that utilises an adaptive time step that efficiently evolves the system through multiple seismic cycles. The implementation leverages advanced parallel solvers, preconditioners and linear algebra from the Portable Extensible Toolkit for Scientific Computing (PETSc) library. The <span class="hlt">model</span> can treat heterogeneous frictional properties and stress states on the fault and surrounding solid as well as non-planar fault geometries.
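The rate- and state-dependent friction law with ageing (Dieterich) state evolution, used by both of the abstracts above, can be sketched in a few lines; the parameter values below are illustrative lab-scale numbers, not either study's calibration.

```python
import math

# Rate-and-state friction: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),
# with the ageing law d(theta)/dt = 1 - V*theta/Dc.
MU0, A, B = 0.6, 0.010, 0.015   # reference friction, direct and evolution effects
V0, DC = 1e-6, 1e-5             # reference slip rate (m/s), critical slip distance (m)

def friction(v, theta):
    return MU0 + A * math.log(v / V0) + B * math.log(V0 * theta / DC)

def theta_steady(v):
    # the ageing law d(theta)/dt = 1 - v*theta/DC vanishes at theta = DC/v
    return DC / v

v1, v2 = 1e-6, 1e-4
mu1 = friction(v1, theta_steady(v1))
mu2 = friction(v2, theta_steady(v2))
print(f"steady-state friction: {mu1:.4f} at {v1} m/s, {mu2:.4f} at {v2} m/s")
# a - b < 0 here, so steady-state friction weakens with slip rate:
# the velocity-weakening condition needed for stick-slip instability.
assert mu2 < mu1
```

Steady-state friction follows μss = μ0 + (a − b) ln(V/V0), so the sign of a − b decides whether the fault can nucleate stick-slip events, which is why both studies track these two parameters.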
Preliminary tests show that the <span class="hlt">model</span> successfully reproduces dynamic rupture on a vertical strike-slip fault in a half-space governed by <span class="hlt">rate</span>-state friction with the ageing law.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70032496','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70032496"><span>Evidence for <span class="hlt">earthquake</span> triggering of large landslides in coastal Oregon, USA</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Schulz, W.H.; Galloway, S.L.; Higgins, J.D.</p> <p>2012-01-01</p> <p>Landslides are ubiquitous along the Oregon
coast. Many are large, deep slides in sedimentary rock and are dormant or active only during the rainy season. Morphology, observed movement <span class="hlt">rates</span>, and total movement suggest that many are at least several hundred years old. The offshore Cascadia subduction zone produces great <span class="hlt">earthquakes</span> every 300–500 years that generate tsunami that inundate the coast within minutes. Many slides and slide-prone areas underlie tsunami evacuation and emergency response routes. We evaluated the likelihood of existing and future large rockslides being triggered by pore-water pressure increase or <span class="hlt">earthquake</span>-induced ground motion using field observations and <span class="hlt">modeling</span> of three typical slides. Monitoring for 2–9 years indicated that the rockslides reactivate when pore pressures exceed readily identifiable levels. Measurements of total movement and observed movement <span class="hlt">rates</span> suggest that two of the rockslides are 296–336 years old (the third could not be dated). The most recent great Cascadia <span class="hlt">earthquake</span> was M 9.0 and occurred during January 1700, while regional climatological conditions have been stable for at least the past 600 years. Hence, the estimated ages of the slides support <span class="hlt">earthquake</span> ground motion as their triggering mechanism. Limit-equilibrium slope-stability <span class="hlt">modeling</span> suggests that increased pore-water pressures could not trigger formation of the observed slides, even when accompanied by progressive strength loss. <span class="hlt">Modeling</span> suggests that ground accelerations comparable to those recorded at geologically similar sites during the M 9.0, 11 March 2011 Japan Trench subduction-zone <span class="hlt">earthquake</span> would trigger formation of the rockslides.
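The limit-equilibrium contrast described here (pore pressure alone insufficient, shaking sufficient) can be sketched with an infinite-slope factor of safety and a pseudostatic seismic coefficient. Geometry and strength values below are illustrative, not the study's site data.

```python
import math

# Infinite-slope, limit-equilibrium factor of safety with pore pressure (ru)
# and a pseudostatic horizontal seismic coefficient (k).  FS < 1 implies failure.
BETA = math.radians(12.0)            # slope angle (assumed)
Z, GAMMA = 30.0, 20.0                # failure depth (m), unit weight (kN/m^3), assumed
C, PHI = 30.0, math.radians(30.0)    # cohesion (kPa), friction angle, assumed

def factor_of_safety(ru, k=0.0):
    """ru: pore-pressure ratio u/(gamma*z); k: horizontal seismic coefficient."""
    sz = GAMMA * Z                   # vertical total stress on the slip surface, kPa
    u = ru * sz                      # pore-water pressure, kPa
    normal = sz * (math.cos(BETA)**2 - k * math.sin(BETA) * math.cos(BETA))
    driving = sz * (math.sin(BETA) * math.cos(BETA) + k * math.cos(BETA)**2)
    return (C + (normal - u) * math.tan(PHI)) / driving

fs_wet = factor_of_safety(ru=0.5)            # elevated pore pressure, no shaking
fs_shake = factor_of_safety(ru=0.5, k=0.35)  # plus strong ground motion
print(f"FS wet: {fs_wet:.2f}, FS wet + shaking: {fs_shake:.2f}")
assert fs_wet > 1.0 > fs_shake
```

With these illustrative numbers the wet-but-unshaken slope stays above FS = 1 while adding a strong pseudostatic load drops it well below, the same qualitative behavior the abstract reports.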
Displacement <span class="hlt">modeling</span> following the Newmark approach suggests that the rockslides would move only centimeters upon coseismic formation; however, coseismic reactivation of existing rockslides would involve meters of displacement. Our findings provide better understanding of the dynamic coastal bluff</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S23F..07H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S23F..07H"><span>Constitutive law for seismicity <span class="hlt">rate</span> based on <span class="hlt">rate</span> and state friction: Dieterich 1994 revisited.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Heimisson, E. R.; Segall, P.</p> <p>2017-12-01</p> <p>Dieterich [1994] derived a constitutive law for seismicity <span class="hlt">rate</span> based on <span class="hlt">rate</span> and state friction, which has been applied widely to aftershocks, <span class="hlt">earthquake</span> triggering, and induced seismicity in various geological settings. Here, this influential work is revisited, and re-derived in a more straightforward manner. By virtue of this new derivation the <span class="hlt">model</span> is generalized to include changes in effective normal stress associated with background seismicity. Furthermore, the general case when seismicity <span class="hlt">rate</span> is not constant under constant stressing <span class="hlt">rate</span> is formulated. The new derivation provides directly practical integral expressions for the cumulative number of events and <span class="hlt">rate</span> of seismicity for arbitrary stressing history. Arguably, the most prominent limitation of Dieterich's 1994 theory is the assumption that seismic sources do not interact. 
Here we derive a constitutive relationship that considers source interactions between sub-volumes of the crust, where the stress in each sub-volume is assumed constant. Interactions are considered both under constant stressing <span class="hlt">rate</span> conditions and for arbitrary stressing history. This theory can be used to <span class="hlt">model</span> seismicity <span class="hlt">rate</span> due to stress changes or to estimate stress changes using observed seismicity from triggered <span class="hlt">earthquake</span> swarms where <span class="hlt">earthquake</span> interactions and magnitudes are taken into account. We identify special conditions under which the influence of interactions cancels and the predictions reduce to those of Dieterich 1994. This remarkable result may explain the apparent success of the <span class="hlt">model</span> when applied to observations of triggered seismicity. This approach has application to understanding and <span class="hlt">modeling</span> induced and triggered seismicity, and the quantitative interpretation of geodetic and seismic data. It enables simultaneous <span class="hlt">modeling</span> of geodetic and seismic data in a self-consistent framework. To date physics-based <span class="hlt">modeling</span> of seismicity with or without geodetic data has been found to give insight into various processes</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008PApGe.165..777A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008PApGe.165..777A"><span><span class="hlt">Earthquakes</span>: Recurrence and Interoccurrence Times</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W.
I.</p> <p>2008-04-01</p> <p>The purpose of this paper is to discuss the statistical distributions of recurrence times of <span class="hlt">earthquakes</span>. Recurrence times are the time intervals between successive <span class="hlt">earthquakes</span> at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic <span class="hlt">earthquakes</span> on the San Andreas fault: (1) The Parkfield <span class="hlt">earthquakes</span>, (2) the sequence of <span class="hlt">earthquakes</span> identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating <span class="hlt">earthquakes</span> at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of <span class="hlt">earthquakes</span> in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of <span class="hlt">earthquakes</span> obtained from a one million year “Virtual California” simulation of San Andreas <span class="hlt">earthquakes</span>. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other <span class="hlt">model</span> studies. The first is a modified forest-fire <span class="hlt">model</span> and the second is a slider-block <span class="hlt">model</span>. In both cases good agreements with Weibull distributions are obtained. 
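The Weibull description argued for above can be sketched numerically: the cumulative hazard H(t) = (t/η)^β gives a power-law (scale-invariant) hazard rate, and for β > 1 the conditional rupture probability grows with time since the last event. The shape and mean below are illustrative, loosely Parkfield-like, not values fitted to any catalog.

```python
import math

BETA = 2.0   # Weibull shape; > 1 means quasi-periodic recurrence (assumed)
# choose the scale so the mean recurrence time is ~25 yr (illustrative)
ETA = 25.0 / math.gamma(1.0 + 1.0 / BETA)

def conditional_prob(elapsed, dt):
    """P(event in (elapsed, elapsed + dt] | quiet through `elapsed`)."""
    H = lambda t: (t / ETA) ** BETA          # cumulative hazard
    return 1.0 - math.exp(-(H(elapsed + dt) - H(elapsed)))

early = conditional_prob(5.0, 10.0)   # 5 yr after the last event
late = conditional_prob(30.0, 10.0)   # 30 yr after the last event
print(f"10-yr conditional probability: {early:.2f} early vs {late:.2f} late in the cycle")
assert late > early   # hazard increases with time since the last event
```

A memoryless exponential model would give identical early and late probabilities; the increasing hazard is exactly what makes the Weibull choice consequential for risk estimates.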
Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future <span class="hlt">earthquakes</span> on the San Andreas fault and elsewhere.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S53B0696F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S53B0696F"><span><span class="hlt">Earthquake</span> Forecasting System in Italy</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.</p> <p>2017-12-01</p> <p>In Italy, after the 2009 L'Aquila <span class="hlt">earthquake</span>, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive <span class="hlt">earthquake</span>. The most striking time dependency of the <span class="hlt">earthquake</span> occurrence process is the time clustering, which is particularly pronounced in time windows of days and weeks. The Operational <span class="hlt">Earthquake</span> Forecasting (OEF) system that is developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable <span class="hlt">earthquake</span> forecasting system developed at CPS is based on ensemble <span class="hlt">modeling</span> and on a rigorous testing phase. 
This phase is carried out according to the guidance proposed by the Collaboratory for the Study of <span class="hlt">Earthquake</span> Predictability (CSEP, an international infrastructure aimed at evaluating quantitatively <span class="hlt">earthquake</span> prediction and forecast <span class="hlt">models</span> through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term <span class="hlt">models</span> were used: the Epidemic-Type Aftershock Sequences (ETAS) and the Short-Term <span class="hlt">Earthquake</span> Probabilities (STEP). Here, we report the results from OEF's 24-hour <span class="hlt">earthquake</span> forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003EAEJA....11382J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003EAEJA....11382J"><span><span class="hlt">Modelling</span> low-frequency volcanic <span class="hlt">earthquakes</span> in a viscoelastic medium with topography</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jousset, P.; Neuberg, J.</p> <p>2003-04-01</p> <p>Magma properties are fundamental to explain the volcanic eruption style as well as the generation and propagation of seismic waves. This study focusses on rheological magma properties and their impact on low-frequency volcanic <span class="hlt">earthquakes</span>. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency <span class="hlt">earthquakes</span>. Using a 2D finite difference scheme, we <span class="hlt">model</span> the propagation of seismic energy initiated in a fluid-filled conduit embedded in a 2D homogeneous viscoelastic medium with topography.
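The ETAS ingredient of such OEF systems can be sketched as a conditional intensity: a background rate plus Omori-decaying, magnitude-weighted aftershock productivity summed over past events. The parameter values below are generic textbook numbers, not the INGV system's calibration.

```python
import math

MU = 0.2               # background rate, events/day (assumed)
K, ALPHA = 0.02, 1.0   # productivity parameters (assumed)
C, P = 0.01, 1.1       # Omori-Utsu constants, days (assumed)
M0 = 3.0               # reference (minimum) magnitude

def etas_rate(t, catalog):
    """ETAS conditional intensity at time t (days) given past (t_i, m_i) events."""
    rate = MU
    for ti, mi in catalog:
        if ti < t:
            rate += K * math.exp(ALPHA * (mi - M0)) / (t - ti + C) ** P
    return rate

# hypothetical sequence: a mainshock followed by two moderate aftershocks
catalog = [(0.0, 6.0), (0.5, 4.5), (1.2, 5.0)]
print(f"rate 1 day after the mainshock: {etas_rate(1.0, catalog):.2f} events/day")
assert etas_rate(1.3, catalog) > etas_rate(10.0, catalog) > MU
```

Integrating this intensity over the next 24 hours gives exactly the kind of short-term expected count that an OEF system turns into a daily forecast.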
Topography is introduced by using a mapping procedure that stretches the computational rectangular grid into a grid which follows the topography. We <span class="hlt">model</span> intrinsic attenuation by linear viscoelastic theory and we show that volcanic media can be approximated by a standard linear solid for seismic frequencies (i.e., above 2 Hz). Results demonstrate that attenuation modifies both amplitude and dispersive characteristics of low-frequency <span class="hlt">earthquakes</span>. Low-frequency events are dispersive by nature; however, if attenuation is introduced, their dispersion characteristics will be altered. The topography modifies the amplitudes, depending on the position of seismographs at the surface. This study shows that we need to take into account attenuation and topography to interpret correctly observed low-frequency volcanic <span class="hlt">earthquakes</span>. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16605393','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16605393"><span>Nonextensive <span class="hlt">models</span> for <span class="hlt">earthquakes</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Silva, R; França, G S; Vilar, C S; Alcaniz, J S</p> <p>2006-02-01</p> <p>We have revisited the fragment-asperity interaction <span class="hlt">model</span> recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the <span class="hlt">earthquake</span> energy and the size of the fragments, ε ∝ r³.
The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S53A2794B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S53A2794B"><span><span class="hlt">Earthquake</span> Fingerprints: Representing <span class="hlt">Earthquake</span> Waveforms for Similarity-Based Detection</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bergen, K.; Beroza, G. C.</p> <p>2016-12-01</p> <p>New <span class="hlt">earthquake</span> detection methods, such as Fingerprint and Similarity Thresholding (FAST), use fast approximate similarity search to identify similar waveforms in long-duration data without templates (Yoon et al. 2015). These methods have two key components: fingerprint extraction and an efficient search algorithm. Fingerprint extraction converts waveforms into fingerprints, compact signatures that represent short-duration waveforms for identification and search. <span class="hlt">Earthquakes</span> are detected using an efficient indexing and search scheme, such as locality-sensitive hashing, that identifies similar waveforms in a fingerprint database. The quality of the search results, and thus the <span class="hlt">earthquake</span> detection results, is strongly dependent on the fingerprinting scheme.
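The search side of a FAST-style pipeline can be sketched with MinHash plus banded locality-sensitive hashing: fingerprints are treated as sets of active features, compressed into short signatures, and candidate matches are waveform pairs whose signatures collide in at least one band. The feature sets below are hypothetical placeholders; real fingerprints come from the extraction step the abstract focuses on.

```python
import hashlib

N_HASH, BAND = 16, 4   # signature length and rows per LSH band (assumed)

def minhash(features):
    """MinHash signature: per hash function, the minimum hash over the feature set."""
    sig = []
    for i in range(N_HASH):
        sig.append(min(int(hashlib.md5(f"{i}:{f}".encode()).hexdigest(), 16)
                       for f in features))
    return tuple(sig)

def lsh_buckets(sig):
    """Split the signature into bands; each (band index, band values) is a bucket key."""
    return {(b, sig[b * BAND:(b + 1) * BAND]) for b in range(N_HASH // BAND)}

event_a = {"f3t1", "f5t2", "f7t2", "f2t4", "f9t5"}   # hypothetical feature sets
event_b = {"f3t1", "f5t2", "f7t2", "f2t4", "f8t5"}   # near-duplicate waveform
noise = {"f1t9", "f4t7", "f6t3", "f8t8", "f2t2"}     # unrelated interval

hits_ab = lsh_buckets(minhash(event_a)) & lsh_buckets(minhash(event_b))
hits_an = lsh_buckets(minhash(event_a)) & lsh_buckets(minhash(noise))
print(f"shared buckets: {len(hits_ab)} (similar pair), {len(hits_an)} (noise interval)")
```

Because a band collides with probability J^BAND for Jaccard similarity J, banding sharpens the threshold between similar waveform pairs and noise, which is the trade-off between true and false detection rates the abstract quantifies.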
Fingerprint extraction should map similar <span class="hlt">earthquake</span> waveforms to similar waveform fingerprints to ensure a high detection <span class="hlt">rate</span>, even under additive noise and small distortions. Additionally, fingerprints corresponding to noise intervals should be mutually dissimilar to minimize false detections. In this work, we compare the performance of multiple fingerprint extraction approaches for the <span class="hlt">earthquake</span> waveform similarity search problem. We apply existing audio fingerprinting (used in content-based audio identification systems) and time series indexing techniques and present modified versions that are specifically adapted for seismic data. We also explore data-driven fingerprinting approaches that can take advantage of labeled or unlabeled waveform data. For each fingerprinting approach we measure its ability to identify similar waveforms in a low signal-to-noise setting, and quantify the trade-off between true and false detection <span class="hlt">rates</span> in the presence of persistent noise sources. We compare the performance using known event waveforms from eight independent stations in the Northern California Seismic Network.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.G53A0765L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.G53A0765L"><span>GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 <span class="hlt">earthquake</span> and source <span class="hlt">modelling</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.</p> <p>2017-12-01</p> <p>On 25th November 2016, a Ms6.7 <span class="hlt">earthquake</span> occurred in Aktao, a county of Xinjiang, China.
This <span class="hlt">earthquake</span> was the largest <span class="hlt">earthquake</span> to occur in the northeastern margin of the Pamir Plateau in the last 30 years. From GPS observations, we obtained the coseismic displacement field of this <span class="hlt">earthquake</span>. The site of maximum displacement is located in the Muji Basin, 15 km south of the causative fault. The maximum deformation is 0.12 m of subsidence and 0.10 m of horizontal coseismic displacement; our results indicate that the <span class="hlt">earthquake</span> has the characteristics of dextral strike-slip and normal-fault rupture. Based on the GPS results, we invert for the rupture distribution of the <span class="hlt">earthquake</span>. The source <span class="hlt">model</span> consists of two approximately independent zones at depths of less than 20 km; the maximum slip of one zone is 0.6 m and of the other is 0.4 m. The total seismic moment calculated by the geodetic inversion corresponds to Mw 6.6. The GPS-derived source <span class="hlt">model</span> is broadly consistent with that from seismic waveform inversion, and is consistent with the surface rupture distribution obtained from field investigation. According to our inversion calculation, the recurrence period of strong <span class="hlt">earthquakes</span> similar to this <span class="hlt">earthquake</span> should be 30–60 years, and the seismic hazard of the eastern segment of the Muji fault is worthy of attention. This research is financially supported by National Natural Science Foundation of China (Grant No.41374030)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AGUFM.S22C..01J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AGUFM.S22C..01J"><span>Prospective testing of Coulomb short-term <span class="hlt">earthquake</span> forecasts</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jackson, D.
D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.</p> <p>2009-12-01</p> <p><span class="hlt">Earthquake</span> induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future <span class="hlt">earthquakes</span>. <span class="hlt">Models</span> to estimate stress and the resulting seismicity changes could help to illuminate <span class="hlt">earthquake</span> physics and guide appropriate precautionary response. But do these <span class="hlt">models</span> have improved forecasting power compared to empirical statistical <span class="hlt">models</span>? The best answer lies in prospective testing in which a fully specified <span class="hlt">model</span>, with no subsequent parameter adjustments, is evaluated against future <span class="hlt">earthquakes</span>. The Collaboratory for the Study of <span class="hlt">Earthquake</span> Predictability (CSEP) facilitates such prospective testing of <span class="hlt">earthquake</span> forecasts, including several short term forecasts. Formulating Coulomb stress <span class="hlt">models</span> for formal testing involves several practical problems, mostly shared with other short-term <span class="hlt">models</span>. First, <span class="hlt">earthquake</span> probabilities must be calculated after each “perpetrator” <span class="hlt">earthquake</span> but before the triggered <span class="hlt">earthquakes</span>, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short term <span class="hlt">models</span> daily, and allows daily updates of the <span class="hlt">models</span>. However, lots can happen in a day. An alternative is to test and update <span class="hlt">models</span> on the occurrence of each <span class="hlt">earthquake</span> over a certain magnitude.
To make such updates rapidly enough and to qualify as prospective, <span class="hlt">earthquake</span> focal mechanisms, slip distributions, stress patterns, and <span class="hlt">earthquake</span> probabilities would have to be made by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered <span class="hlt">earthquakes</span> are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative <span class="hlt">model</span> for detection threshold as a function of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70136079','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70136079"><span>Geodetic constraints on the 2014 M 6.0 South Napa <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Barnhart, William D.; Murray, Jessica R.; Yun, S H; Svarc, Jerry L.; Samsonov, SV; Fielding, EJ; Brooks, Benjamin A.; Milillo, Pietro</p> <p>2014-01-01</p> <p>On 24 August 2014, the M 6.0 South Napa <span class="hlt">earthquake</span> shook much of the San Francisco Bay area, leading to significant damage in the Napa Valley. The <span class="hlt">earthquake</span> occurred in the vicinity of the West Napa fault (122.313° W, 38.22° N, 11.3 km), a mapped structure located between the Rodger’s Creek and Green Valley faults, with nearly pure right‐lateral strike‐slip motion (strike 157°, dip 77°, rake –169°; http://comcat.cr.usgs.gov/<span class="hlt">earthquakes</span>/eventpage/nc72282711#summary, last accessed December 2014) (Fig. 1). 
The West Napa fault previously experienced an M 5 strike‐slip event in 2000 but otherwise exhibited no definitive evidence of historic <span class="hlt">earthquake</span> rupture (Rodgers et al., 2008; Wesling and Hanson, 2008). Evans et al. (2012) found slip <span class="hlt">rates</span> of ∼9.5 mm/yr along the West Napa fault, with most slip <span class="hlt">rate</span> <span class="hlt">models</span> for the Bay area placing higher slip <span class="hlt">rates</span> and greater <span class="hlt">earthquake</span> potential on the Rodger’s Creek and Green Valley faults, respectively (e.g., Savage et al., 1999; d’Alessio et al., 2005; Funning et al., 2007).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70029492','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70029492"><span>A frictional population <span class="hlt">model</span> of seismicity <span class="hlt">rate</span> change</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.</p> <p>2005-01-01</p> <p>We study <span class="hlt">models</span> of seismicity <span class="hlt">rate</span> changes caused by the application of a static stress perturbation to a population of faults and discuss our results with respect to the <span class="hlt">model</span> proposed by Dieterich (1994). These <span class="hlt">models</span> assume a distribution of nucleation sites (e.g., faults) obeying <span class="hlt">rate</span>-state frictional relations that fail at a constant <span class="hlt">rate</span> under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediately increased seismicity <span class="hlt">rate</span> that decays according to Omori's law.
We show one way in which the Dieterich <span class="hlt">model</span> may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that the seismicity <span class="hlt">rate</span> changes predicted by these <span class="hlt">models</span> (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations, quiescence follows a seismicity <span class="hlt">rate</span> increase regardless of the specific frictional relations. For the examined <span class="hlt">models</span> the quiescence duration is comparable to the ratio of stress change to stressing <span class="hlt">rate</span>, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple <span class="hlt">model</span> may partly explain observations of repeated clustering of <span class="hlt">earthquakes</span>.
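The Dieterich (1994) seismicity-rate response to a positive static stress step, which both this and the preceding abstract build on, can be sketched directly. Parameter values are illustrative (aσ = 0.1 MPa, reference stressing rate 0.01 MPa/yr), not either study's numbers.

```python
import math

ASIGMA = 0.1           # a*sigma, MPa (assumed)
TAU_RATE = 0.01        # reference stressing rate, MPa/yr (assumed)
TA = ASIGMA / TAU_RATE # characteristic decay (aftershock) time, yr

def rate_ratio(t, dtau):
    """Seismicity rate relative to background, t years after a stress step dtau (MPa):
    R/r = 1 / ((exp(-dtau/(a*sigma)) - 1) * exp(-t/ta) + 1)."""
    gamma = (math.exp(-dtau / ASIGMA) - 1.0) * math.exp(-t / TA) + 1.0
    return 1.0 / gamma

step = 0.3  # MPa, e.g. a hypothetical nearby Coulomb stress increase
jump = rate_ratio(0.0, step)       # immediate rate jump after the step
later = rate_ratio(5.0 * TA, step) # several decay times later
print(f"rate jump: x{jump:.1f}; after 5 decay times: x{later:.2f}")
assert jump > later > 1.0
```

The immediate jump is exp(Δτ/aσ) and the early decay is Omori-like (roughly 1/t), relaxing back to the background rate over the timescale ta = aσ/τ̇, which is the structure the clock-advance and quiescence arguments above rely on.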
Copyright 2005 by the American Geophysical Union.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70030235','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70030235"><span>12 May 2008 M = 7.9 Wenchuan, China, <span class="hlt">earthquake</span> calculated to increase failure stress and seismicity <span class="hlt">rate</span> on three major fault systems</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Toda, S.; Lin, J.; Meghraoui, M.; Stein, R.S.</p> <p>2008-01-01</p> <p>The Wenchuan <span class="hlt">earthquake</span> on the Longmen Shan fault zone devastated cities of Sichuan, claiming at least 69,000 lives. We calculate that the <span class="hlt">earthquake</span> also brought the Xianshuihe, Kunlun and Min Jiang faults 150-400 km from the mainshock rupture in the eastern Tibetan Plateau 0.2-0.5 bars closer to Coulomb failure. Because some portions of these stressed faults have not ruptured in more than a century, the <span class="hlt">earthquake</span> could trigger or hasten additional M > 7 <span class="hlt">earthquakes</span>, potentially subjecting regions from Kangding to Daofu and Maqin to Rangtag to strong shaking. We use the calculated stress changes and the observed background seismicity to forecast the <span class="hlt">rate</span> and distribution of damaging shocks. The <span class="hlt">earthquake</span> probability in the region is estimated to be 57-71% for M ≥ 6 shocks during the next decade, and 8-12% for M ≥ 7 shocks. These are up to twice the probabilities for the decade before the Wenchuan <span class="hlt">earthquake</span> struck. 
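Decade probabilities like those quoted above follow from an earthquake rate via the time-independent Poisson relation P = 1 − exp(−λT); a short sketch with made-up rates, not those of the study:

```python
import math

def poisson_prob(rate_per_yr, years):
    """Probability of at least one event in a window, assuming a
    time-independent Poisson process with rate lambda = rate_per_yr."""
    return 1.0 - math.exp(-rate_per_yr * years)

# Hypothetical post-mainshock rate of 0.1 M >= 6 events/yr over one decade.
p_decade = poisson_prob(0.1, 10.0)
```

Doubling the rate does not double the probability, since the relation saturates toward 1 for large λT.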
Copyright 2008 by the American Geophysical Union.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70020608','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70020608"><span><span class="hlt">Earthquake</span> triggering by transient and static deformations</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Gomberg, J.; Beeler, N.M.; Blanpied, M.L.; Bodin, P.</p> <p>1998-01-01</p> <p>Observational evidence for both static and transient near-field and far-field triggered seismicity is explained in terms of a frictional instability <span class="hlt">model</span>, based on a single degree of freedom spring-slider system and <span class="hlt">rate</span>- and state-dependent frictional constitutive equations. In this study a triggered <span class="hlt">earthquake</span> is one whose failure time has been advanced by Δt (clock advance) due to a stress perturbation. Triggering stress perturbations considered include square-wave transients and step functions, analogous to seismic waves and coseismic static stress changes, respectively. Perturbations are superimposed on a constant background stressing <span class="hlt">rate</span> which represents the tectonic stressing <span class="hlt">rate</span>. The normal stress is assumed to be constant. Approximate, closed-form solutions of the <span class="hlt">rate</span>-and-state equations are derived for these triggering and background loads, building on the work of Dieterich [1992, 1994]. These solutions can be used to simulate the effects of static and transient stresses as a function of amplitude, onset time t0, and in the case of square waves, duration. The accuracies of the approximate closed-form solutions are also evaluated with respect to the full numerical solution and t0. 
The approximate solutions underpredict the full solutions, although the difference decreases as t0 approaches the end of the <span class="hlt">earthquake</span> cycle. The relationship between Δt and t0 differs for transient and static loads: a static stress step imposed late in the cycle causes less clock advance than an equal step imposed earlier, whereas a later applied transient causes greater clock advance than an equal one imposed earlier. For equal Δt, transient amplitudes must be greater than static loads by factors of several tens to hundreds depending on t0. We show that the <span class="hlt">rate</span>-and-state <span class="hlt">model</span> requires that the total slip at failure is a constant, regardless of the loading history. Thus a static load applied early in the cycle, or a transient applied at any time, reduces the stress</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70123289','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70123289"><span>Global Omori law decay of triggered <span class="hlt">earthquakes</span>: large aftershocks outside the classical aftershock zone</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Parsons, Tom</p> <p>2002-01-01</p> <p>Triggered <span class="hlt">earthquakes</span> can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 <span class="hlt">earthquakes</span> in El Salvador. In this study, <span class="hlt">earthquakes</span> with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are <span class="hlt">modeled</span> as dislocations to calculate shear stress changes on subsequent <span class="hlt">earthquake</span> rupture planes near enough to be affected. 
About 61% of <span class="hlt">earthquakes</span> that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If <span class="hlt">earthquakes</span> associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered <span class="hlt">earthquakes</span> obey an Omori law <span class="hlt">rate</span> decay that lasts between ∼7–11 years after the main shock. <span class="hlt">Earthquakes</span> associated with calculated shear stress increases occur at higher <span class="hlt">rates</span> than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of <span class="hlt">earthquakes</span>. If large triggered <span class="hlt">earthquakes</span> habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic <span class="hlt">rate</span> change with time and spatial distribution can be used to rapidly assess the likelihood of triggered <span class="hlt">earthquakes</span> following events of Ms ≥ 7.0. 
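The modified Omori decay used here to describe triggered-event rates can be sketched as follows; the constants K, c, and p are illustrative, not fitted values from the study:

```python
def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Modified Omori law: aftershock rate K / (c + t)**p, with t in days."""
    return K / (c + t) ** p

def expected_count(t1, t2, K=100.0, c=0.1, p=1.0, n=10000):
    """Expected number of events between t1 and t2 days, by trapezoidal
    integration of the rate (an analytic form exists, but this stays generic
    for any p)."""
    dt = (t2 - t1) / n
    ts = [t1 + i * dt for i in range(n + 1)]
    vals = [omori_rate(t, K, c, p) for t in ts]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

For p = 1 the integral is K·ln((c + t2)/(c + t1)), which is a convenient cross-check on the numerical version.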
I show an example application to the M = 7.7 13 January 2001 El Salvador <span class="hlt">earthquake</span> where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002JGRB..107.2199P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002JGRB..107.2199P"><span>Global Omori law decay of triggered <span class="hlt">earthquakes</span>: Large aftershocks outside the classical aftershock zone</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Parsons, Tom</p> <p>2002-09-01</p> <p>Triggered <span class="hlt">earthquakes</span> can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 <span class="hlt">earthquakes</span> in El Salvador. In this study, <span class="hlt">earthquakes</span> with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are <span class="hlt">modeled</span> as dislocations to calculate shear stress changes on subsequent <span class="hlt">earthquake</span> rupture planes near enough to be affected. About 61% of <span class="hlt">earthquakes</span> that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ˜39% are associated with shear stress decreases. If <span class="hlt">earthquakes</span> associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered <span class="hlt">earthquakes</span> obey an Omori law <span class="hlt">rate</span> decay that lasts between ˜7-11 years after the main shock. 
<span class="hlt">Earthquakes</span> associated with calculated shear stress increases occur at higher <span class="hlt">rates</span> than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of <span class="hlt">earthquakes</span>. If large triggered <span class="hlt">earthquakes</span> habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic <span class="hlt">rate</span> change with time and spatial distribution can be used to rapidly assess the likelihood of triggered <span class="hlt">earthquakes</span> following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador <span class="hlt">earthquake</span> where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.S33B4509G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.S33B4509G"><span>Slip reactivation <span class="hlt">model</span> for the 2011 Mw9 Tohoku <span class="hlt">earthquake</span>: Dynamic rupture, sea floor displacements and tsunami simulations.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.</p> <p>2014-12-01</p> <p>The 2011 Mw9 Tohoku <span class="hlt">earthquake</span> was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil the complex rupture processes of a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-Net and Kik-Net) revealed complex ground motion patterns attributed to source effects, capturing detailed information about the rupture process. 
The seismic stations surrounding the Miyagi region (MYGH013) show two clear, distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source <span class="hlt">model</span> obtained from the inversion of strong motion data performed by Lee et al. (2011). In this <span class="hlt">model</span> two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-<span class="hlt">rate</span> snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al, 2012), suggesting slip reactivation during the main event. Repeated slip in large <span class="hlt">earthquakes</span> may occur due to frictional melting and thermal fluid pressurization effects. Kanamori & Heaton (2002) argued that during faulting of large <span class="hlt">earthquakes</span> the temperature rises high enough to cause melting and further reduction of the friction coefficient. We created a 3D dynamic rupture <span class="hlt">model</span> to reproduce this slip reactivation pattern using SPECFEM3D (Galvez et al, 2014) based on slip-weakening friction with two sudden, sequential stress drops. Our <span class="hlt">model</span> starts like an M7-8 <span class="hlt">earthquake</span> that barely breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip that fully breaks the trench and transforms the <span class="hlt">earthquake</span> into a megathrust event. The resulting sea floor displacements are in agreement with 1Hz GPS displacements (GEONET). 
The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea floor displacement reaches 8-10 meters of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017NPGeo..24..179B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017NPGeo..24..179B"><span>Sandpile-based <span class="hlt">model</span> for capturing magnitude distributions and spatiotemporal clustering and separation in regional <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.</p> <p>2017-04-01</p> <p>We propose a cellular automata <span class="hlt">model</span> for <span class="hlt">earthquake</span> occurrences patterned after the sandpile <span class="hlt">model</span> of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the <span class="hlt">model</span> successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed for the original sandpile. For this critical range of values for the probability, <span class="hlt">model</span> statistics show remarkable agreement with long-period empirical data from <span class="hlt">earthquakes</span> from different seismogenic regions. 
The proposed <span class="hlt">model</span> has key advantages, the foremost being that it simultaneously captures the energy, space, and time statistics of <span class="hlt">earthquakes</span> by introducing just a single parameter into the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in <span class="hlt">earthquake</span>-generating regions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AcGeo.tmp...51L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AcGeo.tmp...51L"><span><span class="hlt">Modeling</span> of the strong ground motion of 25th April 2015 Nepal <span class="hlt">earthquake</span> using modified semi-empirical technique</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lal, Sohan; Joshi, A.; Sandeep; Tomer, Monu; Kumar, Parveen; Kuo, Chun-Hsiang; Lin, Che-Min; Wen, Kuo-Liang; Sharma, M. L.</p> <p>2018-05-01</p> <p>On 25th April, 2015 a hazardous <span class="hlt">earthquake</span> of moment magnitude 7.9 occurred in Nepal. The <span class="hlt">earthquake</span> was recorded by accelerographs installed in the Kumaon region in the Himalayan state of Uttarakhand. The distance of the recording stations in the Kumaon region from the epicenter of the <span class="hlt">earthquake</span> is about 420-515 km. A modified semi-empirical technique for <span class="hlt">modeling</span> finite faults has been used in this paper to simulate strong-motion records of the <span class="hlt">earthquake</span> at these stations. Source parameters of the Nepal aftershock have also been calculated using the Brune <span class="hlt">model</span> in the present study, which are used in the <span class="hlt">modeling</span> of the Nepal main shock. 
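The targeted-triggering sandpile of the Batac et al. entry above can be sketched as a toy cellular automaton; the grid size, toppling threshold of 4, and targeting probability below are illustrative choices, not the authors' calibrated values:

```python
import random

def drop_grain(grid, n, p_target, rng):
    """Drop one grain: with probability p_target load the fullest (most
    susceptible) site, otherwise a random site; then topple until stable.
    Returns the avalanche size (number of topplings); boundary grains are lost."""
    if rng.random() < p_target:
        i = max(range(n * n), key=lambda k: grid[k])  # most susceptible site
    else:
        i = rng.randrange(n * n)
    grid[i] += 1
    size = 0
    unstable = [i] if grid[i] >= 4 else []
    while unstable:
        j = unstable.pop()
        if grid[j] < 4:
            continue
        grid[j] -= 4          # topple: shed four grains to neighbors
        size += 1
        r, c = divmod(j, n)
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < n and 0 <= cc < n:
                k = rr * n + cc
                grid[k] += 1
                if grid[k] >= 4:
                    unstable.append(k)
    return size

rng = random.Random(0)
n = 20
grid = [0] * (n * n)
sizes = [drop_grain(grid, n, 0.005, rng) for _ in range(5000)]
```

With `p_target = 0`, this reduces to the ordinary abelian sandpile; the single added parameter is what the abstract credits with reproducing the spatiotemporal clustering.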
The obtained values of the seismic moment and stress drop for the aftershock from the Brune <span class="hlt">model</span> are 8.26 × 10²⁵ dyn cm and 10.48 bar, respectively. The simulated <span class="hlt">earthquake</span> time series were compared with the observed records of the <span class="hlt">earthquake</span>. Comparison of the full waveforms and their response spectra was used to finalize the rupture parameters and location. The rupture of the <span class="hlt">earthquake</span> propagated in the NE-SW direction from the hypocenter with a rupture velocity of 3.0 km/s, starting about 80 km from Kathmandu in the NW direction at a depth of 12 km, as indicated by the comparison.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://dx.doi.org/10.1785/0220160155','USGSPUBS'); return false;" href="https://dx.doi.org/10.1785/0220160155"><span>Dynamic strains for <span class="hlt">earthquake</span> source characterization</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Barbour, Andrew J.; Crowell, Brendan W</p> <p>2017-01-01</p> <p>Strainmeters measure elastodynamic deformation associated with <span class="hlt">earthquakes</span> over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional <span class="hlt">earthquakes</span> from 2004–2014, with magnitudes from M 4.5 to 7.2. 
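The aftershock seismic moment quoted above for the Nepal event can be cross-checked against moment magnitude with the standard Hanks–Kanamori relation; this conversion is a generic sketch, not part of the paper's method:

```python
import math

def moment_magnitude(m0_dyn_cm):
    """Hanks & Kanamori (1979): Mw = (2/3) * log10(M0) - 10.7, M0 in dyn·cm."""
    return (2.0 / 3.0) * math.log10(m0_dyn_cm) - 10.7

mw = moment_magnitude(8.26e25)  # aftershock moment from the abstract, ~Mw 6.6
```

The same formula with M0 in N·m uses the constant 6.07 instead of 10.7 (1 N·m = 10⁷ dyn·cm).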
We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity <span class="hlt">model</span>, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: <span class="hlt">earthquakes</span> nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than <span class="hlt">earthquakes</span> around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki <span class="hlt">earthquake</span>, specifically continuous strain records derived from triangulation of 137 high‐<span class="hlt">rate</span> Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain <span class="hlt">model</span> are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24993347','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24993347"><span>Induced <span class="hlt">earthquakes</span>. 
Sharp increase in central Oklahoma seismicity since 2008 induced by massive wastewater injection.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Keranen, K M; Weingarten, M; Abers, G A; Bekins, B A; Ge, S</p> <p>2014-07-25</p> <p>Unconventional oil and gas production provides a rapidly growing energy source; however, high-production states in the United States, such as Oklahoma, face sharply rising numbers of <span class="hlt">earthquakes</span>. Subsurface pressure data required to unequivocally link <span class="hlt">earthquakes</span> to wastewater injection are rarely accessible. Here we use seismicity and hydrogeological <span class="hlt">models</span> to show that fluid migration from high-<span class="hlt">rate</span> disposal wells in Oklahoma is potentially responsible for the largest swarm. <span class="hlt">Earthquake</span> hypocenters occur within disposal formations and upper basement, between 2- and 5-kilometer depth. The <span class="hlt">modeled</span> fluid pressure perturbation propagates throughout the same depth range and tracks <span class="hlt">earthquakes</span> to distances of 35 kilometers, with a triggering threshold of ~0.07 megapascals. Although thousands of disposal wells operate aseismically, four of the highest-<span class="hlt">rate</span> wells are capable of inducing 20% of 2008 to 2013 central U.S. seismicity. 
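The distance–time behavior of such an injection-driven pressure perturbation can be sketched with a textbook constant-rate point-source diffusion solution; the amplitude constant, diffusivity, and threshold below are hypothetical stand-ins, not the calibrated hydrogeological model of the study:

```python
import math

def delta_p(r_m, t_s, A=5.0e9, D=10.0):
    """Pressure rise (Pa) at distance r_m (m) and time t_s (s) from a
    constant-rate point source in a uniform aquifer:
        dp = (A / r) * erfc(r / (2 * sqrt(D * t)))
    A lumps injection rate, viscosity, and permeability (Pa·m); D is the
    hydraulic diffusivity (m^2/s). All values here are hypothetical."""
    return (A / r_m) * math.erfc(r_m / (2.0 * math.sqrt(D * t_s)))

YEAR = 365.25 * 24 * 3600.0
# Does the perturbation exceed a ~0.07 MPa triggering threshold 35 km away
# after five years of injection, under these assumed constants?
p35 = delta_p(35_000.0, 5.0 * YEAR)
triggered = p35 >= 0.07e6
```

The erfc factor makes the front's arrival time scale roughly as r²/D, which is why distant triggering appears only after sustained injection.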
Copyright © 2014, American Association for the Advancement of Science.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/946928','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/946928"><span>Ground motion <span class="hlt">modeling</span> of the 1906 San Francisco <span class="hlt">earthquake</span> II: Ground motion estimates for the 1906 <span class="hlt">earthquake</span> and scenario events</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Aagaard, B; Brocher, T; Dreger, D</p> <p>2007-02-09</p> <p>We estimate the ground motions produced by the 1906 San Francisco <span class="hlt">earthquake</span> 
making use of the recently developed Song et al. (2008) source <span class="hlt">model</span> that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity <span class="hlt">models</span>. Our estimates of the ground motions for the 1906 <span class="hlt">earthquake</span> are consistent across five ground-motion <span class="hlt">modeling</span> groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large <span class="hlt">earthquakes</span> on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 <span class="hlt">earthquake</span>. 
Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/of/2007/1072/','USGSPUBS'); return false;" href="https://pubs.usgs.gov/of/2007/1072/"><span>Comprehensive Areal <span class="hlt">Model</span> of <span class="hlt">Earthquake</span>-Induced Landslides: Technical Specification and User Guide</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Miles, Scott B.; Keefer, David K.</p> <p>2007-01-01</p> <p>This report describes the complete design of a comprehensive areal <span class="hlt">model</span> of earthquake-induced landslides (CAMEL), presenting the design process and technical specification of CAMEL. It also provides a guide to using the CAMEL source code and template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale <span class="hlt">model</span> of <span class="hlt">earthquake</span>-induced landslide hazard developed using fuzzy logic systems. 
CAMEL currently estimates areal landslide concentration (number of landslides per square kilometer) of six aggregated types of <span class="hlt">earthquake</span>-induced landslides - three types each for rock and soil.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70192304','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70192304"><span>Delayed seismicity <span class="hlt">rate</span> changes controlled by static stress transfer</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kroll, Kayla A.; Richards-Dinger, Keith B.; Dieterich, James H.; Cochran, Elizabeth S.</p> <p>2017-01-01</p> <p>On 15 June 2010, a Mw5.7 <span class="hlt">earthquake</span> occurred near Ocotillo, California, in the Yuha Desert. This event was the largest aftershock of the 4 April 2010 Mw7.2 El Mayor-Cucapah (EMC) <span class="hlt">earthquake</span> in this region. The EMC mainshock and subsequent Ocotillo aftershock provide an opportunity to test the Coulomb failure stress (CFS) hypothesis. We explore the spatiotemporal correlation between seismicity <span class="hlt">rate</span> changes and regions of positive and negative CFS change imparted by the Ocotillo event. Based on simple CFS calculations we divide the Yuha Desert into three subregions, one triggering zone and two stress shadow zones. We find the nominal triggering zone displays immediate triggering, one stress shadowed region experiences immediate quiescence, and the other nominal stress shadow undergoes an immediate <span class="hlt">rate</span> increase followed by a delayed shutdown. 
We quantitatively <span class="hlt">model</span> the spatiotemporal variation of <span class="hlt">earthquake</span> <span class="hlt">rates</span> by combining calculations of CFS change with the <span class="hlt">rate</span>-state <span class="hlt">earthquake</span> <span class="hlt">rate</span> formulation of Dieterich (1994), assuming that each subregion contains a mixture of nucleation sources that experienced a CFS change of differing signs. Our <span class="hlt">modeling</span> reproduces the observations, including the observed delay in the stress shadow effect in the third region following the Ocotillo aftershock. The delayed shadow effect occurs because of intrinsic differences in the amplitude of the <span class="hlt">rate</span> response to positive and negative stress changes and the time constants for return to background <span class="hlt">rates</span> for the two populations. We find that <span class="hlt">rate</span>-state <span class="hlt">models</span> of time-dependent <span class="hlt">earthquake</span> <span class="hlt">rates</span> are in good agreement with the observed <span class="hlt">rates</span> and thus explain the complex spatiotemporal patterns of seismicity.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JGRB..122.7951K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JGRB..122.7951K"><span>Delayed Seismicity <span class="hlt">Rate</span> Changes Controlled by Static Stress Transfer</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kroll, Kayla A.; Richards-Dinger, Keith B.; Dieterich, James H.; Cochran, Elizabeth S.</p> <p>2017-10-01</p> <p>On 15 June 2010, a Mw5.7 <span class="hlt">earthquake</span> occurred near Ocotillo, California, in the Yuha Desert. 
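The mixed-population idea above can be sketched by summing Dieterich-type rate responses of opposite stress-change sign; the subpopulation fractions and frictional parameters below are illustrative, not the values fit in the study:

```python
import math

def dieterich_rate(t, r0, dtau, a_sigma=0.01, t_a=10.0):
    """Dieterich (1994) seismicity rate at time t (yr) after a stress step
    dtau (MPa) applied to a subpopulation with background rate r0."""
    gamma = (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a) + 1.0
    return r0 / gamma

def mixed_rate(t, f_pos=0.3, dtau=0.05):
    """Region mixing a fraction f_pos of sources stressed by +dtau with
    (1 - f_pos) stressed by -dtau; total background rate normalized to 1."""
    return (dieterich_rate(t, f_pos, +dtau)
            + dieterich_rate(t, 1.0 - f_pos, -dtau))

r_early, r_mid, r_late = mixed_rate(0.1), mixed_rate(10.0), mixed_rate(200.0)
# Immediate rate increase, then a delayed drop below background (the "delayed
# shutdown"), and eventual recovery to the background rate.
```

The delay arises because the positive-step transient decays on the aftershock timescale while the negative-step population recovers more slowly, so the shadow only dominates once the triggered excess has died away.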
This event was the largest aftershock of the 4 April 2010 Mw7.2 El Mayor-Cucapah (EMC) <span class="hlt">earthquake</span> in this region. The EMC mainshock and subsequent Ocotillo aftershock provide an opportunity to test the Coulomb failure stress (CFS) hypothesis. We explore the spatiotemporal correlation between seismicity <span class="hlt">rate</span> changes and regions of positive and negative CFS change imparted by the Ocotillo event. Based on simple CFS calculations we divide the Yuha Desert into three subregions, one triggering zone and two stress shadow zones. We find the nominal triggering zone displays immediate triggering, one stress shadowed region experiences immediate quiescence, and the other nominal stress shadow undergoes an immediate <span class="hlt">rate</span> increase followed by a delayed shutdown. We quantitatively <span class="hlt">model</span> the spatiotemporal variation of <span class="hlt">earthquake</span> <span class="hlt">rates</span> by combining calculations of CFS change with the <span class="hlt">rate</span>-state <span class="hlt">earthquake</span> <span class="hlt">rate</span> formulation of Dieterich (1994), assuming that each subregion contains a mixture of nucleation sources that experienced a CFS change of differing signs. Our <span class="hlt">modeling</span> reproduces the observations, including the observed delay in the stress shadow effect in the third region following the Ocotillo aftershock. The delayed shadow effect occurs because of intrinsic differences in the amplitude of the <span class="hlt">rate</span> response to positive and negative stress changes and the time constants for return to background <span class="hlt">rates</span> for the two populations. 
We find that <span class="hlt">rate</span>-state <span class="hlt">models</span> of time-dependent <span class="hlt">earthquake</span> <span class="hlt">rates</span> are in good agreement with the observed <span class="hlt">rates</span> and thus explain the complex spatiotemporal patterns of seismicity.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012Tectp.524..155Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012Tectp.524..155Z"><span>Scoring annual <span class="hlt">earthquake</span> predictions in China</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhuang, Jiancang; Jiang, Changsheng</p> <p>2012-02-01</p> <p>The Annual Consultation Meeting on <span class="hlt">Earthquake</span> Tendency in China is held by the China <span class="hlt">Earthquake</span> Administration (CEA) in order to provide one-year <span class="hlt">earthquake</span> predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest <span class="hlt">earthquake</span> expected during the next year. Evaluating the performance of these <span class="hlt">earthquake</span> predictions is rather difficult, especially for regions that are of no concern, because they are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these <span class="hlt">earthquake</span> predictions. Based on a reference <span class="hlt">model</span>, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. 
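The gambling-score bookkeeping can be sketched minimally, assuming a single reference probability per prediction (the paper's actual reference model is a spatially inhomogeneous Poisson model with Gutenberg-Richter magnitudes, so these probabilities are hypothetical):

```python
def gambling_score(bets):
    """Score a list of (p_ref, success) predictions against a reference model.

    A successful prediction earns the reference-model odds (1 - p_ref) / p_ref;
    a failed one loses 1. Riskier bets (small p_ref) pay more, so merely alarming
    regions the reference model already deems likely earns little."""
    score = 0.0
    for p_ref, success in bets:
        score += (1.0 - p_ref) / p_ref if success else -1.0
    return score

# Three hypothetical annual predictions with reference probabilities 0.1,
# 0.5, and 0.25; the first and third verify, the second does not.
s = gambling_score([(0.1, True), (0.5, False), (0.25, True)])
```

A predictor with no skill beyond the reference model has an expected score of zero, which is what makes the score a usable significance test.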
Using the Poisson <span class="hlt">model</span>, which is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for <span class="hlt">earthquake</span> magnitudes as the reference <span class="hlt">model</span>, we evaluate the CEA predictions based on (1) a partial score for evaluating whether issuing the alarmed regions is based on information that differs from the reference <span class="hlt">model</span> (knowledge of average seismicity level) and (2) a complete score that evaluates whether the overall performance of the prediction is better than the reference <span class="hlt">model</span>. The predictions made by the Annual Consultation Meetings on <span class="hlt">Earthquake</span> Tendency from 1990 to 2003 are found to include significant precursory information, but the overall performance is close to that of the reference <span class="hlt">model</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70032137','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70032137"><span>Ground-motion <span class="hlt">modeling</span> of the 1906 San Francisco <span class="hlt">Earthquake</span>, part II: Ground-motion estimates for the 1906 <span class="hlt">earthquake</span> and scenario events</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.</p> <p>2008-01-01</p> <p>We estimate the ground motions produced by the 1906 San Francisco <span class="hlt">earthquake</span>, making use of the recently developed Song et al. 
(2008) source <span class="hlt">model</span> that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity <span class="hlt">models</span>. Our estimates of the ground motions for the 1906 <span class="hlt">earthquake</span> are consistent across five ground-motion <span class="hlt">modeling</span> groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large <span class="hlt">earthquakes</span> on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 <span class="hlt">earthquake</span>. 
Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.213..676B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.213..676B"><span>Seismic quiescence in a frictional <span class="hlt">earthquake</span> <span class="hlt">model</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Braun, Oleg M.; Peyrard, Michel</p> <p>2018-04-01</p> <p>We investigate the origin of seismic quiescence with a generalized version of the Burridge-Knopoff <span class="hlt">model</span> for <span class="hlt">earthquakes</span> and show that it can be generated by a multipeaked probability distribution of the thresholds at which contacts break. Such a distribution is not assumed a priori but naturally results from the aging of the contacts. We show that the <span class="hlt">model</span> can exhibit quiescence as well as enhanced foreshock activity, depending on the value of some parameters. 
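A minimal spring-block sketch in the Burridge-Knopoff spirit helps make the model concrete. This is a simple 1-D cellular-automaton reduction with randomly drawn breaking thresholds; the contact aging and multipeaked threshold distribution central to the generalized model above are deliberately omitted, so this is only a baseline illustration:

```python
import random

def simulate(n=50, steps=200, kc=0.4, seed=1):
    """Minimal 1-D spring-block (Burridge-Knopoff-style) automaton.

    n     -- number of blocks on the fault
    steps -- number of loading cycles (each cycle triggers one avalanche)
    kc    -- fraction of a slipping block's stress passed to each neighbor
             (kc < 0.5 makes the system dissipative, so avalanches end)
    Returns the list of avalanche (event) sizes.
    """
    rng = random.Random(seed)
    stress = [rng.uniform(0.0, 1.0) for _ in range(n)]
    # Random breaking thresholds for the contacts; in the generalized
    # model these evolve with contact age and can become multipeaked.
    threshold = [1.0 + rng.uniform(0.0, 0.2) for _ in range(n)]
    sizes = []
    for _ in range(steps):
        # Load uniformly until the weakest contact reaches its threshold.
        gap = min(threshold[i] - stress[i] for i in range(n))
        stress = [s + gap for s in stress]
        # Avalanche: a breaking block drops to zero stress and hands
        # kc * (released stress) to each neighbor, which may break in turn.
        active = [i for i in range(n) if stress[i] >= threshold[i]]
        size = 0
        while active:
            i = active.pop()
            if stress[i] < threshold[i]:
                continue
            released = stress[i]
            stress[i] = 0.0
            size += 1
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    stress[j] += kc * released
                    if stress[j] >= threshold[j]:
                        active.append(j)
        sizes.append(size)
    return sizes
```

The avalanche-size list plays the role of an earthquake catalog; studying how its statistics change when the threshold distribution develops several peaks is the kind of experiment the abstract describes.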
Such a multipeaked threshold distribution provides a generic understanding for seismic quiescence, which encompasses earlier specific explanations and could provide a pathway for a classification of faults.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70029548','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70029548"><span>Time-dependent <span class="hlt">earthquake</span> probabilities</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.</p> <p>2005-01-01</p> <p>We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large <span class="hlt">earthquake</span>, particularly approaches that account for static stress perturbations to tectonic loading as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast both in a framework based on a simple, generalized <span class="hlt">rate</span>-change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between <span class="hlt">models</span> of seismicity <span class="hlt">rate</span> changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, in which the notion of failure <span class="hlt">rate</span> corresponds to successive failures of different members of a population of faults. The latter application requires specification of a probability distribution (probability density function, or PDF) that describes some population of potential recurrence times. 
This PDF may reflect our imperfect knowledge of when past <span class="hlt">earthquakes</span> have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault <span class="hlt">models</span> that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch <span class="hlt">models</span> differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault <span class="hlt">models</span>, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these <span class="hlt">models</span>. Copyright 2005 by the American Geophysical Union.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S14B..06C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S14B..06C"><span>Rapid <span class="hlt">modeling</span> of complex multi-fault ruptures with simplistic <span class="hlt">models</span> from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Crowell, B.; Melgar, D.</p> <p>2017-12-01</p> <p>The 2016 Mw 7.8 Kaikoura <span class="hlt">earthquake</span> is one of the most complex <span class="hlt">earthquakes</span> in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. 
The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform <span class="hlt">earthquake</span> hazards <span class="hlt">models</span> in the future. However, events like Kaikoura beg the question of how well (or how poorly) such <span class="hlt">earthquakes</span> can be <span class="hlt">modeled</span> automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura <span class="hlt">earthquake</span> with the G-FAST early warning module. We first perform simple point source <span class="hlt">models</span> of the <span class="hlt">earthquake</span> using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward <span class="hlt">model</span> near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not <span class="hlt">earthquake</span> source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. 
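The peak-ground-displacement (PGD) magnitude step in such rapid analyses can be sketched as a direct inversion of a scaling law of the form log10 PGD = A + B·Mw + C·Mw·log10 R. The coefficient values below are illustrative assumptions in the style of published PGD scaling studies (e.g., Crowell et al.), not G-FAST's operational coefficients:

```python
import math

# Illustrative coefficients for log10(PGD_cm) = A + B*Mw + C*Mw*log10(R_km);
# operational systems fit their own values to regional GPS data.
A, B, C = -4.434, 1.047, -0.138

def mw_from_pgd(observations):
    """Estimate Mw from (pgd_cm, hypocentral_distance_km) pairs.

    Each station yields one magnitude estimate by solving the scaling
    law for Mw; the per-station estimates are then averaged.
    """
    estimates = []
    for pgd, r in observations:
        slope = B + C * math.log10(r)
        estimates.append((math.log10(pgd) - A) / slope)
    return sum(estimates) / len(estimates)
```

With these coefficients, a Mw 7.8 event predicts roughly 38 cm of PGD at 100 km, and feeding such synthetic observations back through `mw_from_pgd` recovers the magnitude exactly, which is the self-consistency one would verify before trusting an inversion like this.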
We conclude that even though our <span class="hlt">models</span> for the Kaikoura <span class="hlt">earthquake</span> are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009EGUGA..1113961A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009EGUGA..1113961A"><span>GEM1: First-year <span class="hlt">modeling</span> and IT activities for the Global <span class="hlt">Earthquake</span> <span class="hlt">Model</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anderson, G.; Giardini, D.; Wiemer, S.</p> <p>2009-04-01</p> <p>GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for <span class="hlt">modeling</span> and communicating <span class="hlt">earthquake</span> risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future <span class="hlt">earthquakes</span>. GEM will provide a unified set of seismic hazard, risk, and loss <span class="hlt">modeling</span> tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified <span class="hlt">earthquake</span> catalog, fault database, and ground motion prediction equations. 
To ensure broad representation and community acceptance, GEM will include local knowledge in all <span class="hlt">modeling</span> activities, incorporate existing detailed <span class="hlt">models</span> where possible, and independently test all resulting tools and <span class="hlt">models</span>. When completed in five years, GEM will have a versatile, openly accessible <span class="hlt">modeling</span> environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss <span class="hlt">models</span> to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. 
Some of GEM's global</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70031901','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70031901"><span>Implications of the 26 December 2004 Sumatra-Andaman <span class="hlt">earthquake</span> on tsunami forecast and assessment <span class="hlt">models</span> for great subduction-zone <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Geist, Eric L.; Titov, Vasily V.; Arcas, Diego; Pollitz, Fred F.; Bilek, Susan L.</p> <p>2007-01-01</p> <p>Results from different tsunami forecasting and hazard assessment <span class="hlt">models</span> are compared with observed tsunami wave heights from the 26 December 2004 Indian Ocean tsunami. Forecast <span class="hlt">models</span> are based on initial <span class="hlt">earthquake</span> information and are used to estimate tsunami wave heights during propagation. An empirical forecast relationship based only on seismic moment provides a close estimate to the observed mean regional and maximum local tsunami runup heights for the 2004 Indian Ocean tsunami but underestimates mean regional tsunami heights at azimuths in line with the tsunami beaming pattern (e.g., Sri Lanka, Thailand). Standard forecast <span class="hlt">models</span> developed from subfault discretization of <span class="hlt">earthquake</span> rupture, in which deep-ocean sea level observations are used to constrain slip, are also tested. Forecast <span class="hlt">models</span> of this type use tsunami time-series measurements at points in the deep ocean. As a proxy for the 2004 Indian Ocean tsunami, a transect of deep-ocean tsunami amplitudes recorded by satellite altimetry is used to constrain slip along four subfaults of the M >9 Sumatra–Andaman <span class="hlt">earthquake</span>. 
This proxy <span class="hlt">model</span> performs well in comparison to observed tsunami wave heights, travel times, and inundation patterns at Banda Aceh. Hypothetical tsunami hazard assessment <span class="hlt">models</span> based on end-member estimates for average slip and rupture length (Mw 9.0–9.3) are compared with tsunami observations. Using average slip (low end member) and rupture length (high end member) (Mw 9.14) consistent with many seismic, geodetic, and tsunami inversions adequately estimates tsunami runup in most regions, except the extreme runup in the western Aceh province. The high slip that occurred in the southern part of the rupture zone linked to runup in this location is a larger fluctuation than expected from standard stochastic slip <span class="hlt">models</span>. In addition, excess moment release (∼9%) deduced from geodetic studies in comparison to seismic moment estimates may generate additional tsunami energy, if the
The global <span class="hlt">rate</span> of M≥8 <span class="hlt">earthquakes</span> has been at a record high roughly since 2004, but <span class="hlt">rates</span> have been almost as high before, and the <span class="hlt">rate</span> of smaller <span class="hlt">earthquakes</span> is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences--if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global <span class="hlt">rate</span> of large events. Together these facts suggest that the global risk of large <span class="hlt">earthquakes</span> is no higher today than it has been in the past.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3271898','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3271898"><span>Global risk of big <span class="hlt">earthquakes</span> has not recently increased</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Shearer, Peter M.; Stark, Philip B.</p> <p>2012-01-01</p> <p>The recent elevated <span class="hlt">rate</span> of large <span class="hlt">earthquakes</span> has fueled concern that the underlying global <span class="hlt">rate</span> of <span class="hlt">earthquake</span> activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. 
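The "not distinguishable from a homogeneous Poisson process" conclusion rests on tests of the kind sketched below: a dispersion (variance-to-mean) check on annual event counts. The catalog here is invented for illustration, and this is only one of the several statistical tests the authors apply:

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of annual earthquake counts.

    For a homogeneous Poisson process the index is ~1; values well above 1
    indicate temporal clustering, values well below 1 indicate
    quasi-periodic behavior.
    """
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

# Hypothetical declustered annual counts of large events (illustrative only)
counts = [1, 0, 2, 1, 0, 1, 3, 0, 1, 2, 0, 1, 1, 0, 2]
```

A formal version of the test compares (n − 1) times the index against a chi-squared distribution with n − 1 degrees of freedom; an index close to 1, as for the hypothetical counts above, is consistent with the Poisson null hypothesis.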
We examine the timing of large (magnitude M≥7) <span class="hlt">earthquakes</span> from 1900 to the present, after removing local clustering related to aftershocks. The global <span class="hlt">rate</span> of M≥8 <span class="hlt">earthquakes</span> has been at a record high roughly since 2004, but <span class="hlt">rates</span> have been almost as high before, and the <span class="hlt">rate</span> of smaller <span class="hlt">earthquakes</span> is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences—if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global <span class="hlt">rate</span> of large events. Together these facts suggest that the global risk of large <span class="hlt">earthquakes</span> is no higher today than it has been in the past. PMID:22184228</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JGRB..121.3609D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JGRB..121.3609D"><span>Collective properties of injection-induced <span class="hlt">earthquake</span> sequences: 1. <span class="hlt">Model</span> description and directivity bias</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dempsey, David; Suckale, Jenny</p> <p>2016-05-01</p> <p>Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. <span class="hlt">Modeling</span> plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. 
Here we study the collective properties of induced <span class="hlt">earthquake</span> sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a <span class="hlt">model</span> that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers <span class="hlt">earthquakes</span>. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the <span class="hlt">earthquake</span> hypocenter. Under tectonic loading conditions, our <span class="hlt">model</span> exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers <span class="hlt">earthquakes</span>, the <span class="hlt">modeled</span> directivity bias is small and may be too weak for practical detection. 
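The directivity ratio discussed above can be written down directly: with rupture lengths L1 and L2 measured from the hypocenter to each crack tip, D = (L1 − L2)/(L1 + L2) is 0 for perfectly bilateral rupture and ±1 for perfectly unilateral rupture. A sketch of this standard definition (the paper's exact normalization and classification threshold may differ):

```python
def directivity_ratio(l1, l2):
    """Directivity ratio for rupture lengths l1, l2 (km) measured from
    the hypocenter toward each rupture tip.  |D| -> 1 is unilateral,
    D -> 0 is bilateral; the sign records which tip dominated."""
    return (l1 - l2) / (l1 + l2)

def classify(d, threshold=0.5):
    """Crude two-way label; the threshold is an illustrative choice."""
    return "unilateral" if abs(d) > threshold else "bilateral"
```

A catalog-level directivity bias, as in the injection-on-fault scenarios above, would appear as an asymmetric distribution of signed D values relative to the direction of the injection well.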
For two hypothetical injection scenarios, we estimate the number of <span class="hlt">earthquake</span> observations required to detect directivity bias.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.nicee.org/wcee/','USGSPUBS'); return false;" href="http://www.nicee.org/wcee/"><span>Demand surge following <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Olsen, Anna H.</p> <p>2012-01-01</p> <p>Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a <span class="hlt">model</span> for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The <span class="hlt">model</span> showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to <span class="hlt">earthquakes</span>. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to <span class="hlt">earthquakes</span> are the exclusion of insurance coverage for <span class="hlt">earthquake</span> damage and possible concurrent causation of damage from an <span class="hlt">earthquake</span> followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. 
small-scale <span class="hlt">earthquakes</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005AGUFM.S23A0223D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005AGUFM.S23A0223D"><span>Building Loss Estimation for <span class="hlt">Earthquake</span> Insurance Pricing</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.</p> <p>2005-12-01</p> <p>After the 1999 <span class="hlt">earthquakes</span> in Turkey, several changes in the insurance sector took place. A compulsory <span class="hlt">earthquake</span> insurance scheme was introduced by the government. The reinsurance companies increased their <span class="hlt">rates</span>. Some even suspended operations in the market. And, most important, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an <span class="hlt">earthquake</span> loss assessment methodology that can be used for insurance pricing and portfolio loss estimation that is based on our work experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent <span class="hlt">earthquake</span> hazard, regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey and estimations of building replacement costs for different damage levels. Probable maximum and average annualized losses are estimated as the result of analysis. There is a two-level <span class="hlt">earthquake</span> insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory <span class="hlt">earthquake</span> insurance scheme and the private <span class="hlt">earthquake</span> insurance system. 
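The "probable maximum and average annualized losses" of such a portfolio analysis reduce to simple operations over a stochastic event set; a sketch with invented event rates and losses (the actual methodology layers hazard, vulnerability, and replacement-cost models underneath these numbers):

```python
def average_annual_loss(event_set):
    """AAL = sum of (annual occurrence rate x loss) over scenario events."""
    return sum(rate * loss for rate, loss in event_set)

def probable_maximum_loss(event_set, return_period):
    """Loss level whose annual exceedance rate first reaches
    1/return_period, read off the loss exceedance curve built by
    accumulating event rates from the largest loss downward."""
    threshold = 1.0 / return_period
    exceed = 0.0
    for loss in sorted({l for _, l in event_set}, reverse=True):
        exceed += sum(r for r, l in event_set if l == loss)
        if exceed >= threshold:
            return loss
    return 0.0

# Hypothetical two-event set: a frequent moderate loss and a rare large one
events = [(0.01, 100.0), (0.001, 1000.0)]
```

With these numbers the AAL is 2.0 (each event contributes 1.0 per year), the 500-year loss is 100, and only at return periods of 1000 years or more does the rare 1000-unit scenario set the PML.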
To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent <span class="hlt">earthquake</span> hazard <span class="hlt">model</span> that portrays the increased <span class="hlt">earthquake</span> expectancies in Istanbul is used. Intensity- and spectral-displacement-based vulnerability relationships are incorporated in the analysis. In particular we look at the uncertainty in the loss estimations that arise from the vulnerability relationships, and at the effect of the implemented repair cost ratios.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/20778699-nonextensive-models-earthquakes','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20778699-nonextensive-models-earthquakes"><span>Nonextensive <span class="hlt">models</span> for <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Silva, R.; Franca, G.S.; Vilar, C.S.</p> <p>2006-02-15</p> <p>We have revisited the fragment-asperity interaction <span class="hlt">model</span> recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the <span class="hlt">earthquake</span> energy and the size of fragment ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. 
We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofisica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70026300','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70026300"><span>Slip <span class="hlt">rate</span> and <span class="hlt">earthquake</span> recurrence along the central Septentrional fault, North American-Caribbean plate boundary, Dominican Republic</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Prentice, C.S.; Mann, P.; Pena, L.R.; Burr, G.</p> <p>2003-01-01</p> <p>The Septentrional fault zone (SFZ) is the major North American-Caribbean, strike-slip, plate boundary fault at the longitude of eastern Hispaniola. The SFZ traverses the densely populated Cibao Valley of the Dominican Republic, forming a prominent scarp in alluvium. Our studies at four sites along the central SFZ are aimed at quantifying the late Quaternary behavior of this structure to better understand the seismic hazard it represents for the northeastern Caribbean. Our investigations of excavations at sites near Rio Cenovi show that the most recent ground-rupturing <span class="hlt">earthquake</span> along this fault in the north central Dominican Republic occurred between A.D. 1040 and A.D. 1230, and involved a minimum of ∼4 m of left-lateral slip and 2.3 m of normal dip slip at that site. 
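Paired with a fault slip rate, a characteristic slip-per-event like the ∼4 m minimum above implies a recurrence interval via the simple budget T = slip-per-event / slip-rate. A sketch using the study's reported 6-12 mm/yr slip-rate range (since the 4 m offset is a minimum, these intervals are lower bounds):

```python
def recurrence_years(slip_per_event_m, slip_rate_mm_yr):
    """Mean recurrence interval (years) implied by characteristic
    coseismic slip and long-term fault slip rate."""
    return slip_per_event_m * 1000.0 / slip_rate_mm_yr

# Bracket the estimate with the 6-12 mm/yr range and >= 4 m of slip
fast_end = recurrence_years(4.0, 12.0)  # ~333 yr at the fast end
slow_end = recurrence_years(4.0, 6.0)   # ~667 yr at the slow end
```

Such slip-budget estimates are a standard cross-check on recurrence intervals obtained independently from dated paleoseismic horizons.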
Our studies of offset stream terraces at two locations, Rio Juan Lopez and Rio Licey, provide late Holocene slip <span class="hlt">rate</span> estimates of 6-9 mm/yr and a maximum of 11-12 mm/yr, respectively, across the Septentrional fault. Combining these results gives a best estimate of 6-12 mm/yr for the slip <span class="hlt">rate</span> across the SFZ. Three excavations, two near Tenares and one at the Rio Licey site, yielded evidence for the occurrence of earlier prehistoric <span class="hlt">earthquakes</span>. Dates of strata associated with the penultimate event suggest that it occurred post-A.D. 30, giving a recurrence interval of 800-1200 years. These studies indicate that the SFZ has likely accumulated elastic strain sufficient to generate a major <span class="hlt">earthquake</span> during the more than 800 years since it last slipped and should be considered likely to produce a destructive future <span class="hlt">earthquake</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70192291','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70192291"><span>A comparison among observations and <span class="hlt">earthquake</span> simulator results for the allcal2 California fault <span class="hlt">model</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak</p> <p>2012-01-01</p> <p>In order to understand <span class="hlt">earthquake</span> hazards we would ideally have a statistical description of <span class="hlt">earthquakes</span> for tens of thousands of years. 
Unfortunately, the ∼100‐year instrumental, several-hundred-year historical, and few-thousand-year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based <span class="hlt">earthquake</span> simulators can generate arbitrarily long histories of <span class="hlt">earthquakes</span>; thus they can provide a statistically meaningful history of simulated <span class="hlt">earthquakes</span>. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators’ realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip <span class="hlt">rates</span>, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four <span class="hlt">earthquake</span> simulators that are described elsewhere in this issue of Seismological Research Letters. 
The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFMNH31B1352G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFMNH31B1352G"><span><span class="hlt">Earthquake</span> <span class="hlt">Model</span> of the Middle East (EMME) Project: Active Fault Database for the Middle East Region</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gülen, L.; Wp2 Team</p> <p>2010-12-01</p> <p>The <span class="hlt">Earthquake</span> <span class="hlt">Model</span> of the Middle East (EMME) Project is a regional project of the umbrella GEM (Global <span class="hlt">Earthquake</span> <span class="hlt">Model</span>) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey serves as a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major <span class="hlt">earthquakes</span> have occurred in this region over the years, causing casualties in the millions. The EMME project will use a PSHA approach, and the existing source <span class="hlt">models</span> will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project from previous ones will be its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that will permit continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format.
We are developing a database of fault parameters for active faults that are capable of generating <span class="hlt">earthquakes</span> above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and <span class="hlt">rates</span> of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the pdf files of the relevant papers and reports is also being prepared. Another task of the WP-2 of the EMME project is to prepare</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.S33C4538S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.S33C4538S"><span><span class="hlt">Earthquake</span> Early Warning Beta Users: Java, <span class="hlt">Modeling</span>, and Mobile Apps</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.</p> <p>2014-12-01</p> <p><span class="hlt">Earthquake</span> Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an <span class="hlt">earthquake</span>. A demonstration <span class="hlt">earthquake</span> early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive <span class="hlt">earthquake</span> information very rapidly in real time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern: which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire station doors; or situational awareness.
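The seconds-to-tens-of-seconds figure above comes from a simple budget: the usable warning at a site is the S-wave travel time minus the time needed to detect the event and deliver the alert. A rough sketch; the wave speed and latency values are illustrative assumptions, not ShakeAlert parameters:

```python
# Warning-time budget for earthquake early warning: warning = S-wave
# travel time - (detection time + alert delivery time). All numeric
# defaults below are assumed, illustrative values.

def warning_time_s(epicentral_km: float, vs_km_s: float = 3.5,
                   detect_s: float = 5.0, delivery_s: float = 2.0) -> float:
    """Seconds of warning before S-wave arrival (negative means no warning)."""
    return epicentral_km / vs_km_s - (detect_s + delivery_s)

for d_km in (20, 50, 100, 200):
    print(f"{d_km:3d} km epicentral distance: {warning_time_s(d_km):5.1f} s of warning")
```

Sites close to the epicenter sit inside the "blind zone" where the S wave outruns the alert, which is why automated actions matter more than human reaction there.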
Users are beginning to determine which policy and technological changes may need to be enacted, and what funding is required to implement their automated controls. The use of <span class="hlt">models</span> and mobile apps is beginning to augment the basic Java desktop applet. <span class="hlt">Modeling</span> allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017GeoJI.211..335K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017GeoJI.211..335K"><span><span class="hlt">Earthquake</span> number forecasts testing</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kagan, Yan Y.</p> <p>2017-10-01</p> <p>We study the distributions of <span class="hlt">earthquake</span> numbers in two global <span class="hlt">earthquake</span> catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity <span class="hlt">rate</span>, tested by the Collaboratory for Study of <span class="hlt">Earthquake</span> Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the <span class="hlt">earthquake</span> number distribution is incorrect, especially for catalogues with a lower magnitude threshold.
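A quick way to see why the Poisson assumption fails for such counts: for a Poisson process the variance of interval counts equals the mean, so a sample variance well above the sample mean (overdispersion) argues for a two-parameter alternative. A method-of-moments sketch on a made-up list of per-interval earthquake counts:

```python
# Overdispersion check: Poisson requires variance == mean; a negative
# binomial absorbs excess variance via its size parameter r, where
# variance = mu + mu^2/r, so r = mu^2 / (var - mu). The counts below
# are hypothetical, not from either catalogue in the abstract.

def nbd_moment_fit(counts):
    """Return (mean, sample variance, method-of-moments NBD size r)."""
    n = len(counts)
    mu = sum(counts) / n
    var = sum((c - mu) ** 2 for c in counts) / (n - 1)
    r = mu ** 2 / (var - mu) if var > mu else float("inf")
    return mu, var, r

counts = [3, 0, 1, 12, 2, 0, 7, 1, 0, 4]   # hypothetical per-interval counts
mu, var, r = nbd_moment_fit(counts)
print(f"mean={mu:.1f} variance={var:.1f} overdispersed={var > mu} NBD size r={r:.2f}")
```

A clustered catalogue (aftershock sequences counted in short windows) typically shows variance several times the mean, exactly the regime where the Poisson-based number test misbehaves.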
In contrast to the one-parameter Poisson distribution so widely used to describe <span class="hlt">earthquake</span> occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of <span class="hlt">earthquake</span> numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large <span class="hlt">rate</span> values approaches the Gaussian law, therefore its skewness</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5493769','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5493769"><span>Understanding dynamic friction through spontaneously evolving laboratory <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rubino, V.; Rosakis, A. 
J.; Lapusta, N.</p> <p>2017-01-01</p> <p>Friction plays a key role in how ruptures unzip faults in the Earth’s crust and release waves that cause destructive shaking. Yet dynamic friction evolution is one of the biggest uncertainties in <span class="hlt">earthquake</span> science. Here we report on novel measurements of evolving local friction during spontaneously developing mini-<span class="hlt">earthquakes</span> in the laboratory, enabled by our ultrahigh-speed full-field imaging technique. The technique captures the evolution of displacements, velocities and stresses of dynamic ruptures, whose rupture speeds range from sub-Rayleigh to supershear. The observed friction has complex evolution, featuring initial velocity strengthening followed by substantial velocity weakening. Our measurements are consistent with <span class="hlt">rate</span>-and-state friction formulations supplemented with flash heating but not with widely used slip-weakening friction laws. This study develops a new approach for measuring the local evolution of dynamic friction and has important implications for understanding <span class="hlt">earthquake</span> hazard, since laws governing the frictional resistance of faults are vital ingredients in physically based predictive <span class="hlt">models</span> of the <span class="hlt">earthquake</span> source. PMID:28660876</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20030020856','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20030020856"><span>Crustal Deformation in Southcentral Alaska: The 1964 Prince William Sound <span class="hlt">Earthquake</span> Subduction Zone</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cohen, Steven C.; Freymueller, Jeffrey T.</p> <p>2003-01-01</p> <p>This article, for Advances in Geophysics, is a summary of crustal deformation studies in southcentral Alaska.
In 1964, southcentral Alaska was struck by the largest <span class="hlt">earthquake</span> (moment magnitude 9.2) to occur in historical times in North America and the second largest <span class="hlt">earthquake</span> to occur in the world during the past century. Conventional and space-based geodetic measurements have revealed a complex temporal-spatial pattern of crustal movement. Numerical <span class="hlt">models</span> suggest that ongoing convergence between the North America and Pacific Plates, viscoelastic rebound, aseismic creep along the tectonic plate interface, and variable plate coupling all play important roles in controlling both the surface and subsurface movements. The geodetic data sets include tide-gauge observations that in some cases provide records back to the decades preceding the <span class="hlt">earthquake</span>, leveling data that span a few decades around the <span class="hlt">earthquake</span>, VLBI data from the late 1980s, and GPS data since the mid-1990s. Geologic data provide additional estimates of vertical movements and a chronology of large seismic events. Some of the important features revealed by the ensemble of studies reviewed in this paper include: (1) Crustal uplift in the region that subsided by up to 2 m at the time of the <span class="hlt">earthquake</span> is as much as 1 m since the <span class="hlt">earthquake</span>. In the Turnagain Arm and Kenai Peninsula regions of southcentral Alaska, uplift <span class="hlt">rates</span> in the immediate aftermath of the <span class="hlt">earthquake</span> reached 150 mm/yr, but this rapid uplift decayed rapidly after the first few years following the <span class="hlt">earthquake</span>. (2) At some other locales, notably those away from the middle of the coseismic rupture zone, postseismic uplift <span class="hlt">rates</span> were initially slower but the <span class="hlt">rates</span> decay over a longer time interval.
At Kodiak Island, for example, the uplift <span class="hlt">rates</span> have been decreasing at a <span class="hlt">rate</span> of about 7 mm/yr per decade. At yet other locations, the uplift <span class="hlt">rates</span> have shown little time dependence so far, but are thought not to be sustainable</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70195081','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70195081"><span>Dynamic rupture <span class="hlt">modeling</span> of the M7.2 2010 El Mayor-Cucapah <span class="hlt">earthquake</span>: Comparison with a geodetic <span class="hlt">model</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kyriakopoulos, Christos; Oglesby, David D.; Funning, Gareth J.; Ryan, Kenneth</p> <p>2017-01-01</p> <p>The 2010 Mw 7.2 El Mayor-Cucapah <span class="hlt">earthquake</span> is the largest event recorded in the broader Southern California-Baja California region in the last 18 years. Here we analyze the primary features of this event using dynamic rupture simulations based on a multifault interface, and we later compare our results with space geodetic <span class="hlt">models</span>. Our results show that, starting from homogeneous prestress conditions, slip heterogeneity can be achieved as a result of a variable dip angle along strike and the modulation imposed by stepover segments. We also considered effects from a topographic free surface and find that, although this does not produce significant first-order effects for this <span class="hlt">earthquake</span>, even a low topographic dome such as the Cucapah range can affect the rupture front pattern and fault slip <span class="hlt">rate</span>.
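The space geodetic models referred to above are typically obtained by linear inversion: surface displacements d are related to fault-patch slip m through a matrix of elastic Green's functions, d = G m, solved in a damped least-squares sense. A toy two-patch sketch; the sensitivity matrix G is made up for illustration, not a real elastic kernel:

```python
# Damped linear slip inversion, d = G m, solved via the Tikhonov normal
# equations (G^T G + lam^2 I) m = G^T d in closed form for 2 unknowns.
# G, the true slip, and the damping weight are all assumed toy values.

def damped_lsq_2(G, d, lam=0.1):
    """Solve (G^T G + lam^2 I) m = G^T d for two slip values."""
    a = sum(g[0] * g[0] for g in G) + lam ** 2
    b = sum(g[0] * g[1] for g in G)
    c = sum(g[1] * g[1] for g in G) + lam ** 2
    r0 = sum(g[0] * di for g, di in zip(G, d))
    r1 = sum(g[1] * di for g, di in zip(G, d))
    det = a * c - b * b
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)

G = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]   # 3 stations x 2 patches (assumed)
true_slip = (2.0, 1.0)                      # metres, synthetic truth
d = [g[0] * true_slip[0] + g[1] * true_slip[1] for g in G]
m = damped_lsq_2(G, d)
print("recovered slip (m): (%.2f, %.2f)" % m)
```

The damping term trades a small bias for stability, which matters once real (noisy, incomplete) InSAR data replace the synthetic displacements.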
Finally, we inverted available interferometric synthetic aperture radar data, using the same geometry as the dynamic rupture <span class="hlt">model</span>, and retrieved the space geodetic slip distribution that serves to constrain the dynamic rupture <span class="hlt">models</span>. The one-to-one comparison of the final fault slip pattern generated with dynamic rupture <span class="hlt">models</span> and the space geodetic inversion shows good agreement. Our results lead us to the following conclusion: in a possible multifault rupture scenario, and if we have first-order geometry constraints, dynamic rupture <span class="hlt">models</span> can be very efficient in predicting large-scale slip heterogeneities that are important for the correct assessment of seismic hazard and the magnitude of future events. Our work contributes to understanding the complex nature of multifault systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012SPIE.8345E..0QH','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012SPIE.8345E..0QH"><span>Experimental validation of finite element <span class="hlt">model</span> analysis of a steel frame in simulated post-<span class="hlt">earthquake</span> fire environments</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huang, Ying; Bevans, W. J.; Xiao, Hai; Zhou, Zhi; Chen, Genda</p> <p>2012-04-01</p> <p>During or after an <span class="hlt">earthquake</span> event, building systems often experience large strains due to shaking effects, as observed during recent <span class="hlt">earthquakes</span>, causing permanent inelastic deformation.
In addition to the inelastic deformation induced by the <span class="hlt">earthquake</span> effect, post-<span class="hlt">earthquake</span> fires associated with short circuits in electrical systems and leakage from gas devices can further strain the already damaged structures, potentially leading to a progressive collapse of buildings. Under these harsh environments, measurements on the involved building by various sensors can provide only limited structural health information. Finite element <span class="hlt">model</span> analysis, on the other hand, if validated by predesigned experiments, can provide detailed structural behavior information for the entire structure. In this paper, a temperature-dependent nonlinear 3-D finite element <span class="hlt">model</span> (FEM) of a one-story steel frame is set up in ABAQUS based on the material properties of steel cited from EN 1993-1.2 and the AISC manuals. The FEM is validated by testing the <span class="hlt">modeled</span> steel frame in simulated post-<span class="hlt">earthquake</span> environments. Comparisons between the FEM analysis and the experimental results show that the FEM predicts the structural behavior of the steel frame in post-<span class="hlt">earthquake</span> fire conditions reasonably well.
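Temperature-dependent steel properties of the kind taken here from EN 1993-1-2 enter such models through tabulated reduction factors interpolated between temperatures. A sketch of that lookup for the yield-strength reduction factor k_y; the values below are the commonly reproduced Table 3.1 figures and should be verified against the standard before any design use:

```python
# Linear interpolation of the EN 1993-1-2 yield-strength reduction
# factor k_y for carbon steel vs. temperature. Table values reproduced
# from memory of the commonly cited Table 3.1 -- verify against the
# standard; this is an illustrative sketch, not design software.

TABLE = [(20, 1.00), (400, 1.00), (500, 0.78), (600, 0.47),
         (700, 0.23), (800, 0.11), (900, 0.06), (1000, 0.04), (1200, 0.00)]

def k_y(theta_c: float) -> float:
    """Yield-strength reduction factor at steel temperature theta_c (degC)."""
    if theta_c <= TABLE[0][0]:
        return TABLE[0][1]
    for (t0, k0), (t1, k1) in zip(TABLE, TABLE[1:]):
        if theta_c <= t1:
            return k0 + (k1 - k0) * (theta_c - t0) / (t1 - t0)
    return 0.0

print(f"k_y at 550 degC: {k_y(550):.3f}")
```

The steep drop between 400 and 700 degC is what makes post-earthquake fire so dangerous for already-yielded steel members.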
With experimental validation, FEM analysis can be used to continuously predict the behavior of critical structures in these harsh environments, better assisting fire fighters in their rescue efforts and helping to save fire victims.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PApGe.172.2305G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PApGe.172.2305G"><span>E-DECIDER: Using Earth Science Data and <span class="hlt">Modeling</span> Tools to Develop Decision Support for <span class="hlt">Earthquake</span> Disaster Response</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Glasscoe, Margaret T.; Wang, Jun; Pierce, Marlon E.; Yoder, Mark R.; Parker, Jay W.; Burl, Michael C.; Stough, Timothy M.; Granat, Robert A.; Donnellan, Andrea; Rundle, John B.; Ma, Yu; Bawden, Gerald W.; Yuen, Karen</p> <p>2015-08-01</p> <p><span class="hlt">Earthquake</span> Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new capabilities for decision making utilizing remote sensing data and <span class="hlt">modeling</span> software to provide decision support for <span class="hlt">earthquake</span> disaster management and response. E-DECIDER incorporates the <span class="hlt">earthquake</span> forecasting methodology and geophysical <span class="hlt">modeling</span> tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with <span class="hlt">modeling</span> and forecasting tools, allow us to provide both long-term planning information for disaster management decision makers and short-term information following <span class="hlt">earthquake</span> events (i.e., identifying areas where the greatest deformation and damage have occurred and where emergency services may need to be focused).
This in turn is delivered through standards-compliant web services for desktop and hand-held devices.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSeis..21.1001A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSeis..21.1001A"><span>Empirical <span class="hlt">models</span> for the prediction of ground motion duration for intraplate <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.</p> <p>2017-07-01</p> <p>Many empirical relationships for <span class="hlt">earthquake</span> ground motion duration were developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recorded interplate <span class="hlt">earthquakes</span> to represent intraplate <span class="hlt">earthquakes</span>. To the authors' knowledge, none of the existing relationships for the intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of <span class="hlt">earthquake</span> ground motion duration (i.e., significant and bracketed) with <span class="hlt">earthquake</span> magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled <span class="hlt">earthquake</span> ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km.
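Empirical duration relationships of this kind are commonly cast in a log-linear form, ln(D) = c0 + c1*M + c2*ln(R) + c3*S, with S a site-class indicator. A sketch of such a predictor; the coefficients are illustrative placeholders, not the fitted values of this study:

```python
import math

# Log-linear duration predictor: ln(D) = c0 + c1*M + c2*ln(R) + c3*S.
# The rock-site indicator adds a positive term, mirroring the abstract's
# finding that durations were predicted longer at rock sites. All
# coefficients are made-up placeholders.

def significant_duration_s(mag: float, hypo_km: float, rock_site: bool,
                           c=(-2.0, 0.6, 0.35, 0.15)) -> float:
    c0, c1, c2, c3 = c
    return math.exp(c0 + c1 * mag + c2 * math.log(hypo_km)
                    + c3 * (1.0 if rock_site else 0.0))

print(f"M5.5 at 100 km (rock): {significant_duration_s(5.5, 100, True):.1f} s")
```

Fitting c by non-linear mixed-effects regression, as in the study, additionally separates between-event from within-event variability.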
Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive <span class="hlt">models</span> to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing <span class="hlt">earthquake</span> magnitude. The significant duration was found to increase with increasing magnitude and hypocentral distance. Both significant and bracketed durations were predicted to be longer at rock sites than at soil sites. The predictive relationships developed herein are compared with the existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNH23A0231S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNH23A0231S"><span>Tsunami Source <span class="hlt">Modeling</span> of the 2015 Volcanic Tsunami <span class="hlt">Earthquake</span> near Torishima, South of Japan</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.</p> <p>2017-12-01</p> <p>An abnormal <span class="hlt">earthquake</span> occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015.
The <span class="hlt">earthquake</span>, which hereafter we call "the 2015 Torishima <span class="hlt">earthquake</span>," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the <span class="hlt">earthquake</span> can be regarded as a "tsunami <span class="hlt">earthquake</span>." In the region, similar tsunami <span class="hlt">earthquakes</span> were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 <span class="hlt">earthquake</span> were recorded by an array of ocean bottom pressure (OBP) gauges 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached peak amplitudes (1.5-2.0 cm) in the leading upward signals, followed by continuous oscillations [Fukao et al., 2016]. For <span class="hlt">modeling</span> its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adopted in the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we found that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal such as was observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine sizes and amplitudes of the main uplift and the subsidence ring. As a result, the <span class="hlt">model</span> of a main uplift of around 1.0 m with a radius of 4 km surrounded by a ring of small subsidence shows good agreement between synthetic and observed waveforms.
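The source geometry described above, a central uplift surrounded by a ring of small subsidence, can be parameterized compactly as a difference of two Gaussians. A sketch using the abstract's rough amplitude and radius; the widths and the ring amplitude are assumed values, not a fitted source:

```python
import math

# Radial sea-surface displacement: narrow central Gaussian uplift minus
# a broader, weaker Gaussian, which produces a subsidence ring at larger
# radii. Amplitudes/widths are illustrative, chosen so the peak is ~1 m.

def sea_surface_uplift_m(r_km: float, a1=1.1, s1=2.5, a2=0.1, s2=5.0) -> float:
    """Uplift (m) at radial distance r_km from the source centre."""
    g = lambda a, s: a * math.exp(-(r_km ** 2) / (2 * s ** 2))
    return g(a1, s1) - g(a2, s2)

for r in (0, 2, 4, 6, 8, 10):
    print(f"r={r:2d} km  displacement={sea_surface_uplift_m(r):+.3f} m")
```

The sign change at several kilometres radius is what reproduces the small initial downward signal seen at the OBP array before the main upward pulse.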
The results yield two implications for the deformation process that help us to understand it.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26PSL.460...60F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26PSL.460...60F"><span><span class="hlt">Earthquake</span>-enhanced permeability - evidence from carbon dioxide release following the ML 3.5 <span class="hlt">earthquake</span> in West Bohemia</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fischer, T.; Matyska, C.; Heinicke, J.</p> <p>2017-02-01</p> <p>The West Bohemia/Vogtland region is characterized by <span class="hlt">earthquake</span> swarm activity and degassing of CO2 of mantle origin. A fast increase of CO2 flow <span class="hlt">rate</span> was observed 4 days after a ML 3.5 <span class="hlt">earthquake</span> in May 2014 in the Hartoušov mofette, 9 km from the epicentres. During the subsequent 150 days the flow reached sixfold its original level, and it has been slowly decaying to the present. Similar behavior was observed during and after the swarm in 2008, pointing to a fault-valve mechanism in the long term. Here, we present the results of a simulation of gas flow in a two-dimensional <span class="hlt">model</span> of the Earth's crust composed of a sealing layer at the hypocentre depth which is penetrated by the <span class="hlt">earthquake</span> fault and releases fluid from a relatively low-permeability lower crust. This simple <span class="hlt">model</span> is capable of explaining the observations, including the short travel time of the flow pulse from 8 km depth to the surface, the long-term flow increase, and its subsequent slow decay. Our <span class="hlt">model</span> is consistent with other analyses of the 2014 aftershocks which attribute their anomalous character to an exponentially decreasing external fluid force.
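The short travel time of the flow pulse constrains hydraulic diffusivity through the standard diffusion scaling t ≈ L²/(4D). An order-of-magnitude sketch using the 8 km depth and 4 day delay quoted above; this is a scaling estimate only, not the paper's simulation:

```python
# Diffusion scaling: a pressure pulse crosses distance L in roughly
# t = L^2 / (4 D), so D ~ L^2 / (4 t). L and t are the values quoted in
# the abstract; the result is an order-of-magnitude estimate.

def diffusivity_m2_s(length_m: float, time_s: float) -> float:
    return length_m ** 2 / (4.0 * time_s)

L = 8_000.0          # m, depth of the sealing layer
t = 4 * 86_400.0     # s, delay between mainshock and flow increase
print(f"implied hydraulic diffusivity D ~ {diffusivity_m2_s(L, t):.0f} m^2/s")
```

A diffusivity of tens of m²/s is far above typical intact-crust values, which is the quantitative sense in which the earthquake must have enhanced permeability along the fault.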
Our observations and <span class="hlt">model</span> hence track the fluid pressure pulse from depth, where it was responsible for aftershock triggering, to the surface, where a significant long-term increase of CO2 flow started 4 days later.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017GeoJI.210.1474M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017GeoJI.210.1474M"><span>On some methods for assessing <span class="hlt">earthquake</span> predictions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Molchan, G.; Romashkova, L.; Peresan, A.</p> <p>2017-09-01</p> <p>A regional approach to the problem of assessing <span class="hlt">earthquake</span> predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian <span class="hlt">earthquakes</span>. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic <span class="hlt">earthquake</span> forecasts, has recently been adapted for the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the PG alarm-based version leads to an almost complete loss of information about predicted <span class="hlt">earthquakes</span> (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted.
We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity <span class="hlt">rate</span> <span class="hlt">models</span>, even when applied to extensive data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoRL..45.1339B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoRL..45.1339B"><span>Volcanic Eruption Forecasts From Accelerating <span class="hlt">Rates</span> of Drumbeat Long-Period <span class="hlt">Earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario</p> <p>2018-02-01</p> <p>Accelerating <span class="hlt">rates</span> of quasiperiodic "drumbeat" long-period <span class="hlt">earthquakes</span> (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP <span class="hlt">rates</span> increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics.
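The material-failure trend invoked above is often applied via the failure forecast method: for a rate accelerating as 1/(t_f - t), the inverse rate decays linearly to zero at the failure time t_f, so a straight-line fit to inverse rate extrapolates t_f. A sketch on synthetic drumbeat rates, not Tungurahua data:

```python
# Failure forecast method (inverse-rate linearisation): fit a line to
# (t, 1/rate) by least squares; its zero crossing estimates the failure
# time t_f. The rate history below is synthetic, generated from an
# assumed t_f of 24 hr.

def forecast_failure_time(times, rates):
    """Zero crossing of the least-squares line through (t, 1/rate)."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(inv) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, inv)) / \
            sum((t - tbar) ** 2 for t in times)
    return tbar - ybar / slope

t_f_true = 24.0                                   # hr, assumed failure time
times = [float(t) for t in range(20)]
rates = [10.0 / (t_f_true - t) for t in times]    # hyperbolic acceleration
print(f"forecast t_f = {forecast_failure_time(times, rates):.2f} hr")
```

Real sequences are noisy counts rather than smooth rates, which is why the study fits a Bayesian point-process model instead of this deterministic regression; the underlying inverse power law is the same.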
Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.8782A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.8782A"><span>Dynamic <span class="hlt">modeling</span> of normal faults of the 2016 Central Italy <span class="hlt">earthquake</span> sequence</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aochi, Hideo</p> <p>2017-04-01</p> <p>The 2016 Central Italy <span class="hlt">earthquake</span> sequence is characterized mainly by the Mw6.0 24th August, Mw5.9 26th October and Mw6.4 30th October events, as well as two Mw5.4 <span class="hlt">earthquakes</span> (24th August, 26th October) (INGV catalogue). They all show normal faulting mechanisms corresponding to the Apennines' tectonics. They are roughly aligned along a NNW-SSE axis, and they may not lie on a single continuous fault plane. Therefore, dynamic rupture <span class="hlt">modeling</span> of the sequence should be carried out assuming multiple coplanar normal-fault segments. We apply a Boundary Domain Method (BDM; Goto and Bielak, GJI, 2008) coupling a boundary integral equation method and a domain-based method, namely a finite difference method in this study. The Mw6.0 24th August <span class="hlt">earthquake</span> is <span class="hlt">modeled</span>. We use the basic information of hypocenter position, focal mechanism and potential ruptured dimension from the INGV catalogue and Tinti et al. (GRL, 2016), and begin with a simple condition (homogeneous boundary condition).
Our preliminary simulations show that a uniformly extended rupture <span class="hlt">model</span> does not fit the near-field ground motions, and that localized heterogeneity would be required.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70021301','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70021301"><span>Premonitory slip and tidal triggering of <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Lockner, D.A.; Beeler, N.M.</p> <p>1999-01-01</p> <p>We have conducted a series of laboratory simulations of <span class="hlt">earthquakes</span> using granite cylinders containing precut bare fault surfaces at 50 MPa confining pressure. Axial shortening <span class="hlt">rates</span> between 10⁻⁴ and 10⁻⁶ mm/s were imposed to simulate tectonic loading. Average loading <span class="hlt">rate</span> was then modulated by the addition of a small-amplitude sine wave to simulate periodic loading due to Earth tides or other sources. The period of the modulating signal ranged from 10 to 10,000 s. For each combination of amplitude and period of the modulating signal, multiple stick-slip events were recorded to determine the degree of correlation between the timing of simulated <span class="hlt">earthquakes</span> and the imposed periodic loading function. Over the range of parameters studied, the degree of correlation of <span class="hlt">earthquakes</span> was most sensitive to the amplitude of the periodic loading, with weaker dependence on the period of oscillations and the average loading <span class="hlt">rate</span>. Accelerating premonitory slip was observed in these experiments and is a controlling factor in determining the conditions under which correlated events occur.
In fact, some form of delayed failure is necessary to produce the observed correlations between simulated <span class="hlt">earthquake</span> timing and characteristics of the periodic loading function. The transition from strongly correlated to weakly correlated <span class="hlt">model</span> <span class="hlt">earthquake</span> populations occurred when the amplitude of the periodic loading was approximately 0.05 to 0.1 MPa shear stress (0.03 to 0.06 MPa Coulomb failure function). Lower-amplitude oscillations produced progressively lower correlation levels. Correlations between static stress increases and <span class="hlt">earthquake</span> aftershocks are found to degrade at similar stress levels. Typical stress variations due to Earth tides are only 0.001 to 0.004 MPa, so that the lack of correlation between Earth tides and <span class="hlt">earthquakes</span> is also consistent with our findings. A simple extrapolation of our results suggests that approximately 1% of midcrustal <span class="hlt">earthquakes</span> should be correlated with</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJMPC..2850092P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJMPC..2850092P"><span>A fragmentation <span class="hlt">model</span> of <span class="hlt">earthquake</span>-like behavior in internet access activity</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.</p> <p></p> <p>We present a fragmentation <span class="hlt">model</span> that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with <span class="hlt">earthquake</span>-like behavior. 
We apply the <span class="hlt">model</span> to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. Thus, the dataset is similar to <span class="hlt">earthquake</span> data, except that two power-law regimes are observed. Using the proposed <span class="hlt">model</span>, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The <span class="hlt">model</span> is general enough to apply to any physical or human dynamics that is limited by finite resources such as space, energy, time, or opportunity.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16384429','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16384429"><span>Simulation of the Burridge-Knopoff <span class="hlt">model</span> of <span class="hlt">earthquakes</span> with variable range stress transfer.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xia, Junchao; Gould, Harvey; Klein, W; Rundle, J B</p> <p>2005-12-09</p> <p>Simple <span class="hlt">models</span> of <span class="hlt">earthquake</span> faults are important for understanding the mechanisms for their observed behavior, such as Gutenberg-Richter scaling and the relation between large and small events, which is the basis for various forecasting methods.
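Power-law size distributions like the Gutenberg-Richter scaling just mentioned are commonly characterized by a maximum-likelihood estimate of the exponent. A minimal sketch on synthetic data (the sample, cutoff, and exponent below are invented for illustration, not taken from either paper):

```python
import numpy as np

def powerlaw_alpha_mle(x, xmin):
    """Continuous MLE of alpha for p(x) ~ x^(-alpha), x >= xmin (Hill-type estimator)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

# Synthetic power-law sample via inverse-transform sampling:
# CDF F(x) = 1 - x^(1-alpha)  =>  x = (1-u)^(-1/(alpha-1)) for uniform u
rng = np.random.default_rng(1)
alpha_true = 2.5
u = rng.random(50_000)
x = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = powerlaw_alpha_mle(x, 1.0)
print(f"alpha_hat = {alpha_hat:.3f}  (true {alpha_true})")
```

For a dual-scaled distribution like the one reported for the Internet-access dataset, one would fit each regime separately on either side of the crossover scale.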
Although cellular automaton <span class="hlt">models</span> have been studied extensively in the long-range stress transfer limit, this limit has not been studied for the Burridge-Knopoff <span class="hlt">model</span>, which includes more realistic friction forces and inertia. We find that the latter <span class="hlt">model</span> with long-range stress transfer exhibits qualitatively different behavior than both the long-range cellular automaton <span class="hlt">models</span> and the usual Burridge-Knopoff <span class="hlt">model</span> with nearest-neighbor springs, depending on the nature of the velocity-weakening friction force. These results have important implications for our understanding of <span class="hlt">earthquakes</span> and other driven dissipative systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003AGUFM.G22E..03F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003AGUFM.G22E..03F"><span>Long-term Postseismic Deformation Following the 1964 Alaska <span class="hlt">Earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Freymueller, J. T.; Cohen, S. C.; Hreinsdöttir, S.; Suito, H.</p> <p>2003-12-01</p> <p>Geodetic data provide a rich data set describing the postseismic deformation that followed the 1964 Alaska <span class="hlt">earthquake</span> (Mw 9.2). This is particularly true for vertical deformation, since tide gauges and leveling surveys provide extensive spatial coverage. Leveling was carried out over all of the major roads of Alaska in 1964-65, and over the last several years we have resurveyed an extensive data set using GPS. Along Turnagain Arm of Cook Inlet, south of Anchorage, a trench-normal profile was surveyed repeatedly over the first decade after the <span class="hlt">earthquake</span>, and many of these sites have been surveyed with GPS. 
After using a geoid <span class="hlt">model</span> to correct for the difference between geometric and orthometric heights, the leveling+GPS surveys reveal up to 1.25 meters of uplift since 1964. The largest uplifts are concentrated in the northern part of the Kenai Peninsula, SW of Turnagain Arm. In some places, steep gradients in the cumulative uplift measurements point to a very shallow source for the deformation. The average 1964-late 1990s uplift <span class="hlt">rates</span> were substantially higher than the present-day uplift <span class="hlt">rates</span>, which rarely exceed 10 mm/yr. Both leveling and tide gauge data document a decay in uplift <span class="hlt">rate</span> over time as the postseismic signal decreases. However, even today the postseismic deformation represents a substantial portion of the total observed deformation signal, illustrating that very long-lived postseismic deformation is an important element of the subduction zone <span class="hlt">earthquake</span> cycle for the very largest <span class="hlt">earthquakes</span>. This is in contrast to much smaller events, such as M~8 <span class="hlt">earthquakes</span>, for which postseismic deformation in many cases decays within a few years.
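The decaying uplift rates described above can be summarized by fitting a transient that relaxes toward a small steady late-time rate. The numbers below are invented for illustration and are not the Alaska measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Postseismic uplift rate: a transient relaxing with time constant tau
# toward a small steady-state rate r_ss (all rates in mm/yr).
def uplift_rate(t, r_t, tau, r_ss):
    return r_t * np.exp(-t / tau) + r_ss

rng = np.random.default_rng(2)
t_yr = np.array([1.0, 3, 6, 10, 15, 20, 25, 30, 35])   # years after the event
obs = uplift_rate(t_yr, 60.0, 8.0, 8.0) + rng.normal(0.0, 2.0, t_yr.size)

(r_t, tau, r_ss), _ = curve_fit(uplift_rate, t_yr, obs, p0=(50.0, 5.0, 5.0))
print(f"relaxation time ~ {tau:.1f} yr, late-time rate ~ {r_ss:.1f} mm/yr")
```

A single exponential is the simplest choice; viscoelastic relaxation and afterslip generally call for more than one decay timescale.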
This suggests that the very largest <span class="hlt">earthquakes</span> may excite different processes than smaller events.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H43H1757X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H43H1757X"><span><span class="hlt">Modeling</span> Channel Movement Response to Rainfall Variability and Potential Threats to Post-<span class="hlt">earthquake</span> Reconstruction</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xie, J.; Wang, M.; Liu, K.</p> <p>2017-12-01</p> <p>The 2008 Wenchuan Ms 8.0 <span class="hlt">earthquake</span> caused overwhelming destruction to vast mountainous areas in Sichuan province. Numerous seismic landslides damaged the forest and vegetation cover and caused substantial loose sediment to pile up in the valleys. The movement and accumulation of this loose material led to riverbed aggradation, making the <span class="hlt">earthquake</span>-struck area more susceptible to flash floods as extreme rainfalls increase in frequency and intensity. This study investigated the response of sediment and river channel evolution to different rainfall scenarios after the Wenchuan <span class="hlt">earthquake</span>. The study area was chosen in a catchment affected by the <span class="hlt">earthquake</span> in Northeast Sichuan province, China. We employed the landscape evolution <span class="hlt">model</span> CAESAR-lisflood to explore the rules of material migration and then assessed the potential effects under two rainfall scenarios. The <span class="hlt">model</span> parameters were calibrated using the 2013 extreme rainfall event, and the experimental rainfall scenarios were of different intensity and frequency over a 10-year period.
The results indicated that CAESAR-lisflood replicated the sediment migration well, particularly the fluvial processes after the <span class="hlt">earthquake</span>. With respect to rainfall intensity, the erosion severity in upstream gullies and the deposition severity in downstream channels both increased with the increasing intensity of extreme rainfalls. The <span class="hlt">modelling</span> results showed that the number of buildings in the catchment affected by flash floods increased by more than a quarter from the normal to the enhanced rainfall scenario over ten years, indicating a potential threat to exposures near the river channel in the context of climate change. Simulating landscape change is thus of great significance and contributes to early warning of potential geological risks after an <span class="hlt">earthquake</span>; we suggest that local government and the public pay close attention to the high-risk areas.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.S42B..08A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.S42B..08A"><span><span class="hlt">Earthquake</span> Hazard and Risk in New Zealand</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.</p> <p>2014-12-01</p> <p>To quantify risk in New Zealand we examine the impact of updating the seismic hazard <span class="hlt">model</span>. The previous RMS New Zealand hazard <span class="hlt">model</span> is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS <span class="hlt">model</span>, based on Stirling et al. (2012), will update several key source parameters.
These updates include: implementation of a new set of crustal faults, including multi-segment ruptures; updated subduction zone geometry and recurrence <span class="hlt">rate</span>; and new background <span class="hlt">rates</span> with a robust methodology for <span class="hlt">modeling</span> background <span class="hlt">earthquake</span> sources. The number of crustal faults has increased by over 200 from the 2002 <span class="hlt">model</span> to the 2012 <span class="hlt">model</span>, which now includes over 500 individual fault sources. This includes the addition of many offshore faults in northern, east-central, and southwest regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated <span class="hlt">model</span> with those from the previous version. Changes between the two maps are discussed as well as the drivers for these changes. We examine the impact the hazard <span class="hlt">model</span> changes have on New Zealand <span class="hlt">earthquake</span> risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of <span class="hlt">earthquake</span> insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the country's risk exposure (Auckland) lies in the region of lowest hazard, where little is known about the location of faults and distributed seismicity is <span class="hlt">modeled</span> by averaged Mw-frequency relationships on area sources.
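The two risk metrics named above are directly related: the average annual loss is the area under the loss exceedance-frequency curve. A toy calculation (the curve values below are invented, not New Zealand figures):

```python
import numpy as np

# Exceedance-probability (EP) curve: annual frequency with which each loss
# level is exceeded. AAL is the area under the curve, here by the trapezoid rule.
loss_levels = np.array([0.0, 1e6, 5e6, 1e7, 5e7, 1e8, 5e8])         # currency units
exceed_freq = np.array([1.0, 0.2, 0.05, 0.02, 0.004, 0.001, 1e-4])  # events/yr

aal = np.sum(0.5 * (exceed_freq[1:] + exceed_freq[:-1]) * np.diff(loss_levels))
print(f"AAL = {aal:,.0f} per year")   # → AAL = 2,100,000 per year
```

Solvency questions, by contrast, read the same curve the other way: they pick a target exceedance frequency (say 1/200 per year) and look up the corresponding loss level.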
Thus small changes to the background <span class="hlt">rates</span></p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011GeoJI.187..225R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011GeoJI.187..225R"><span><span class="hlt">Earthquake</span> precursors: activation or quiescence?</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rundle, John B.; Holliday, James R.; Yoder, Mark; Sachs, Michael K.; Donnellan, Andrea; Turcotte, Donald L.; Tiampo, Kristy F.; Klein, William; Kellogg, Louise H.</p> <p>2011-10-01</p> <p>We discuss the long-standing question of whether the probability for large <span class="hlt">earthquake</span> occurrence (magnitudes m > 6.0) is highest during time periods of smaller event activation, or highest during time periods of smaller event quiescence. The physics of the activation <span class="hlt">model</span> are based on an idea from the theory of nucleation, that a small magnitude <span class="hlt">earthquake</span> has a finite probability of growing into a large <span class="hlt">earthquake</span>. The physics of the quiescence <span class="hlt">model</span> is based on the idea that the occurrence of smaller <span class="hlt">earthquakes</span> (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast <span class="hlt">models</span> illustrating, respectively, activation and quiescence. The activation <span class="hlt">model</span> assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no choice as to which mechanism is preferred. 
Both of these <span class="hlt">models</span> are in fact a means of filtering the seismicity time-series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of <span class="hlt">earthquakes</span>, we show that of the two <span class="hlt">models</span>, activation and quiescence, the latter appears to be the better <span class="hlt">model</span>, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic <span class="hlt">earthquake</span> <span class="hlt">model</span> for California seismicity, Virtual California. This <span class="hlt">model</span> includes not only <span class="hlt">earthquakes</span> produced from increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast <span class="hlt">models</span> to the simulated data, we come to the opposite conclusion. Here, the activation forecast <span class="hlt">model</span> is preferred to the quiescence <span class="hlt">model</span>, presumably due to the fact that the BASS component of the <span class="hlt">model</span> is essentially a <span class="hlt">model</span> for activated seismicity. 
These</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70016170','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70016170"><span>Equivalent strike-slip <span class="hlt">earthquake</span> cycles in half-space and lithosphere-asthenosphere earth <span class="hlt">models</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Savage, J.C.</p> <p>1990-01-01</p> <p>By virtue of the images used in the dislocation solution, the deformation at the free surface produced throughout the <span class="hlt">earthquake</span> cycle by slippage on a long strike-slip fault in an Earth <span class="hlt">model</span> consisting
of an elastic plate (lithosphere) overlying a viscoelastic half-space (asthenosphere) can be duplicated by prescribed slip on a vertical fault embedded in an elastic half-space. Inversion of 1973-1988 geodetic measurements of deformation across the segment of the San Andreas fault in the Transverse Ranges north of Los Angeles for the half-space equivalent slip distribution suggests no significant slip on the fault above 30 km and a uniform slip <span class="hlt">rate</span> of 36 mm/yr below 30 km. One equivalent lithosphere-asthenosphere <span class="hlt">model</span> would have a 30-km thick lithosphere and an asthenosphere relaxation time greater than 33 years, but other <span class="hlt">models</span> are possible. -from Author</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70028804','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70028804"><span>Three-dimensional compressional wavespeed <span class="hlt">model</span>, <span class="hlt">earthquake</span> relocations, and focal mechanisms for the Parkfield, California, region</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Thurber, C.; Zhang, H.; Waldhauser, F.; Hardebeck, J.; Michael, A.; Eberhart-Phillips, D.</p> <p>2006-01-01</p> <p>We present a new three-dimensional (3D) compressional wavespeed (Vp) <span class="hlt">model</span> for the Parkfield region, taking advantage of the recent seismicity associated with the 2003 San Simeon and 2004 Parkfield <span class="hlt">earthquake</span> sequences to provide increased <span class="hlt">model</span> resolution compared to the work of Eberhart-Phillips and Michael (1993) (EPM93).
Taking the EPM93 3D <span class="hlt">model</span> as our starting <span class="hlt">model</span>, we invert the arrival-time data from about 2100 <span class="hlt">earthquakes</span> and 250 shots recorded on both permanent network and temporary stations in a region 130 km northeast-southwest by 120 km northwest-southeast. We include catalog picks and cross-correlation and catalog differential times in the inversion, using the double-difference tomography method of Zhang and Thurber (2003). The principal Vp features reported by EPM93 and Michelini and McEvilly (1991) are recovered, but with locally improved resolution along the San Andreas Fault (SAF) and near the active-source profiles. We image the previously identified strong wavespeed contrast (faster on the southwest side) across most of the length of the SAF, and we also improve the image of a high Vp body on the northeast side of the fault reported by EPM93. This narrow body is at about 5- to 12-km depth and extends approximately from the locked section of the SAF to the town of Parkfield. The footwall of the thrust fault responsible for the 1983 Coalinga <span class="hlt">earthquake</span> is imaged as a northeast-dipping high wavespeed body. In between, relatively low wavespeeds (<5 km/sec) extend to as much as 10-km depth. We use this <span class="hlt">model</span> to derive absolute locations for about 16,000 <span class="hlt">earthquakes</span> from 1966 to 2005 and high-precision double-difference locations for 9,000 <span class="hlt">earthquakes</span> from 1984 to 2005, and also to determine focal mechanisms for 446 <span class="hlt">earthquakes</span>. These <span class="hlt">earthquake</span> locations and mechanisms show that the seismogenic fault is a simple planar structure.
The aftershock sequence of the 2004 mainshock concentrates into the same structures defined by the pre-2004 seismicity</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3364293','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3364293"><span><span class="hlt">Modelling</span> Psychological Responses to the Great East Japan <span class="hlt">Earthquake</span> and Nuclear Incident</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O.</p> <p>2012-01-01</p> <p>The Great East Japan (Tōhoku/Kanto) <span class="hlt">earthquake</span> of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few studies have <span class="hlt">modelled</span> individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great East Japan <span class="hlt">earthquake</span> and nuclear incident, with data collected 11–13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan: Miyagi (close to the <span class="hlt">earthquake</span> and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over <span class="hlt">earthquake</span> risks in Tokyo than in Miyagi or Western Japan.
Structural equation analyses showed that shared normative concerns about <span class="hlt">earthquake</span> and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived <span class="hlt">earthquake</span> and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events. PMID:22666380</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26PSL.477...84S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26PSL.477...84S"><span>Frictional stability and <span class="hlt">earthquake</span> triggering during fluid pressure stimulation of an experimental fault</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Scuderi, M. M.; Collettini, C.; Marone, C.</p> <p>2017-11-01</p> <p>It is widely recognized that the significant increase of M > 3.0 <span class="hlt">earthquakes</span> in Western Canada and the Central United States is related to underground fluid injection. Following injection, fluid overpressure lubricates the fault and reduces the effective normal stress that holds the fault in place, promoting slip. Although this basic physical mechanism for <span class="hlt">earthquake</span> triggering and fault slip is well understood, there are many open questions related to induced seismicity.
<span class="hlt">Models</span> of <span class="hlt">earthquake</span> nucleation based on <span class="hlt">rate</span>- and state-friction predict that fluid overpressure should stabilize fault slip rather than trigger <span class="hlt">earthquakes</span>. To address this controversy, we conducted laboratory creep experiments to monitor fault slip evolution at constant shear stress while the effective normal stress was systematically reduced via increasing fluid pressure. We sheared layers of carbonate-bearing fault gouge in a double direct shear configuration within a true-triaxial pressure vessel. We show that fault slip evolution is controlled by the stress state acting on the fault and that fluid pressurization can trigger dynamic instability even in cases of <span class="hlt">rate</span> strengthening friction, which should favor aseismic creep. During fluid pressurization, when shear and effective normal stresses reach the failure condition, accelerated creep occurs in association with fault dilation; further pressurization leads to an exponential acceleration with fault compaction and slip localization. Our work indicates that fault weakening induced by fluid pressurization can overcome <span class="hlt">rate</span> strengthening friction resulting in fast acceleration and <span class="hlt">earthquake</span> slip. 
Our work points to modifications of the standard <span class="hlt">model</span> for <span class="hlt">earthquake</span> nucleation to account for the effect of fluid overpressure and to accurately predict the seismic risk associated with fluid injection.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70027562','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70027562"><span>Viscoelasticity, postseismic slip, fault interactions, and the recurrence of large <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Michael, A.J.</p> <p>2005-01-01</p> <p>The Brownian Passage Time (BPT) <span class="hlt">model</span> for <span class="hlt">earthquake</span> recurrence is modified to include transient deformation due to either viscoelasticity or deep post seismic slip. Both of these processes act to increase the <span class="hlt">rate</span> of loading on the seismogenic fault for some time after a large event. To approximate these effects, a decaying exponential term is added to the BPT <span class="hlt">model</span>'s uniform loading term. The resulting interevent time distributions remain approximately lognormal, but the balance between the level of noise (e.g., unknown fault interactions) and the coefficient of variability of the interevent time distribution changes depending on the shape of the loading function. For a given level of noise in the loading process, transient deformation has the effect of increasing the coefficient of variability of <span class="hlt">earthquake</span> interevent times. Conversely, the level of noise needed to achieve a given level of variability is reduced when transient deformation is included. 
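One way to see the effect described above is a small Monte Carlo of a Brownian-loading renewal model in which a decaying exponential transient is added to the uniform loading term. This is a sketch under invented parameter values, not the authors' modified BPT formulation:

```python
import numpy as np

# State X loads at rate lam0 + a*exp(-t/tau) plus Brownian noise; an "event"
# occurs when X first reaches 1, after which X and the clock t reset.
def interevent_times(a, n_events=300, lam0=1.0, tau=0.3, sigma=0.2,
                     dt=1e-3, seed=3):
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_events):
        x, t = 0.0, 0.0
        while x < 1.0:
            x += (lam0 + a * np.exp(-t / tau)) * dt \
                 + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return np.array(times)

for a in (0.0, 2.0):   # uniform loading vs. added post-event transient
    ts = interevent_times(a)
    print(f"a={a}: mean interval {ts.mean():.2f}, CoV {ts.std() / ts.mean():.2f}")
```

With the transient switched on, the fault reloads faster just after each event and the mean interval shortens; comparing the coefficients of variation at fixed noise illustrates how the shape of the loading function shifts the balance between noise level and interevent-time variability.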
Using less noise would then increase the effect of known fault interactions <span class="hlt">modeled</span> as stress or strain steps because they would be larger with respect to the noise. If we only seek to estimate the shape of the interevent time distribution from observed <span class="hlt">earthquake</span> occurrences, then the use of a transient deformation <span class="hlt">model</span> will not dramatically change the results of a probability study because a similar shaped distribution can be achieved with either uniform or transient loading functions. However, if the goal is to estimate <span class="hlt">earthquake</span> probabilities based on our increasing understanding of the seismogenic process, including <span class="hlt">earthquake</span> interactions, then including transient deformation is important to obtain accurate results. For example, a loading curve based on the 1906 <span class="hlt">earthquake</span>, paleoseismic observations of prior events, and observations of recent deformation in the San Francisco Bay region produces a 40% greater variability in <span class="hlt">earthquake</span> recurrence than a uniform loading <span class="hlt">model</span> with the same noise level.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CoTPh..69..280L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CoTPh..69..280L"><span>Self-Organized Criticality in an Anisotropic <span class="hlt">Earthquake</span> <span class="hlt">Model</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Bin-Quan; Wang, Sheng-Jun</p> <p>2018-03-01</p> <p>We have made an extensive numerical study of a modified <span class="hlt">model</span> proposed by Olami, Feder, and Christensen to describe <span class="hlt">earthquake</span> behavior. Two situations were considered in this paper. 
In the first situation, the energy of an unstable site is redistributed to its nearest neighbors randomly rather than equally, and the site itself is reset to zero. In the second, the energy is again redistributed to the nearest neighbors randomly, but the site retains some energy rather than being reset to zero. Different boundary conditions were considered as well. By analyzing the distribution of <span class="hlt">earthquake</span> sizes, we found that self-organized criticality can be excited only in the conservative or approximately conservative case in the above situations. Some evidence indicates that the critical exponents of both situations above and of the original OFC <span class="hlt">model</span> tend to the same value in the conservative case; the only difference is that avalanches in the original <span class="hlt">model</span> are bigger. This result may be closer to the real world since, after all, crustal plates differ in size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19890328','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19890328"><span>Long aftershock sequences within continents and implications for <span class="hlt">earthquake</span> hazard assessment.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Stein, Seth; Liu, Mian</p> <p>2009-11-05</p> <p>One of the most powerful features of plate tectonics is that the known plate motions give insight into both the locations and average recurrence interval of future large <span class="hlt">earthquakes</span> on plate boundaries.
Plate tectonics gives no insight, however, into where and when <span class="hlt">earthquakes</span> will occur within plates, because the interiors of ideal plates should not deform. As a result, within plate interiors, assessments of <span class="hlt">earthquake</span> hazards rely heavily on the assumption that the locations of small <span class="hlt">earthquakes</span> shown by the short historical record reflect continuing deformation that will cause future large <span class="hlt">earthquakes</span>. Here, however, we show that many of these recent <span class="hlt">earthquakes</span> are probably aftershocks of large <span class="hlt">earthquakes</span> that occurred hundreds of years ago. We present a simple <span class="hlt">model</span> predicting that the length of aftershock sequences varies inversely with the <span class="hlt">rate</span> at which faults are loaded. Aftershock sequences within the slowly deforming continents are predicted to be significantly longer than the decade typically observed at rapidly loaded plate boundaries. These predictions are in accord with observations. So the common practice of treating continental <span class="hlt">earthquakes</span> as steady-state seismicity overestimates the hazard in presently active areas and underestimates it elsewhere.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70159632','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70159632"><span><span class="hlt">Rates</span> and patterns of surface deformation from laser scanning following the South Napa <span class="hlt">earthquake</span>, California</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>DeLong, Stephen B.; Lienkaemper, James J.; Pickering, Alexandra J; Avdievitch, Nikita N.</p> <p>2015-01-01</p> <p>The A.D. 
2014 M6.0 South Napa <span class="hlt">earthquake</span>, despite its moderate magnitude, caused significant damage to the Napa Valley in northern California (USA). Surface rupture occurred along several mapped and unmapped faults. Field observations following the <span class="hlt">earthquake</span> indicated that the magnitude of postseismic surface slip was likely to approach or exceed the maximum coseismic surface slip and as such presented ongoing hazard to infrastructure. Using a laser scanner, we monitored postseismic deformation in three dimensions through time along 0.5 km of the main surface rupture. A key component of this study is the demonstration of proper alignment of repeat surveys using point cloud–based methods that minimize error imposed by both local survey errors and global navigation satellite system georeferencing errors. Using solid <span class="hlt">modeling</span> of natural and cultural features, we quantify dextral postseismic displacement at several hundred points near the main fault trace. We also quantify total dextral displacement of initially straight cultural features. Total dextral displacement from both coseismic displacement and the first 2.5 d of postseismic displacement ranges from 0.22 to 0.29 m. This range increased to 0.33–0.42 m at 59 d post-<span class="hlt">earthquake</span>. Furthermore, we estimate up to 0.15 m of vertical deformation during the first 2.5 d post-<span class="hlt">earthquake</span>, which then increased by ∼0.02 m at 59 d post-<span class="hlt">earthquake</span>. This vertical deformation is not expressed as a distinct step or scarp at the fault trace but rather as a broad up-to-the-west zone of increasing elevation change spanning the fault trace over several tens of meters, challenging common notions about fault scarp development in strike-slip systems. 
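The repeat-survey measurements above rest on rigid registration of point sets. A minimal sketch of the core step is a Kabsch/Procrustes fit with known correspondences; this is a simplification (a real survey workflow must first establish correspondences, e.g. via ICP), and all names and values here are invented:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch algorithm).  src, dst: (N, 3) arrays of corresponding
    points from two surveys."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation about z plus a shift.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([0.25, -0.1, 0.02])
R, t = rigid_align(pts, moved)
```

Applying the recovered R and t to the later survey removes the rigid (georeferencing) component, so that residual displacements reflect actual surface deformation.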
Integrating these analyses provides three-dimensional mapping of surface deformation and identifies spatial variability in slip along the main fault trace that we attribute to distributed slip via subtle block rotation. These results indicate the benefits of laser scanner surveys along</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.woodheadpublishing.com/en/book.aspx?bookID=2497','USGSPUBS'); return false;" href="http://www.woodheadpublishing.com/en/book.aspx?bookID=2497"><span>Strategies for rapid global <span class="hlt">earthquake</span> impact estimation: the Prompt Assessment of Global <span class="hlt">Earthquakes</span> for Response (PAGER) system</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Jaiswal, Kishor; Wald, D.J.</p> <p>2013-01-01</p> <p>This chapter summarizes the state-of-the-art for rapid <span class="hlt">earthquake</span> impact estimation. It details the needs and challenges associated with quick estimation of <span class="hlt">earthquake</span> losses following global <span class="hlt">earthquakes</span>, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational <span class="hlt">earthquake</span> loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global <span class="hlt">Earthquakes</span> for Response). 
It also details some of the ongoing developments of PAGER’s loss estimation <span class="hlt">models</span> to better supplement the operational empirical <span class="hlt">models</span>, and to produce value-added web content for a variety of PAGER users.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S53F..05L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S53F..05L"><span>Imbricated slip <span class="hlt">rate</span> processes during slow slip transients imaged by low-frequency <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lengliné, O.; Frank, W.; Marsan, D.; Ampuero, J. P.</p> <p>2017-12-01</p> <p>Low Frequency <span class="hlt">Earthquakes</span> (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanism and location consistent with shear failure on the plate interface argue for a <span class="hlt">model</span> where LFEs are discrete dynamic ruptures in an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically-undetected small ones. However, inferring slow slip <span class="hlt">rate</span> from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip <span class="hlt">rates</span> from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. 
We find temporal characteristics of LFEs that are similar across these three different regions. The longer term episodic slip transients present in these datasets show a slip <span class="hlt">rate</span> decay with time after the passage of the SSE front possibly as t-1/4. They are composed of multiple short term transients with steeper slip <span class="hlt">rate</span> decay as t-α with α between 1.4 and 2. We also find that the maximum slip <span class="hlt">rate</span> of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26PSL.476..122L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26PSL.476..122L"><span>Imbricated slip <span class="hlt">rate</span> processes during slow slip transients imaged by low-frequency <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lengliné, O.; Frank, W. B.; Marsan, D.; Ampuero, J.-P.</p> <p>2017-10-01</p> <p>Low Frequency <span class="hlt">Earthquakes</span> (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanism and location consistent with shear failure on the plate interface argue for a <span class="hlt">model</span> where LFEs are discrete dynamic ruptures in an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically-undetected small ones. 
However, inferring slow slip <span class="hlt">rate</span> from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip <span class="hlt">rates</span> from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. We find temporal characteristics of LFEs that are similar across these three different regions. The longer term episodic slip transients present in these datasets show a slip <span class="hlt">rate</span> decay with time after the passage of the SSE front possibly as t-1/4. They are composed of multiple short term transients with steeper slip <span class="hlt">rate</span> decay as t-α with α between 1.4 and 2. We also find that the maximum slip <span class="hlt">rate</span> of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70116796','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70116796"><span>Continuing megathrust <span class="hlt">earthquake</span> potential in Chile after the 2014 Iquique <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Hayes, Gavin P.; Herman, Matthew W.; Barnhart, William D.; Furlong, Kevin P.; Riquelme, Sebástian; Benz, Harley M.; Bergman, Eric; Barrientos, Sergio; Earle, Paul S.; Samsonov, Sergey</p> <p>2014-01-01</p> <p>The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault.
It has successfully explained past <span class="hlt">earthquakes</span> (see, for example, ref. 2) and is useful for qualitatively describing where large <span class="hlt">earthquakes</span> might occur. A large <span class="hlt">earthquake</span> had been expected in the subduction zone adjacent to northern Chile which had not ruptured in a megathrust <span class="hlt">earthquake</span> since a M ~8.8 event in 1877. On 1 April 2014 a M 8.2 <span class="hlt">earthquake</span> occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March–April 2014 Iquique sequence, including analyses of <span class="hlt">earthquake</span> relocations, moment tensors, finite fault <span class="hlt">models</span>, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the <span class="hlt">earthquake</span> that had been anticipated. 
Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust <span class="hlt">earthquakes</span> will occur to the south and potentially to the north of the 2014 Iquique sequence.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25119028','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25119028"><span>Continuing megathrust <span class="hlt">earthquake</span> potential in Chile after the 2014 Iquique <span class="hlt">earthquake</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hayes, Gavin P; Herman, Matthew W; Barnhart, William D; Furlong, Kevin P; Riquelme, Sebástian; Benz, Harley M; Bergman, Eric; Barrientos, Sergio; Earle, Paul S; Samsonov, Sergey</p> <p>2014-08-21</p> <p>The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past <span class="hlt">earthquakes</span> (see, for example, ref. 2) and is useful for qualitatively describing where large <span class="hlt">earthquakes</span> might occur. A large <span class="hlt">earthquake</span> had been expected in the subduction zone adjacent to northern Chile, which had not ruptured in a megathrust <span class="hlt">earthquake</span> since a M ∼8.8 event in 1877. On 1 April 2014 a M 8.2 <span class="hlt">earthquake</span> occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March-April 2014 Iquique sequence, including analyses of <span class="hlt">earthquake</span> relocations, moment tensors, finite fault <span class="hlt">models</span>, moment deficit calculations and cumulative Coulomb stress transfer. 
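A back-of-the-envelope version of the moment-deficit arithmetic mentioned here uses round, hypothetical numbers rather than the study's actual inputs; the magnitude follows from Mw = (2/3)(log10 M0 - 9.05) with M0 in N m:

```python
import math

# Hypothetical round numbers for a locked megathrust segment; the
# study's inputs (geometry, coupling, convergence rate) differ.
mu = 3.0e10          # shear modulus, Pa
length = 250e3       # along-strike length of locked segment, m
width = 120e3        # downdip width of locked segment, m
rate = 0.065         # plate convergence rate, m/yr
coupling = 1.0       # fraction of convergence stored as deficit
years = 137          # time since the last great rupture (1877-2014)

slip_deficit = rate * coupling * years       # accumulated slip, m
m0 = mu * length * width * slip_deficit      # seismic moment, N m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.05)  # moment magnitude
print(f"slip deficit ~{slip_deficit:.1f} m, Mw ~{mw:.1f}")
```

With these illustrative values the accumulated deficit (~8.9 m) corresponds to roughly an Mw 8.6 event, showing why a gap unruptured since 1877 is judged capable of an earthquake much larger than the 2014 M 8.2.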
This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the <span class="hlt">earthquake</span> that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust <span class="hlt">earthquakes</span> will occur to the south and potentially to the north of the 2014 Iquique sequence.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29261748','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29261748"><span>Children's emotional experience two years after an <span class="hlt">earthquake</span>: An exploration of knowledge of <span class="hlt">earthquakes</span> and associated emotions.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Raccanello, Daniela; Burro, Roberto; Hall, Rob</p> <p>2017-01-01</p> <p>We explored whether and how the exposure to a natural disaster such as the 2012 Emilia Romagna <span class="hlt">earthquake</span> affected the development of children's emotional competence in terms of understanding, regulating, and expressing emotions, after two years, when compared with a control group not exposed to the <span class="hlt">earthquake</span>. We also examined the role of class level and gender. The sample included two groups of children (n = 127) attending primary school: The experimental group (n = 65) experienced the 2012 Emilia Romagna <span class="hlt">earthquake</span>, while the control group (n = 62) did not. The data collection took place two years after the <span class="hlt">earthquake</span>, when children were seven or ten-year-olds. 
Beyond assessing the children's understanding of emotions and regulating abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of <span class="hlt">earthquakes</span> and associated emotions, and a structured task on the intensity of some target emotions. We applied Generalized Linear Mixed <span class="hlt">Models</span>. Exposure to the <span class="hlt">earthquake</span> did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of <span class="hlt">earthquakes</span>, emotional language, and emotions associated with <span class="hlt">earthquakes</span> were, respectively, more complex, frequent, and intense for children who had experienced the <span class="hlt">earthquake</span>, and at increasing ages. Our data extend the generalizability of theoretical <span class="hlt">models</span> on children's psychological functioning following disasters, such as the dose-response <span class="hlt">model</span> and the organizational-developmental <span class="hlt">model</span> for child resilience, and provide further knowledge on children's emotional resources related to natural disasters, as a basis for planning educational prevention programs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.212.1331N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.212.1331N"><span><span class="hlt">Earthquake</span> triggering in southeast Africa following the 2012 Indian Ocean <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Neves, Miguel; Custódio, Susana; Peng, Zhigang; Ayorinde, Adebayo</p> <p>2018-02-01</p> <p>In this paper we present evidence of <span class="hlt">earthquake</span> dynamic triggering in southeast Africa. 
We analysed seismic waveforms recorded at 53 broad-band and short-period stations in order to identify possible increases in the <span class="hlt">rate</span> of microearthquakes and tremor due to the passage of teleseismic waves generated by the Mw8.6 2012 Indian Ocean <span class="hlt">earthquake</span>. We found evidence of triggered local <span class="hlt">earthquakes</span> and no evidence of triggered tremor in the region. We assessed the statistical significance of the increase in the number of local <span class="hlt">earthquakes</span> using β-statistics. Statistically significant dynamic triggering of local <span class="hlt">earthquakes</span> was observed at 7 out of the 53 analysed stations. Two of these stations are located in the northeast coast of Madagascar and the other five stations are located in the Kaapvaal Craton, southern Africa. We found no evidence of dynamically triggered seismic activity in stations located near the structures of the East African Rift System. Hydrothermal activity exists close to the stations that recorded dynamic triggering, however, it also exists near the East African Rift System structures where no triggering was observed. Our results suggest that factors other than solely tectonic regime and geothermalism are needed to explain the mechanisms that underlie <span class="hlt">earthquake</span> triggering.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70022786','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70022786"><span>On <span class="hlt">rate</span>-state and Coulomb failure <span class="hlt">models</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Gomberg, J.; Beeler, N.; Blanpied, M.</p> <p>2000-01-01</p> <p>We examine the predictions of Coulomb failure stress and <span class="hlt">rate</span>-state frictional <span class="hlt">models</span>. 
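The β-statistic used for the significance test in the triggering analysis above can be sketched in its common binomial form; the counts and window lengths below are invented:

```python
import math

def beta_statistic(n_after, n_total, t_after, t_total):
    """beta-statistic for a seismicity-rate change (binomial form):
    compares the observed count in the post-trigger window with the
    count expected if the n_total events were spread uniformly in
    time.  |beta| greater than ~2 is a common significance cutoff."""
    p = t_after / t_total                 # fraction of time in window
    expected = n_total * p
    variance = n_total * p * (1.0 - p)
    return (n_after - expected) / math.sqrt(variance)

# Hypothetical example: 12 of 40 catalogued events fall in a 1-day
# window after the teleseism, out of a 30-day study period.
b = beta_statistic(n_after=12, n_total=40, t_after=1.0, t_total=30.0)
print(round(b, 2))
```

A value this far above 2 would indicate a statistically significant rate increase at that station.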
We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant <span class="hlt">rate</span> (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity <span class="hlt">rate</span> r(t)/r0, testable using <span class="hlt">earthquake</span> catalogs, where r0 is the constant <span class="hlt">rate</span> resulting from tectonic stressing. <span class="hlt">Models</span> of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay <span class="hlt">rate</span>, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background <span class="hlt">rate</span>. A Coulomb <span class="hlt">model</span> requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity <span class="hlt">rate</span> increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain <span class="hlt">rate</span>-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and <span class="hlt">rate</span>-state <span class="hlt">models</span> theoretically. <span class="hlt">Rate</span>-state <span class="hlt">model</span> fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. 
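The behavior required of r(t)/r0 (a finite immediate jump, Omori-like decay, and a return to the background rate) is captured by Dieterich's (1994) closed-form rate-state response to a stress step; the parameter values below are hypothetical:

```python
import math

def rate_ratio(t, dcfs, a_sigma, stressing_rate):
    """Dieterich (1994) seismicity-rate response to a stress step:
        r/r0 = 1 / (1 + (exp(-dcfs/a_sigma) - 1) * exp(-t/ta)),
    with aftershock duration ta = a_sigma / stressing_rate.
    dcfs: Coulomb stress step; a_sigma: constitutive parameter a
    times normal stress; all in consistent units (here MPa, yr)."""
    ta = a_sigma / stressing_rate
    return 1.0 / (1.0 + (math.exp(-dcfs / a_sigma) - 1.0)
                  * math.exp(-t / ta))

# Hypothetical values: 0.1 MPa step, a*sigma = 0.02 MPa, tectonic
# stressing of 0.005 MPa/yr, giving ta = 4 yr.
jump = rate_ratio(1e-6, dcfs=0.1, a_sigma=0.02, stressing_rate=0.005)
late = rate_ratio(40.0, dcfs=0.1, a_sigma=0.02, stressing_rate=0.005)
print(jump, late)
```

As a_sigma shrinks relative to the stress step, the initial jump grows and the decay shortens, approaching the instantaneous, infinite-rate, zero-duration Coulomb limit described in the abstract.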
The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the <span class="hlt">rate</span>-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a <span class="hlt">rate</span>-state <span class="hlt">model</span> behaves like a modified</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.T22D..06G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.T22D..06G"><span>Distributing <span class="hlt">Earthquakes</span> Among California's Faults: A Binary Integer Programming Approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Geist, E. L.; Parsons, T.</p> <p>2016-12-01</p> <p>Statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are <span class="hlt">earthquakes</span> distributed to match observed fault-slip <span class="hlt">rates</span>? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip <span class="hlt">rates</span> are used; these are based on geologic and geodetic data and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each <span class="hlt">earthquake</span> from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each <span class="hlt">earthquake</span> that results in an optimal match of slip <span class="hlt">rates</span>, in an L1-norm sense.
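At toy scale, a binary-decision formulation of this kind can be solved by exhaustive enumeration rather than a BIP solver; the faults, target slip rates, and per-event slip contributions below are invented:

```python
from itertools import product

# Toy assignment problem: place each earthquake from a short synthetic
# catalog on one of two hypothetical faults so that the implied slip
# rates best match "observed" rates in an L1 sense.
target_rate = {"faultA": 3.0, "faultB": 1.0}   # mm/yr, invented
quake_slip = [1.2, 0.8, 1.0, 0.6, 0.4]         # mm/yr contributed each
faults = list(target_rate)

best_cost, best_assign = float("inf"), None
# Decision vector: one binary choice (fault index) per earthquake.
for assign in product(range(len(faults)), repeat=len(quake_slip)):
    rate = {f: 0.0 for f in faults}
    for q, f_idx in zip(quake_slip, assign):
        rate[faults[f_idx]] += q
    cost = sum(abs(rate[f] - target_rate[f]) for f in faults)
    if cost < best_cost:
        best_cost, best_assign = cost, assign

print(best_cost, [faults[i] for i in best_assign])
```

Exhaustive search is exponential in catalog size, which is why problems with more than 10^5 binary variables require branch-and-bound and the other modern BIP techniques the abstract lists.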
Rupture area and slip associated with each <span class="hlt">earthquake</span> are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip <span class="hlt">rates</span> provide explicit minimum and maximum constraints to the BIP <span class="hlt">model</span>, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been recently developed, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 <span class="hlt">earthquakes</span> and California's faults with slip-<span class="hlt">rates</span> > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70023020','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70023020"><span>GPS constraints on M 7-8 <span class="hlt">earthquake</span> recurrence times for the New Madrid seismic zone</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Stuart, W.D.</p> <p>2001-01-01</p> <p>Newman et al. (1999) estimate the time interval between the 1811-1812 <span class="hlt">earthquake</span> sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates.
To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault <span class="hlt">models</span>, the half-plane <span class="hlt">model</span> gives nearly the maximum <span class="hlt">rate</span> of ground motion for the same interseismic slip <span class="hlt">rate</span>. Alternative <span class="hlt">models</span> with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip <span class="hlt">rate</span> and thus can have <span class="hlt">earthquake</span> recurrence times much less than 2,500 years.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFMMR42A..01B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFMMR42A..01B"><span>Constraining friction, dilatancy and effective stress with <span class="hlt">earthquake</span> <span class="hlt">rates</span> in the deep crust</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Beeler, N. M.; Thomas, A.; Burgmann, R.; Shelly, D. R.</p> <p>2015-12-01</p> <p>Similar to their behavior on the deep extent of some subduction zones, families of recurring low-frequency <span class="hlt">earthquakes</span> (LFE) within zones of non-volcanic tremor on the San Andreas fault in central California show strong sensitivity to stresses induced by the tides. Taking all of the LFE families collectively, LFEs occur at all levels of the daily tidal stress, and are in phase with the very small, ~200 Pa, shear stress amplitudes while being uncorrelated with the ~2 kPa tidal normal stresses. 
Following previous work we assume LFE sources are small, persistent regions that repeatedly fail during shear within a much larger scale, otherwise aseismically creeping fault zone and that the correlation of LFE occurrence reflects modulation of the fault creep <span class="hlt">rate</span> by the tidal stresses. We examine the predictions of laboratory-observed <span class="hlt">rate</span>-dependent dilatancy associated with frictional slip. The effect of dilatancy hardening is to damp the slip <span class="hlt">rate</span>, so high dilatancy under undrained pore pressure reduces modulation of slip <span class="hlt">rate</span> by the tides. The undrained end-member <span class="hlt">model</span> produces: 1) no sensitivity to the tidal normal stress, as first suggested in this context by Hawthorne and Rubin [2010], and 2) fault creep <span class="hlt">rate</span> in phase with the tidal shear stress. Room temperature laboratory-observed values of the dilatancy and friction coefficients for talc, an extremely weak and weakly dilatant material, under-predict the observed San Andreas modulation at least by an order of magnitude owing to too much dilatancy. This may reflect a temperature dependence of the dilatancy and friction coefficients, both of which are expected to be zero at the brittle-ductile transition. The observed tidal modulation constrains the product of the friction and dilatancy coefficients to be at most 5 x 10-7 in the LFE source region, an order of magnitude smaller than observed at room temperature for talc. 
Alternatively, considering the predictions of a purely <span class="hlt">rate</span>-dependent talc friction would</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.T23E2634Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.T23E2634Y"><span>Weak ductile shear zone beneath the western North Anatolian Fault Zone: inferences from <span class="hlt">earthquake</span> cycle <span class="hlt">model</span> constrained by geodetic observations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yamasaki, T.; Wright, T. J.; Houseman, G. A.</p> <p>2013-12-01</p> <p> in the weak zone of ~ 10^18×0.3 Pa s, and larger than ~ 10^20 Pa s outside this region. <span class="hlt">Models</span> with sharp boundaries to the weak zone fit the data better than those with a smooth increase of viscosity away from the fault. Thus abrupt changes in material properties, such as those that might result from grain-size reduction, may be required in addition to any effect from shear heating. Unlike some previous <span class="hlt">models</span>, we do not require non-linear stress-dependent viscosities. Our <span class="hlt">models</span> imply that geodetic strain <span class="hlt">rates</span> decay to a quasi-steady state within about 10% of the inter-<span class="hlt">earthquake</span> period (years or decades) and that interseismic geodetic observations can therefore be used to infer the long-term geological slip <span class="hlt">rate</span>, provided there has not been a recent <span class="hlt">earthquake</span>.
Rheologies inferred from postseismic studies alone likely reflect the rheology of the weak zone beneath the fault, and should not be used to infer the strength profile of normal lithosphere.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S23A2767W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S23A2767W"><span><span class="hlt">Earthquake</span> Loss Scenarios: Warnings about the Extent of Disasters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wyss, M.; Tolis, S.; Rosset, P.</p> <p>2016-12-01</p> <p>It is imperative that losses expected due to future <span class="hlt">earthquakes</span> be 
estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce fatalities and efficiently help the injured. Scenarios for <span class="hlt">earthquake</span> parameters can be constructed to a reasonable accuracy in highly active <span class="hlt">earthquake</span> belts, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates, however, it would be desirable that more than one group calculate an estimate for the same area. By discussing these estimates, one may find a consensus on the range of potential disasters and persuade officials and residents of the reality of the <span class="hlt">earthquake</span> threat. To <span class="hlt">model</span> a scenario and estimate <span class="hlt">earthquake</span> losses requires sufficiently accurate data sets on the number of people present, the built environment, and, if possible, the transmission of seismic waves. As examples we use loss estimates for possible repeats of historic <span class="hlt">earthquakes</span> in Greece that occurred between 464 BC and 700 AD. We <span class="hlt">model</span> future large Greek <span class="hlt">earthquakes</span> as having M6.8 and rupture lengths of 60 km. In four locations where historic <span class="hlt">earthquakes</span> with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with about four times that number injured. Defining the area of influence of these <span class="hlt">earthquakes</span> as that with shaking intensities greater than or equal to V, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece, using the M6 1999 Athens <span class="hlt">earthquake</span> and matching the isoseismal information for six <span class="hlt">earthquakes</span> that occurred in Greece during the last 140 years. 
Comparing the fatality numbers that would occur theoretically today with the numbers reported, and correcting for the increase in population, we estimate that the improvement of the building stock has reduced the mortality and injury <span class="hlt">rate</span> in Greek</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JGRB..121.3586B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JGRB..121.3586B"><span>Bayesian probabilities for Mw 9.0+ <span class="hlt">earthquakes</span> in the Aleutian Islands from a regionally scaled global <span class="hlt">rate</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Butler, Rhett; Frazer, L. Neil; Templeton, William J.</p> <p>2016-05-01</p> <p>We use the global <span class="hlt">rate</span> of Mw ≥ 9.0 <span class="hlt">earthquakes</span>, and standard Bayesian procedures, to estimate the probability of such mega events in the Aleutian Islands, where they pose a significant risk to Hawaii. We find that the probability of such an <span class="hlt">earthquake</span> along the Aleutian island arc is 6.5% to 12% over the next 50 years (50% credibility interval) and that the annualized risk to Hawai'i is about $30 M. Our method (the regionally scaled global <span class="hlt">rate</span> method or RSGR) is to scale the global <span class="hlt">rate</span> of Mw 9.0+ events in proportion to the fraction of global subduction (units of area per year) that takes place in the Aleutians. The RSGR method assumes that Mw 9.0+ events are a Poisson process with a <span class="hlt">rate</span> that is both globally and regionally stationary on the time scale of centuries, and it follows the principle of Burbidge et al. 
(2008) who used the product of fault length and convergence <span class="hlt">rate</span>, i.e., the area being subducted per annum, to scale the Poisson <span class="hlt">rate</span> for the GSS to sections of the Indonesian subduction zone. Before applying RSGR to the Aleutians, we first apply it to five other regions of the global subduction system where its <span class="hlt">rate</span> predictions can be compared with those from paleotsunami, paleoseismic, and geoarcheology data. To obtain regional <span class="hlt">rates</span> from paleodata, we give a closed-form solution for the probability density function of the Poisson <span class="hlt">rate</span> when event count and observation time are both uncertain.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70032649','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70032649"><span>Updated determination of stress parameters for nine well-recorded <span class="hlt">earthquakes</span> in eastern North America</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Boore, David M.</p> <p>2012-01-01</p> <p>Stress parameters (Δσ) are determined for nine relatively well-recorded <span class="hlt">earthquakes</span> in eastern North America for ten attenuation <span class="hlt">models</span>. This is an update of a previous study by Boore et al. (2010). New to this paper are observations from the 2010 Val des Bois <span class="hlt">earthquake</span>, additional observations for the 1988 Saguenay and 2005 Riviere du Loup <span class="hlt">earthquakes</span>, and consideration of six attenuation <span class="hlt">models</span> in addition to the four used in the previous study. As in that study, it is clear that Δσ depends strongly on the <span class="hlt">rate</span> of geometrical spreading (as well as other <span class="hlt">model</span> parameters). 
The observations necessary to determine conclusively which attenuation <span class="hlt">model</span> best fits the data are still lacking. At this time, a simple 1/R <span class="hlt">model</span> seems to give as good an overall fit to the data as more complex <span class="hlt">models</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27418504','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27418504"><span>Connecting slow <span class="hlt">earthquakes</span> to huge <span class="hlt">earthquakes</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Obara, Kazushige; Kato, Aitaro</p> <p>2016-07-15</p> <p>Slow <span class="hlt">earthquakes</span> are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional <span class="hlt">earthquakes</span>. However, slow <span class="hlt">earthquakes</span> and huge megathrust <span class="hlt">earthquakes</span> can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow <span class="hlt">earthquakes</span> may help to reveal the physics underlying megathrust events as useful analogs. Slow <span class="hlt">earthquakes</span> may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge <span class="hlt">earthquakes</span> if the adjacent locked region is critically loaded. Careful and precise monitoring of slow <span class="hlt">earthquakes</span> may provide new information on the likelihood of impending huge <span class="hlt">earthquakes</span>. 
Copyright © 2016, American Association for the Advancement of Science.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70178131','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70178131"><span>Increasing seismicity in the U. S. midcontinent: Implications for <span class="hlt">earthquake</span> hazard</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Ellsworth, William L.; Llenos, Andrea L.; McGarr, Arthur F.; Michael, Andrew J.; Rubinstein, Justin L.; Mueller, Charles S.; Petersen, Mark D.; Calais, Eric</p> <p>2015-01-01</p> <p><span class="hlt">Earthquake</span> activity in parts of the central United States has increased dramatically in recent years. The space-time distribution of the increased seismicity, as well as numerous published case studies, indicates that the increase is of anthropogenic origin, principally driven by injection of wastewater coproduced with oil and gas from tight formations. Enhanced oil recovery and long-term production also contribute to seismicity at a few locations. 
Preliminary hazard <span class="hlt">models</span> indicate that areas experiencing the highest <span class="hlt">rate</span> of <span class="hlt">earthquakes</span> in 2014 have a short-term (one-year) hazard comparable to or higher than the hazard in the source region of tectonic <span class="hlt">earthquakes</span> in the New Madrid and Charleston seismic zones.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..18.7801S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..18.7801S"><span>Protracted fluvial recovery from medieval <span class="hlt">earthquakes</span>, Pokhara, Nepal</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Stolle, Amelie; Bernhardt, Anne; Schwanghart, Wolfgang; Andermann, Christoff; Schönfeldt, Elisabeth; Seidemann, Jan; Adhikari, Basanta R.; Merchel, Silke; Rugel, Georg; Fort, Monique; Korup, Oliver</p> <p>2016-04-01</p> <p>River response to strong <span class="hlt">earthquake</span> shaking in mountainous terrain often entails the flushing of sediments delivered by widespread co-seismic landsliding. Detailed mass-balance studies following major <span class="hlt">earthquakes</span> in China, Taiwan, and New Zealand suggest fluvial recovery times ranging from several years to decades. We report a detailed chronology of <span class="hlt">earthquake</span>-induced valley fills in the Pokhara region of western-central Nepal, and demonstrate that rivers continue to adjust to several large medieval <span class="hlt">earthquakes</span> to the present day, thus challenging the notion of transient fluvial response to seismic disturbance. 
The Pokhara valley features one of the largest and most extensively dated sedimentary records of <span class="hlt">earthquake</span>-triggered sedimentation in the Himalayas, and independently augments paleo-seismological archives obtained mainly from fault trenches and historic documents. New radiocarbon dates from the catastrophically deposited Pokhara Formation document multiple phases of extremely high geomorphic activity between ~700 and ~1700 AD, preserved in thick sequences of alternating fluvial conglomerates, massive mud and silt beds, and cohesive debris-flow deposits. These dated fan-marginal slackwater sediments indicate pronounced sediment pulses in the wake of at least three large medieval <span class="hlt">earthquakes</span> in ~1100, 1255, and 1344 AD. We combine these dates with digital elevation <span class="hlt">models</span>, geological maps, differential GPS data, and sediment logs to estimate the extent of these three pulses that are characterized by sedimentation <span class="hlt">rates</span> of ~200 mm yr^-1 and peak <span class="hlt">rates</span> as high as 1,000 mm yr^-1. Some 5.5 to 9 km^3 of material infilled the pre-existing topography, and is now prone to ongoing fluvial dissection along major canyons. 
Contemporary river incision into the Pokhara Formation is rapid (120-170 mm yr^-1), triggering widespread bank erosion, channel changes, and very high sediment yields, of the order of 10^3 to 10^5 t km^-2 yr^-1, that far outweigh bedrock denudation <span class="hlt">rates</span></p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005AGUFM.S41D..08K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005AGUFM.S41D..08K"><span>Regional <span class="hlt">Earthquake</span> Likelihood <span class="hlt">Models</span>: A realm on shaky grounds?</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kossobokov, V.</p> <p>2005-12-01</p> <p>Seismology is juvenile and its appropriate statistical tools to-date may have a "medieval flavor" for those who rush to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", <span class="hlt">earthquake</span> forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and, therefore, quantify objectively a forecast/prediction method's performance. Likelihood scoring is one of the delicate tools of Statistics, which could be worthless or even misleading when inappropriate probability <span class="hlt">models</span> are used. This is a basic loophole for a misuse of likelihood, as well as other statistical methods, in practice. The flaw could be avoided by accurate verification of generic probability <span class="hlt">models</span> against the empirical data. 
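The likelihood scoring discussed above can be made concrete with a toy example (entirely illustrative numbers, not RELM data): a gridded Poisson forecast is scored by the log-likelihood of the observed bin counts, and a poorly calibrated model scores lower.

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Sum of log Poisson probabilities: log P(n|lam) = -lam + n*log(lam) - log(n!)."""
    total = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        total += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return total

# Two hypothetical forecasts over the same spatial bins (made-up numbers):
observed = [0, 2, 1, 0, 3]
model_a = [0.5, 1.5, 1.0, 0.5, 2.5]   # rates close to the observations
model_b = [2.0, 0.2, 0.2, 2.0, 0.5]   # rates high where nothing happened, low where events occurred

ll_a = poisson_log_likelihood(model_a, observed)
ll_b = poisson_log_likelihood(model_b, observed)
print(ll_a > ll_b)  # the better-calibrated model scores higher here
```

The abstract's caveat applies directly: the score only ranks models within the assumed (here Poisson) family, and says nothing about whether that family fits the data at all.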
It is not an easy task within the framework of the Regional <span class="hlt">Earthquake</span> Likelihood <span class="hlt">Models</span> (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting, with confidence above 97%, "the generic California clustering <span class="hlt">model</span>" used in the automatic calculations. As a result, since the date of publication in Nature the United States Geological Survey website delivers to the public, emergency</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5738038','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5738038"><span>Children's emotional experience two years after an <span class="hlt">earthquake</span>: An exploration of knowledge of <span class="hlt">earthquakes</span> and associated emotions</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Burro, Roberto; Hall, Rob</p> <p>2017-01-01</p> <p>A major <span class="hlt">earthquake</span> has a potentially highly traumatic impact on children’s psychological functioning. 
However, while many studies on children describe negative consequences in terms of mental health and psychiatric disorders, little is known regarding how the developmental processes of emotions can be affected following exposure to disasters. Objectives: We explored whether and how the exposure to a natural disaster such as the 2012 Emilia Romagna <span class="hlt">earthquake</span> affected the development of children’s emotional competence in terms of understanding, regulating, and expressing emotions, after two years, when compared with a control group not exposed to the <span class="hlt">earthquake</span>. We also examined the role of class level and gender. Method: The sample included two groups of children (n = 127) attending primary school: The experimental group (n = 65) experienced the 2012 Emilia Romagna <span class="hlt">earthquake</span>, while the control group (n = 62) did not. The data collection took place two years after the <span class="hlt">earthquake</span>, when children were seven- or ten-year-olds. Beyond assessing the children’s understanding of emotions and regulating abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of <span class="hlt">earthquakes</span> and associated emotions, and a structured task on the intensity of some target emotions. Results: We applied Generalized Linear Mixed <span class="hlt">Models</span>. Exposure to the <span class="hlt">earthquake</span> did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of <span class="hlt">earthquakes</span>, emotional language, and emotions associated with <span class="hlt">earthquakes</span> were, respectively, more complex, frequent, and intense for children who had experienced the <span class="hlt">earthquake</span>, and at increasing ages. 
Conclusions: Our data extend the generalizability of theoretical <span class="hlt">models</span> on children’s psychological functioning following disasters, such as the dose-response <span class="hlt">model</span> and the organizational-developmental <span class="hlt">model</span> for child resilience, and</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70189528','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70189528"><span>Seismic‐hazard forecast for 2016 including induced and natural <span class="hlt">earthquakes</span> in the central and eastern United States</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Llenos, Andrea L.; Ellsworth, William L.; Michael, Andrew J.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.</p> <p>2016-01-01</p> <p>The U.S. Geological Survey (USGS) has produced a one‐year (2016) probabilistic seismic‐hazard assessment for the central and eastern United States (CEUS) that includes contributions from both induced and natural <span class="hlt">earthquakes</span> that are constructed with probabilistic methods using alternative data and inputs. This hazard assessment builds on our 2016 final <span class="hlt">model</span> (Petersen et al., 2016) by adding sensitivity studies, illustrating hazard in new ways, incorporating new population data, and discussing potential improvements. The <span class="hlt">model</span> considers short‐term seismic activity <span class="hlt">rates</span> (primarily 2014–2015) and assumes that the activity <span class="hlt">rates</span> will remain stationary over short time intervals. 
The final <span class="hlt">model</span> considers different ways of categorizing induced and natural <span class="hlt">earthquakes</span> by incorporating two equally weighted <span class="hlt">earthquake</span> <span class="hlt">rate</span> submodels that are composed of alternative <span class="hlt">earthquake</span> inputs for catalog duration, smoothing parameters, maximum magnitudes, and ground‐motion <span class="hlt">models</span>. These alternatives represent uncertainties in how we calculate <span class="hlt">earthquake</span> occurrence and the diversity of opinion within the science community. In this article, we also test sensitivity to the minimum moment magnitude between M 4 and M 4.7 and the choice of applying a declustered catalog with b=1.0 rather than the full catalog with b=1.3. We incorporate two <span class="hlt">earthquake</span> <span class="hlt">rate</span> submodels: in the informed submodel we classify <span class="hlt">earthquakes</span> as induced or natural, and in the adaptive submodel we do not differentiate. Both alternative submodel hazard maps depict high hazard, and they are combined in the final <span class="hlt">model</span>. Results depict several ground‐shaking measures as well as intensity and include maps showing a high‐hazard level (1% probability of exceedance in 1 year or greater). 
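The sensitivity to the minimum-magnitude threshold mentioned above can be illustrated with generic Gutenberg-Richter scaling (our illustration, not the USGS calculation): with N(≥M) ∝ 10^(-bM), raising the threshold from M 4.0 to M 4.7 at b = 1.0 cuts the modeled event count by a factor of 10^0.7 ≈ 5.

```python
def gr_rate_ratio(m_low: float, m_high: float, b: float) -> float:
    """Ratio N(>=m_low) / N(>=m_high) under Gutenberg-Richter scaling:
    N(>=M) = 10**(a - b*M), so the a-value cancels in the ratio."""
    return 10 ** (b * (m_high - m_low))

print(gr_rate_ratio(4.0, 4.7, b=1.0))  # ~5: five M>=4.0 events per M>=4.7 event
print(gr_rate_ratio(4.0, 4.7, b=1.3))  # ~8.1: a steeper b-value widens the gap
```

This is why the declustered-catalog (b=1.0) and full-catalog (b=1.3) alternatives imply different numbers of small events for the same rate of larger ones.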
Ground motions reach 0.6g horizontal peak ground acceleration (PGA) in north‐central Oklahoma and southern Kansas, and about 0.2g PGA in the Raton basin of Colorado and New Mexico, in central Arkansas, and in</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70033539','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70033539"><span>Exponential decline of aftershocks of the M7.9 1868 great Kau <span class="hlt">earthquake</span>, Hawaii, through the 20th century</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Klein, F.W.; Wright, Tim</p> <p>2008-01-01</p> <p>The remarkable catalog of Hawaiian <span class="hlt">earthquakes</span> going back to the 1820s is based on missionary diaries, newspaper accounts, and instrumental records and spans the great M7.9 Kau <span class="hlt">earthquake</span> of April 1868 and its aftershock sequence. The <span class="hlt">earthquake</span> record since 1868 defines a smooth curve of declining <span class="hlt">rate</span> into the 21st century, complete to M5.2, after five short volcanic swarms are removed. A single aftershock curve fits the <span class="hlt">earthquake</span> record, even with numerous M6 and 7 main shocks and eruptions. The timing of some moderate <span class="hlt">earthquakes</span> may be controlled by magmatic stresses, but their overall long-term <span class="hlt">rate</span> reflects that of aftershocks of the Kau <span class="hlt">earthquake</span>. The 1868 <span class="hlt">earthquake</span> is, therefore, the largest and most controlling stress event in the 19th and 20th centuries. We fit both the modified Omori (power law) and stretched exponential (SE) functions to the <span class="hlt">earthquakes</span>. We found that the modified Omori law is a good fit to the M ≥ 
5.2 <span class="hlt">earthquake</span> <span class="hlt">rate</span> for the first 10 years or so and the more rapidly declining SE function fits better thereafter, as supported by three statistical tests. The switch to exponential decay suggests that a possible change in aftershock physics may occur from <span class="hlt">rate</span>- and state-dependent fault friction, with no change in the stress <span class="hlt">rate</span>, to viscoelastic stress relaxation. The 61-year exponential decay constant is at the upper end of the range of geodetic relaxation times seen after other global <span class="hlt">earthquakes</span>. <span class="hlt">Modeling</span> deformation in Hawaii is beyond the scope of this paper, but a simple interpretation of the decay suggests an effective viscosity of 10^19 to 10^20 Pa s pertains in the volcanic spreading of Hawaii's flanks. The rapid decline in <span class="hlt">earthquake</span> <span class="hlt">rate</span> poses questions for seismic hazard estimates in an area that is cited as one of the most hazardous in the United States.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014GeoJI.197..620K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014GeoJI.197..620K"><span>Statistical <span class="hlt">earthquake</span> focal mechanism forecasts</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kagan, Yan Y.; Jackson, David D.</p> <p>2014-04-01</p> <p>Forecasts of the focal mechanisms of future shallow (depth 0-70 km) <span class="hlt">earthquakes</span> are important for seismic hazard estimates, Coulomb stress calculations, and other <span class="hlt">models</span> of <span class="hlt">earthquake</span> occurrence. Here we report on a high-resolution global forecast of <span class="hlt">earthquake</span> <span class="hlt">rate</span> density as a function of location, magnitude and focal mechanism. 
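The two decay laws compared in the Kau aftershock study above can be written down directly. A sketch with placeholder constants (not the paper's fitted values) showing why an exponential-type law eventually falls below a power law:

```python
import math

def modified_omori(t: float, K: float, c: float, p: float) -> float:
    """Modified Omori law: rate(t) = K / (t + c)**p."""
    return K / (t + c) ** p

def stretched_exponential(t: float, A: float, tau: float, q: float) -> float:
    """Stretched exponential: rate(t) = A * exp(-(t / tau)**q)."""
    return A * math.exp(-((t / tau) ** q))

# Placeholder parameters chosen only to illustrate the shapes; tau = 61 years
# echoes the decay constant quoted in the abstract, the rest are invented.
for t in (1.0, 10.0, 61.0, 500.0):  # years after the main shock
    print(t, modified_omori(t, K=100, c=1.0, p=1.0),
             stretched_exponential(t, A=50, tau=61.0, q=1.0))
```

At late times the power law always dominates the exponential, which is why a late-time switch to faster-than-Omori decay is the diagnostic the authors test for.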
In previous publications we reported forecasts at 0.5° spatial resolution, covering the latitude range from -75° to +75°, based on the Global Central Moment Tensor <span class="hlt">earthquake</span> catalogue. In the new forecasts we have improved the spatial resolution to 0.1° and extended the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each gridpoint. Simultaneously, we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms, using the method proposed by Kagan & Jackson in 1994. This average angle reveals the level of tectonic complexity of a region and indicates the accuracy of the prediction. The procedure becomes problematic where longitude lines are not approximately parallel, and where shallow <span class="hlt">earthquakes</span> are so sparse that an adequate sample spans very large distances. North or south of 75°, the azimuths of points 1000 km away may vary by about 35°. We solved this problem by calculating focal mechanisms on a plane tangent to the Earth's surface at each forecast point, correcting for the rotation of the longitude lines at the locations of <span class="hlt">earthquakes</span> included in the averaging. The corrections are negligible between -30° and +30° latitude, but outside that band uncorrected rotations can be significantly off. 
Improved forecasts at 0.5° and 0.1° resolution are posted at http://eq.ess.ucla.edu/kagan/glob_gcmt_index.html.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AGUFMNG54A..07J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AGUFMNG54A..07J"><span><span class="hlt">Earthquake</span> Prediction in Large-scale Faulting Experiments</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.</p> <p>2004-12-01</p> <p> nucleation in these experiments is consistent with observations and theory of Dieterich and Kilgore (1996). Precursory strains can be detected typically after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of <span class="hlt">earthquake</span> prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically-based <span class="hlt">models</span> of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip <span class="hlt">rate</span> V. Based on a least squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an <span class="hlt">earthquake</span>, failure time can be predicted from measured fault slip <span class="hlt">rate</span> with a typical error of 140 days, and a day prior to the <span class="hlt">earthquake</span> with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and on distinguishing <span class="hlt">earthquake</span> precursors from other strain transients. 
There is some field evidence of precursory seismic strain for large <span class="hlt">earthquakes</span> (Bufe and Varnes, 1993), which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern-recognition algorithms such as principal component analysis (Rundle et al., 2000).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S44B..08S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S44B..08S"><span>Are <span class="hlt">Earthquake</span> Clusters/Supercycles Real or Random?</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.</p> <p>2016-12-01</p> <p>Long records of <span class="hlt">earthquakes</span> at plate boundaries such as the San Andreas or Cascadia often show that large <span class="hlt">earthquakes</span> occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, <span class="hlt">earthquakes</span> resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future <span class="hlt">earthquake</span> is equally likely immediately after the past one and much later, so <span class="hlt">earthquakes</span> can cluster in time. 
We analyze the agreement between such a <span class="hlt">model</span> and inter-event times for Parkfield, Pallett Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable <span class="hlt">model</span> in which <span class="hlt">earthquake</span> probability increases with time between <span class="hlt">earthquakes</span> and decreases after an <span class="hlt">earthquake</span>. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an <span class="hlt">earthquake</span> can remain higher than the long-term average for several cycles. Thus the probability of another <span class="hlt">earthquake</span> is path dependent, i.e., it depends on the prior <span class="hlt">earthquake</span> history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of <span class="hlt">earthquakes</span> result from both the <span class="hlt">model</span> parameters and chance, so two runs with the same parameters look different. 
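The uniformity check for the memoryless model described above can be sketched as follows (synthetic data, not the paleoseismic records; note we apply the fitted CDF itself, the probability integral transform, to map exponential inter-event times to Uniform(0,1)):

```python
import math
import random

random.seed(42)

# Synthetic inter-event times from a memoryless (exponential) recurrence process.
mean_interval = 150.0  # years, an arbitrary illustrative value
intervals = [random.expovariate(1.0 / mean_interval) for _ in range(200)]

# Probability integral transform: U = F(t) = 1 - exp(-t/mean) is ~Uniform(0,1)
# when the memoryless model holds (mean estimated from the data itself).
est_mean = sum(intervals) / len(intervals)
u = sorted(1.0 - math.exp(-t / est_mean) for t in intervals)

# One-sample Kolmogorov-Smirnov statistic against Uniform(0,1):
# small values are consistent with memorylessness, large values reject it.
n = len(u)
ks = max(max((i + 1) / n - ui, ui - i / n) for i, ui in enumerate(u))
print(f"KS statistic: {ks:.3f}")
```

Applying the same transform to a real inter-event record and finding a strongly non-uniform result is what would argue that apparent clusters are not just Poisson chance.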
The <span class="hlt">model</span> parameters control the average time between events and the variation of the actual times around this average, so</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70010297','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70010297"><span>Prospects for <span class="hlt">earthquake</span> prediction and control</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Healy, J.H.; Lee, W.H.K.; Pakiser, L.C.; Raleigh, C.B.; Wood, M.D.</p> <p>1972-01-01</p> <p>The San Andreas fault is viewed, according to the concepts of seafloor spreading and plate tectonics, as a transform fault that separates the Pacific and North American plates and along which relative movements of 2 to 6 cm/year have been taking place. The resulting strain can be released by creep, by <span class="hlt">earthquakes</span> of moderate size, or (as near San Francisco and Los Angeles) by great <span class="hlt">earthquakes</span>. Microearthquakes, as mapped by a dense seismograph network in central California, generally coincide with zones of the San Andreas fault system that are creeping. Microearthquakes are few and scattered in zones where elastic energy is being stored. Changes in the <span class="hlt">rate</span> of strain, as recorded by tiltmeter arrays, have been observed before several <span class="hlt">earthquakes</span> of about magnitude 4. Changes in fluid pressure may control timing of seismic activity and make it possible to control natural <span class="hlt">earthquakes</span> by controlling variations in fluid pressure in fault zones. An experiment in <span class="hlt">earthquake</span> control is underway at the Rangely oil field in Colorado, where the <span class="hlt">rates</span> of fluid injection and withdrawal in experimental wells are being controlled. © 1972.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17794569','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17794569"><span>The 1985 central Chile <span class="hlt">earthquake</span>: a repeat of previous great <span class="hlt">earthquakes</span> in the region?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G</p> <p>1986-07-25</p> <p>A great <span class="hlt">earthquake</span> (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the <span class="hlt">earthquake</span>. The aftershocks of the 1985 <span class="hlt">earthquake</span> define a rupture area of 170 by 110 square kilometers. The <span class="hlt">earthquake</span> was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great <span class="hlt">earthquakes</span> in this region. An analysis of previous <span class="hlt">earthquakes</span> suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable <span class="hlt">models</span> of <span class="hlt">earthquake</span> recurrence. The great <span class="hlt">earthquakes</span> in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic.
Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great <span class="hlt">earthquake</span> in the next few tens of years.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1919256P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1919256P"><span>From Geodesy to Tectonics: Observing <span class="hlt">Earthquake</span> Processes from Space (Augustus Love Medal Lecture)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Parsons, Barry</p> <p>2017-04-01</p> <p>A suite of powerful satellite-based techniques has been developed over the past two decades allowing us to measure and interpret variations in the deformation around active continental faults occurring in <span class="hlt">earthquakes</span>, before the <span class="hlt">earthquakes</span> as strain accumulates, and immediately following them. The techniques include radar interferometry and the measurement of vertical and horizontal surface displacements using very high-resolution (VHR) satellite imagery. They provide near-field measurements of <span class="hlt">earthquake</span> deformation facilitating the association with the corresponding active faults and their topographic expression. The techniques also enable pre- and post-seismic deformation to be determined and hence allow the response of the fault and surrounding medium to changes in stress to be investigated. The talk illustrates both the techniques and the applications with examples from recent <span class="hlt">earthquakes</span>. 
These include the 2013 Balochistan <span class="hlt">earthquake</span>, a predominantly strike-slip event that occurred on the arcuate Hoshab fault in the eastern Makran linking an area of mainly left-lateral shear in the east to one of shortening in the west. The difficulty of reconciling predominantly strike-slip motion with this shortening has led to a wide range of unconventional kinematic and dynamic <span class="hlt">models</span>. Using pre- and post-seismic VHR satellite imagery, we are able to determine a 3-dimensional deformation field for the <span class="hlt">earthquake</span>; Sentinel-1 interferometry shows an increase in the <span class="hlt">rate</span> of creep on a creeping section bounding the northern end of the rupture in response to the <span class="hlt">earthquake</span>. In addition, we will look at the 1978 Tabas <span class="hlt">earthquake</span> for which no measurements of deformation were possible at the time. By combining pre-seismic 'spy' satellite images with modern imagery, and pre-seismic aerial stereo images with post-seismic satellite stereo images, we can determine vertical and horizontal displacements from the <span class="hlt">earthquake</span> and subsequent post-seismic deformation. These observations</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMIN32A..06B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMIN32A..06B"><span>Seismogeodesy for rapid <span class="hlt">earthquake</span> and tsunami characterization</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bock, Y.</p> <p>2016-12-01</p> <p>Rapid estimation of <span class="hlt">earthquake</span> magnitude and fault mechanism is critical for <span class="hlt">earthquake</span> and tsunami warning systems.
Traditionally, the monitoring of <span class="hlt">earthquakes</span> and tsunamis has been based on seismic networks for estimating <span class="hlt">earthquake</span> magnitude and slip, and tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of <span class="hlt">earthquake</span> onset time. Direct measurements of displacements by GPS networks at subduction zones allow for rapid magnitude and slip estimation in the near-source region, which are not affected by instrumental limitations and magnitude saturation experienced by local seismic networks. However, GPS displacements by themselves are too noisy for strict <span class="hlt">earthquake</span> early warning (P-wave detection). Optimally combining high-<span class="hlt">rate</span> GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude scaling relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), finite-source CMT solutions and fault slip <span class="hlt">models</span> as input for tsunami warnings and <span class="hlt">models</span>.
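The PGD-based magnitude scaling mentioned above typically has the form log10(PGD) = A + B·Mw + C·Mw·log10(R), which can be inverted for Mw once PGD and hypocentral distance are known. The sketch below uses illustrative coefficients in the style of published PGD regressions; they are assumptions, not the values used by the operational modules:

```python
import math

# Illustrative regression coefficients (assumed, PGD in cm, R in km);
# operational systems fit their own values to regional datasets.
A, B, C = -5.013, 1.219, -0.178

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert log10(PGD) = A + B*Mw + C*Mw*log10(R) for Mw."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(dist_km))
```

Because PGD grows with magnitude without saturating, larger observed displacements at a fixed distance map monotonically to larger Mw estimates.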
For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18292339','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18292339"><span>Extending <span class="hlt">earthquakes</span>' reach through cascading.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Marsan, David; Lengliné, Olivier</p> <p>2008-02-22</p> <p><span class="hlt">Earthquakes</span>, whatever their size, can trigger other <span class="hlt">earthquakes</span>. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-lasting difficulty is to determine which <span class="hlt">earthquakes</span> are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori <span class="hlt">model</span> nor parameterization. Large regional <span class="hlt">earthquakes</span> are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small <span class="hlt">earthquakes</span> collectively have a greater effect on triggering. 
Hence, cascade triggering is a key component in <span class="hlt">earthquake</span> interactions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70028662','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70028662"><span>Coulomb stress transfer and tectonic loading preceding the 2002 Denali fault <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Bufe, Charles G.</p> <p>2006-01-01</p> <p>Pre-2002 tectonic loading and Coulomb stress transfer are <span class="hlt">modeled</span> along the rupture zone of the M 7.9 Denali fault <span class="hlt">earthquake</span> (DFE) and on adjacent segments of the right-lateral Denali–Totschunda fault system in central Alaska, using a three-dimensional boundary-element program. The segments <span class="hlt">modeled</span> closely follow, for about 95°, the arc of a circle of radius 375 km centered on an inferred asperity near the northeastern end of the intersection of the Patton Bay fault with the Alaskan megathrust under Prince William Sound. The loading <span class="hlt">model</span> includes slip of 6 mm/yr below 12 km along the fault system, consistent with rotation of the Wrangell block about the asperity at a <span class="hlt">rate</span> of about 1°/m.y. as well as slip of the Pacific plate at 5 cm/yr at depth along the Fairweather–Queen Charlotte transform fault system and on the Alaska megathrust. The <span class="hlt">model</span> is consistent with most available pre-2002 Global Positioning System (GPS) displacement <span class="hlt">rate</span> data. Coulomb stresses induced on the Denali–Totschunda fault system (locked above 12 km) by slip at depth and by transfer from the M 9.2 Prince William Sound <span class="hlt">earthquake</span> of 1964 dominated the changing Coulomb stress distribution along the fault. 
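The Coulomb stress calculations above rest on the standard failure-stress change, ΔCFS = Δτ + μ′Δσn. A generic sketch follows (this is not the boundary-element code used in the study, and the effective friction value is an assumption):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change (same units as the inputs, e.g. MPa):
    dCFS = d_tau + mu_eff * d_sigma_n,
    where d_tau is the shear stress change in the slip direction and
    d_sigma_n the normal stress change (unclamping positive).
    mu_eff = 0.4 is a commonly assumed effective friction coefficient."""
    return d_shear + mu_eff * d_normal
```

Positive ΔCFS on a receiver fault moves it toward failure; for example, a shear increase of 0.1 MPa combined with 0.05 MPa of clamping still yields a net positive change.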
The combination of loading (∼70–85%) and coseismic stress transfer from the great 1964 <span class="hlt">earthquake</span> (∼15–30%) was the principal post-1900 stress factor building toward strike-slip failure of the northern Denali and Totschunda segments in the M 7.9 <span class="hlt">earthquake</span> of November 2002. Postseismic stresses transferred from the 1964 <span class="hlt">earthquake</span> may also have been a significant factor. The M 7.2–7.4 Delta River <span class="hlt">earthquake</span> of 1912 (Carver et al., 2004) may have delayed or advanced the timing of the DFE, depending on the details and location of its rupture. The initial subevent of the 2002 DFE was on the 40-km Susitna Glacier thrust fault at the western end of the Denali fault rupture. The Coulomb stress transferred from the 1964 <span class="hlt">earthquake</span> moved the Susitna Glacier thrust</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008AGUFM.S31C..05K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008AGUFM.S31C..05K"><span>Finite-Source Inversion for the 2004 Parkfield <span class="hlt">Earthquake</span> using 3D Velocity <span class="hlt">Model</span> Green's Functions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kim, A.; Dreger, D.; Larsen, S.</p> <p>2008-12-01</p> <p>We determine finite fault <span class="hlt">models</span> of the 2004 Parkfield <span class="hlt">earthquake</span> using 3D Green's functions. Because of the dense station coverage and detailed 3D velocity structure <span class="hlt">model</span> in this region, this <span class="hlt">earthquake</span> provides an excellent opportunity to examine how the 3D velocity structure affects the finite fault inverse solutions. Various studies (e.g.
Michael and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. Also the fault zone at Parkfield is wide, as evidenced by mapped surface faults and where surface slip and creep occurred in the 1966 and the 2004 Parkfield <span class="hlt">earthquakes</span>. For high resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure for the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for the 1989 Loma Prieta <span class="hlt">earthquake</span> using the same source parameterization and data but different Green's functions and found that the <span class="hlt">models</span> were quite different. This indicates that the choice of the velocity <span class="hlt">model</span> significantly affects the waveform <span class="hlt">modeling</span> at near-fault stations. In this study, we used the P-wave velocity <span class="hlt">model</span> developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to the slip on the fault plane. First we <span class="hlt">modeled</span> the waveforms of small <span class="hlt">earthquakes</span> to validate the 3D velocity <span class="hlt">model</span> and the reciprocity of the Green's functions.
In the numerical tests we found that the 3D velocity <span class="hlt">model</span> predicted the individual phases well at frequencies lower than 0</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010GeoJI.183.1525M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010GeoJI.183.1525M"><span><span class="hlt">Earthquake</span> prediction analysis based on empirical seismic <span class="hlt">rate</span>: the M8 algorithm</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Molchan, G.; Romashkova, L.</p> <p>2010-12-01</p> <p>The quality of space-time <span class="hlt">earthquake</span> prediction is usually characterized
by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local <span class="hlt">rate</span> of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized <span class="hlt">rate</span> of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The <span class="hlt">model</span> of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. 
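The prediction-capability measure H = 1 - (n + τ) defined above can be computed as in this minimal sketch, with a discrete weight vector standing in for the normalized event-rate measure λ(dg):

```python
def error_diagram(n_failures, n_targets, alarm_fractions, weights=None):
    """Molchan error-diagram summary:
    n   = fraction of failures-to-predict,
    tau = weighted space-time alarm rate,
    H   = 1 - (n + tau), the prediction capability
    (H = 0 corresponds to a random, skill-free strategy)."""
    n = n_failures / n_targets
    if weights is None:  # equal weighting of subareas if no measure given
        weights = [1.0 / len(alarm_fractions)] * len(alarm_fractions)
    tau = sum(w * a for w, a in zip(weights, alarm_fractions))
    return n, tau, 1.0 - (n + tau)
```

For example, missing 1 of 10 target events while keeping alarms on 20% and 40% of two equally weighted subareas gives n = 0.1, τ = 0.3, and H = 0.6; the statistical significance of such an H is a separate question when the measure λ(dg) is itself uncertain, which is the point of the abstract.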
However, our results argue in favour of non-triviality of the M8 prediction algorithm.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70118276','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70118276"><span>Conditional spectrum computation incorporating multiple causal <span class="hlt">earthquakes</span> and ground-motion prediction <span class="hlt">models</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas</p> <p>2013-01-01</p> <p>The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all <span class="hlt">earthquake</span> scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction <span class="hlt">models</span> (GMPMs) and seismic source <span class="hlt">models</span>. Typical CS calculations to date are produced for a single <span class="hlt">earthquake</span> scenario using a single GMPM, but more precise use requires consideration of at least multiple causal <span class="hlt">earthquakes</span> and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal <span class="hlt">earthquake</span> and GMPM, the proposed approach produces an exact output that has a theoretical basis. 
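The "exact" conditional spectrum described above amounts to a mixture over causal earthquakes and GMPMs weighted by deaggregation probabilities. A schematic sketch for a single period follows (hypothetical inputs, not the USGS implementation):

```python
import math

def conditional_spectrum_point(mu_ln, sigma_ln, rho, eps_star):
    """Single-scenario CS at one period: conditional mean and standard
    deviation of ln Sa, given epsilon* at the conditioning period T*."""
    mu_c = mu_ln + rho * eps_star * sigma_ln
    sigma_c = sigma_ln * math.sqrt(1.0 - rho * rho)
    return mu_c, sigma_c

def conditional_spectrum_multi(cases, weights):
    """Mixture over causal earthquakes/GMPMs: cases are
    (mu_ln, sigma_ln, rho, eps_star) tuples, weights their deaggregation
    probabilities. Returns mixture mean and std via the law of total
    variance, the essence of the 'exact' CS calculation."""
    pts = [conditional_spectrum_point(*c) for c in cases]
    mean = sum(w * m for w, (m, _) in zip(weights, pts))
    var = sum(w * (s * s + (m - mean) ** 2) for w, (m, s) in zip(weights, pts))
    return mean, math.sqrt(var)
```

When the scenarios disagree, the between-scenario term (m - mean)² inflates the mixture standard deviation, which is exactly what the single-scenario approximation misses.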
To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSeis..22..303P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSeis..22..303P"><span>Temporal and spatial distributions of precursory seismicity <span class="hlt">rate</span> changes in the Thailand-Laos-Myanmar border region: implication for upcoming hazardous <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Puangjaktha, Prayot; Pailoplee, Santi</p> <p>2018-01-01</p> <p>To study the prospective areas of upcoming strong-to-major <span class="hlt">earthquakes</span>, i.e., M w ≥ 6.0, a catalog of seismicity in the vicinity of the Thailand-Laos-Myanmar border region was generated and then investigated statistically. Based on the successful investigations of previous works, the seismicity <span class="hlt">rate</span> change (Z value) technique was applied in this study. According to the completeness <span class="hlt">earthquake</span> dataset, eight available case studies of strong-to-major <span class="hlt">earthquakes</span> were investigated retrospectively. 
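The seismicity rate change (Z value) referred to above is commonly computed as a two-sample Z statistic comparing binned rates inside a time window against the background; a generic sketch follows (the authors' exact binning and windowing are not reproduced here):

```python
import math

def z_value(rates_window, rates_background):
    """Standard seismicity rate-change Z statistic, as used in ZMAP-style
    quiescence analyses: Z = (R1 - R2) / sqrt(S1/n1 + S2/n2), where R are
    mean binned rates, S their variances, and n the bin counts.
    A large positive Z flags a rate decrease (seismic quiescence)."""
    n1, n2 = len(rates_background), len(rates_window)
    m1 = sum(rates_background) / n1
    m2 = sum(rates_window) / n2
    v1 = sum((r - m1) ** 2 for r in rates_background) / n1
    v2 = sum((r - m2) ** 2 for r in rates_window) / n2
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
```

A window whose rate drops well below the background yields a strongly positive Z (quiescence), while a rate increase yields a negative Z.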
After iterative tests of the characteristic parameters concerning the number of <span class="hlt">earthquakes</span> (N) and time window (Tw), the values of 50 and 1.2 years, respectively, were found to reveal an anomalously high Z-value peak (seismic quiescence) prior to the occurrence of six out of the eight major <span class="hlt">earthquake</span> events studied. In addition, the location of the Z-value anomalies conformed fairly well to the epicenters of those <span class="hlt">earthquakes</span>. Based on the investigation of the correlation coefficient and the stochastic test of the Z values, the parameters used here (N = 50 events and Tw = 1.2 years) were suitable to determine the precursory Z value and not random phenomena. The Z values of this study and the frequency-magnitude distribution b values of a previous work both highlighted the same prospective areas that might generate an upcoming major <span class="hlt">earthquake</span>: (i) some areas in the northern part of Laos and (ii) the eastern part of Myanmar.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.T31E2567K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.T31E2567K"><span>Reassessment of 50 years of seismicity in Simav-Gediz grabens (Western Turkey), based on <span class="hlt">earthquake</span> relocations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Karasozen, E.; Nissen, E.; Bergman, E. A.; Walters, R. J.</p> <p>2013-12-01</p> <p>Western Turkey is a rapidly deforming region with a long history of high-magnitude normal faulting <span class="hlt">earthquakes</span>. However, the locations and slip <span class="hlt">rates</span> of the responsible faults are poorly constrained.
Here, we reassess a series of large instrumental <span class="hlt">earthquakes</span> in the Simav-Gediz region, an area exhibiting a strong E-W gradient in N-S extension <span class="hlt">rates</span>, from low <span class="hlt">rates</span> bordering the Anatolian Plateau to much higher <span class="hlt">rates</span> in the west. We start by investigating a recent Mw 5.9 <span class="hlt">earthquake</span> at Simav (19 May 2011) using InSAR, teleseismic body-wave <span class="hlt">modeling</span> and field observations. Next, we exploit the small but clear InSAR signal to relocate a series of older, larger <span class="hlt">earthquakes</span>, using a calibrated <span class="hlt">earthquake</span> relocation method which is based on the hypocentroidal decomposition (HDC) method for multiple event relocation. These improved locations in turn provide an opportunity to reassess the regional style of deformation. One interesting aspect of these <span class="hlt">earthquakes</span> is that the largest (the Mw 7.2 Gediz <span class="hlt">earthquake</span>, March 1970) occurred in an area of slow extension and indistinct surface faulting, whilst the well-defined and more rapidly extending Simav graben has ruptured in several smaller, Mw 6 events. However, our relocations highlight the existence of a significant gap in instrumental <span class="hlt">earthquakes</span> along the central Simav graben, which, if it ruptured in a single event, could equal ~Mw 7.
We were unable to identify fault scarps along this section due to dense vegetation and human modification, and we suggest that acquiring LiDAR data in this area should be a high priority in order to properly investigate <span class="hlt">earthquake</span> hazard in the Simav graben.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017RSPTA.37560003N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017RSPTA.37560003N"><span><span class="hlt">Earthquake</span> sequence simulations with measured properties for JFAST core samples</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro</p> <p>2017-08-01</p> <p>Since the 2011 Tohoku-Oki <span class="hlt">earthquake</span>, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical <span class="hlt">models</span> of <span class="hlt">earthquake</span> sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex <span class="hlt">rate</span> dependences: a and a-b values change with the slip <span class="hlt">rate</span>. In order to express such complexity, we generalize a <span class="hlt">rate</span>- and state-dependent friction law to a quadratic form in terms of the logarithmic slip <span class="hlt">rate</span>. 
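The quadratic generalization reported above can be written, schematically, as mu_ss = mu0 + c1·ln(V/V0) + c2·[ln(V/V0)]², so that the effective rate dependence d(mu_ss)/d ln V varies linearly with the logarithmic slip rate. The sketch below uses assumed coefficients for illustration, not the JFAST-derived values:

```python
import math

# Assumed reference values and quadratic coefficients (illustrative only).
MU0, V0 = 0.6, 1e-6     # reference friction and slip rate (m/s)
C1, C2 = 0.008, 0.0015  # linear and quadratic coefficients in ln(V/V0)

def mu_ss(v):
    """Steady-state friction, quadratic in x = ln(V/V0)."""
    x = math.log(v / V0)
    return MU0 + C1 * x + C2 * x * x

def a_eff(v):
    """Effective direct-effect coefficient d(mu_ss)/d ln V,
    which is linear in ln(V/V0) for the quadratic law."""
    return C1 + 2.0 * C2 * math.log(v / V0)
```

With c2 > 0 the effective a value grows toward coseismic slip rates, one simple way to encode the rate-dependent a and a-b values seen in the JFAST friction experiments.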
The constraints from experiments reduced the degrees of freedom of the <span class="hlt">model</span> significantly, and we managed to find a plausible <span class="hlt">model</span> by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge <span class="hlt">model</span> parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1814895C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1814895C"><span>The use of the Finite Element method for the <span class="hlt">earthquakes</span> <span class="hlt">modelling</span> in different geodynamic environments</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Castaldo, Raffaele; Tizzani, Pietro</p> <p>2016-04-01</p> <p>Many numerical <span class="hlt">models</span> have been developed to simulate the deformation and stress changes associated with the faulting process. This is an important topic in fracture mechanics. In the proposed study, we investigate the impact of the deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different <span class="hlt">earthquake</span> phenomena. We exploit the impact of the structural-geological data in a Finite Element environment through an optimization procedure.
In this framework, we <span class="hlt">model</span> the failure processes in a physical mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 <span class="hlt">earthquake</span> (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 <span class="hlt">earthquakes</span> (Italy) and the Mw 8.3 Gorkha 2015 <span class="hlt">earthquake</span> (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse and thrust faulting processes, respectively. In order to simulate the kinematics of the analyzed natural phenomena, we assume linear elastic behavior of the involved media under the plane stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero). The finite element procedure consists of two stages: (i) compacting under the weight of the rock successions (gravity loading) until the deformation <span class="hlt">model</span> reaches a stable equilibrium; (ii) the co-seismic stage, which simulates, through a distributed slip along the active fault, the released stresses. To constrain the <span class="hlt">model</span> solutions, we exploit the DInSAR deformation velocity maps retrieved from satellite data acquired by old and new generation sensors, such as ENVISAT, RADARSAT-2 and SENTINEL 1A, encompassing the studied <span class="hlt">earthquakes</span>. More specifically, we first generate several 2D forward mechanical <span class="hlt">models</span>; then we compare these with the recorded ground deformation fields in order to select the best boundary settings and parameters.
Finally</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AcGeo..65..589F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AcGeo..65..589F"><span><span class="hlt">Earthquake</span> hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based <span class="hlt">model</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza</p> <p>2017-08-01</p> <p>Producing an accurate seismic hazard map and predicting hazardous areas are necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the <span class="hlt">earthquake</span> potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data even where the number of data points is limited. Here, the <span class="hlt">earthquake</span> pattern in the Zagros has been assessed for intervals of 10 and 50 years using the fuzzy rule-based <span class="hlt">model</span>. The Molchan statistical procedure has been used to show that our forecasting <span class="hlt">model</span> is reliable. The <span class="hlt">earthquake</span> hazard maps for this area reveal some remarkable features that cannot be observed on the conventional maps.
Our results indicate that some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high <span class="hlt">earthquake</span> severity even though they are geographically far apart.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70034703','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70034703"><span>Testing <span class="hlt">earthquake</span> source inversion methodologies</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Page, M.; Mai, P.M.; Schorlemmer, D.</p> <p>2011-01-01</p> <p>Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; <span class="hlt">Earthquake</span> source inversions are nowadays routinely performed after large <span class="hlt">earthquakes</span> and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting <span class="hlt">earthquake</span> source <span class="hlt">models</span> quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an <span class="hlt">earthquake</span> and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source <span class="hlt">models</span> are robust.
Improved understanding of the uncertainty and reliability of <span class="hlt">earthquake</span> source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70192350','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70192350"><span><span class="hlt">Earthquake</span> source properties from instrumented laboratory stick-slip</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kilgore, Brian D.; McGarr, Arthur F.; Beeler, Nicholas M.; Lockner, David A.; Thomas, Marion Y.; Mitchell, Thomas M.; Bhat, Harsha S.</p> <p>2017-01-01</p> <p>Stick-slip experiments were performed to determine the influence of the testing apparatus on source properties, develop methods to relate stick-slip to natural <span class="hlt">earthquakes</span> and examine the hypothesis of McGarr [2012] that the product of stiffness, k, and slip duration, Δt, is scale-independent and the same order as for <span class="hlt">earthquakes</span>. The experiments use the double-direct shear geometry, Sierra White granite at 2 MPa normal stress and a remote slip <span class="hlt">rate</span> of 0.2 µm/sec. To determine apparatus effects, disc springs were added to the loading column to vary k. Duration, slip, slip <span class="hlt">rate</span>, and stress drop decrease with increasing k, consistent with a spring-block slider <span class="hlt">model</span>.
However, neither for the data nor the <span class="hlt">model</span> is kΔt constant; this results from varying stiffness at fixed scale. In contrast, additional analysis of laboratory stick-slip studies from a range of standard testing apparatuses is consistent with McGarr's hypothesis. kΔt is scale-independent, similar to that of <span class="hlt">earthquakes</span>, equivalent to the ratio of static stress drop to average slip velocity, and similar to the ratio of shear modulus to wavespeed of rock. These properties result from conducting experiments over a range of sample sizes, using rock samples with the same elastic properties as the Earth, and scale-independent design practices.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19990103108&hterms=sauber&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dsauber','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19990103108&hterms=sauber&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dsauber"><span>Short-Term Uplift <span class="hlt">Rates</span> and the Mountain Building Process in Southern Alaska</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sauber, Jeanne; Herring, Thomas A.; Meigs, Andrew; Meigs, Andrew</p> <p>1998-01-01</p> <p>We have used GPS at 10 stations in southern Alaska with three epochs of measurements to estimate short-term uplift <span class="hlt">rates</span>. A number of great <span class="hlt">earthquakes</span> as well as recent large <span class="hlt">earthquakes</span> characterize the seismicity of the region this century.
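The order-of-magnitude claim in the stick-slip entry above, that kΔt is comparable both to the ratio of shear modulus to shear wavespeed and to the ratio of static stress drop to average slip velocity, can be checked with typical textbook numbers for granite. The values below are generic illustrations, not measurements from the study.

```python
# kΔt ~ G/beta (units Pa*s/m) ~ static stress drop / average slip velocity.
# All numbers are typical order-of-magnitude values for crustal rock,
# used only to illustrate the dimensional argument.
G = 30e9        # shear modulus, Pa
beta = 3000.0   # shear wavespeed, m/s
k_dt = G / beta             # ~1e7 Pa*s/m

stress_drop = 3e6           # typical earthquake static stress drop, Pa
slip_velocity = 0.3         # typical average slip velocity, m/s
ratio = stress_drop / slip_velocity

print(f"G/beta        = {k_dt:.2e} Pa*s/m")
print(f"drop/velocity = {ratio:.2e} Pa*s/m")
print("same order?   ", 0.1 < k_dt / ratio < 10.0)
```

Both estimates come out near 10^7 Pa·s/m, which is the sense in which kΔt is scale-independent.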
To reliably estimate uplift <span class="hlt">rates</span> from GPS data, numerical <span class="hlt">models</span> that included both the slip distribution in recent large <span class="hlt">earthquakes</span> and the general slab geometry were constructed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NatGe...9..401M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NatGe...9..401M"><span>Rise of the central Andean coast by <span class="hlt">earthquakes</span> straddling the Moho</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Melnick, Daniel</p> <p>2016-05-01</p> <p>Surface movements during the largest subduction zone <span class="hlt">earthquakes</span> commonly drown coastlines. Yet, on geological timescales, coastlines above subduction zones uplift. Here I use a morphometric analysis combined with a numerical <span class="hlt">model</span> of landscape evolution to estimate uplift <span class="hlt">rates</span> along the central Andean rasa--a low-relief coastal surface bounded by a steep cliff formed by wave erosion. I find that the rasa has experienced steady uplift of 0.13 +/- 0.04 mm per year along a stretch of more than 2,000 km in length, during the Quaternary. These long-term uplift <span class="hlt">rates</span> do not correlate with Global Positioning System (GPS) measurements of interseismic movements over the decadal scale, which implies that permanent uplift is not predominantly accumulated during the interseismic period. Instead, the <span class="hlt">rate</span> of rasa uplift correlates with slip during <span class="hlt">earthquakes</span> straddling the crust-mantle transition, the Moho. Such deeper <span class="hlt">earthquakes</span> with magnitude 7 to 8 that occurred between 1995 and 2012 resulted in decimetres of coastal uplift. 
Slip during these <span class="hlt">earthquakes</span> is located below the locked portion of the plate interface, and therefore may translate into permanent deformation of the overlying plate, where it causes uplift of the coastline. Thus, lower parts of the plate boundary are stably segmented over hundreds to millions of years. I suggest the coastline marks the surface expression of the transition between the shallow, locked seismogenic domain and the deeper, conditionally stable domain where modest <span class="hlt">earthquakes</span> build up topography.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.213.1647C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.213.1647C"><span>Towards the application of seismogeodesy in central Italy: a case study for the 2016 August 24 Mw 6.1 Italy <span class="hlt">earthquake</span> <span class="hlt">modelling</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Kejie; Liu, Zhen; Liang, Cunren; Song, Y. Tony</p> <p>2018-06-01</p> <p>Dense strong motion and high-<span class="hlt">rate</span> Global Navigation Satellite Systems (GNSS) networks have been deployed in central Italy for rapid seismic source determination and corresponding hazard mitigation. Different from previous studies for the consistency between two kinds of sensor at collocated stations, here we focus on the combination of high-<span class="hlt">rate</span> GNSS displacement waveforms with collocated seismic strong motion accelerators, and investigate its application to image rupture history. 
Taking the 2016 August 24 Mw 6.1 Central Italy <span class="hlt">earthquake</span> as a case study, we first generate more accurate and longer-period seismogeodetic displacement waveforms by a Kalman filter, then <span class="hlt">model</span> the rupture behaviour through a joint inversion including seismogeodetic waveforms and InSAR observations. Our results reveal that strong motion data alone can overestimate the magnitude and mismatch the GNSS observations, while GNSS at a 1 Hz sampling <span class="hlt">rate</span> is insufficient and the displacement too noisy to depict the rupture process. By contrast, seismogeodetic data enhance temporal resolution and maintain the static offsets that provide a vital constraint on the reliable estimation of <span class="hlt">earthquake</span> magnitude. The obtained <span class="hlt">model</span> is close to the jointly inverted one. Our work demonstrates the unique usefulness of seismogeodesy for fast seismic hazard response.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70027279','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70027279"><span>Analysing the 1811-1812 New Madrid <span class="hlt">earthquakes</span> with recent instrumentally recorded aftershocks</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Mueller, K.; Hough, S.E.; Bilham, R.</p> <p>2004-01-01</p> <p>Although dynamic stress changes associated with the passage of seismic waves are thought to trigger <span class="hlt">earthquakes</span> at great distances, more than 60 per cent of all aftershocks appear to be triggered by static stress changes within two rupture lengths of a mainshock. The observed distribution of aftershocks may thus be used to infer details of mainshock rupture geometry.
Aftershocks following large mid-continental <span class="hlt">earthquakes</span>, where background stressing <span class="hlt">rates</span> are low, are known to persist for centuries, and <span class="hlt">models</span> based on <span class="hlt">rate</span>-and-state friction laws provide theoretical support for this inference. Most past studies of the New Madrid <span class="hlt">earthquake</span> sequence have indeed assumed ongoing microseismicity to be a continuing aftershock sequence. Here we use instrumentally recorded aftershock locations and <span class="hlt">models</span> of elastic stress change to develop a kinematically consistent rupture scenario for three of the four largest <span class="hlt">earthquakes</span> of the 1811-1812 New Madrid sequence. Our results suggest that these three events occurred on two contiguous faults, producing lobes of increased stress near fault intersections and end points, in areas where present-day microearthquakes have been hitherto interpreted as evidence of primary mainshock rupture. We infer that the remaining New Madrid mainshock may have occurred more than 200 km north of this region in the Wabash Valley of southern Indiana and Illinois-an area that contains abundant modern microseismicity, and where substantial liquefaction was documented by historic accounts. 
Our results suggest that future large midplate <span class="hlt">earthquake</span> sequences may extend over a much broader region than previously suspected.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.G43A0918W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.G43A0918W"><span>A catalog of coseismic uniform-slip <span class="hlt">models</span> of geodetically unstudied <span class="hlt">earthquakes</span> along the Sumatran plate boundary</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wong, N. Z.; Feng, L.; Hill, E.</p> <p>2017-12-01</p> <p>The Sumatran plate boundary has experienced five Mw > 8 great <span class="hlt">earthquakes</span>, a handful of Mw 7-8 <span class="hlt">earthquakes</span> and numerous small to moderate events since the 2004 Mw 9.2 Sumatra-Andaman <span class="hlt">earthquake</span>. The geodetic studies of these moderate <span class="hlt">earthquakes</span> have mostly been passed over in favour of larger events. We therefore in this study present a catalog of coseismic uniform-slip <span class="hlt">models</span> of one Mw 7.2 <span class="hlt">earthquake</span> and 17 Mw 5.9-6.9 events that have mostly gone geodetically unstudied. These events occurred close to various continuous stations within the Sumatran GPS Array (SuGAr), allowing the network to record their surface deformation. However, due to their relatively small magnitudes, most of these moderate <span class="hlt">earthquakes</span> were recorded by only 1-4 GPS stations. With the limited observations per event, we first constrain most of the <span class="hlt">model</span> parameters (e.g. location, slip, patch size, strike, dip, rake) using various external sources (e.g., the ANSS catalog, gCMT, Slab1.0, and empirical relationships). 
We then use grid-search forward <span class="hlt">models</span> to explore a range of some of these parameters (geographic position for all events and additionally depth for some events). Our results indicate the gCMT centroid locations in the Sumatran subduction zone might be biased towards the west for smaller events, while ANSS epicentres might be biased towards the east. The more accurate locations of these events are potentially useful in understanding the nature of various structures along the megathrust, particularly the persistent rupture barriers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/of/1999/0311/report.pdf','USGSPUBS'); return false;" href="https://pubs.usgs.gov/of/1999/0311/report.pdf"><span>Subduction zone and crustal dynamics of western Washington; a tectonic <span class="hlt">model</span> for <span class="hlt">earthquake</span> hazards evaluation</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Stanley, Dal; Villaseñor, Antonio; Benz, Harley</p> <p>1999-01-01</p> <p>The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust <span class="hlt">earthquake</span>, possibly as large as M9.0. Large intraplate <span class="hlt">earthquakes</span> from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large <span class="hlt">earthquakes</span> are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. 
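The grid-search strategy in the Sumatran catalog entry above, fixing most source parameters from external catalogs and scanning candidate positions against a handful of GPS offsets, can be sketched as follows. The forward model here is a toy radial 1/r² point source rather than an elastic dislocation model, and the stations and "observed" offsets are invented for illustration.

```python
import numpy as np

# Toy grid search for a source position that best fits sparse GPS offsets.
# Stations (km) and observed displacement magnitudes (m) are hypothetical.
stations = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 60.0]])
obs = np.array([0.080, 0.012, 0.015])

def forward(src, amp=100.0):
    """Toy prediction: displacement magnitude decaying as 1/r^2 (r in km)."""
    r = np.linalg.norm(stations - src, axis=1)
    return amp / np.maximum(r, 1.0)**2   # clamp r to avoid the singularity

best, best_rms = None, np.inf
for x in np.arange(-20.0, 61.0, 1.0):          # candidate source positions
    for y in np.arange(-20.0, 61.0, 1.0):
        pred = forward(np.array([x, y]))
        rms = np.sqrt(np.mean((pred - obs)**2))
        if rms < best_rms:
            best, best_rms = (x, y), rms

print("best-fit source position (km):", best, " rms misfit (m):", round(best_rms, 4))
```

A real implementation would replace `forward` with an elastic half-space dislocation code and would also scan depth, as the abstract describes for some events.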
In order to understand the nature of these individual <span class="hlt">earthquake</span> sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity <span class="hlt">models</span> developed using local <span class="hlt">earthquake</span> tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation <span class="hlt">models</span> based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs <span class="hlt">model</span> anelastic strain <span class="hlt">rate</span>, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the <span class="hlt">modeling</span> process. Significant results from the velocity <span class="hlt">models</span> include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity <span class="hlt">models</span>. 
This</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70033472','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70033472"><span>Postearthquake relaxation after the 2004 M6 Parkfield, California, <span class="hlt">earthquake</span> and <span class="hlt">rate</span>-and-state friction</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Savage, J.C.; Langbein, J.</p> <p>2008-01-01</p> <p>An unusually complete set of measurements (including rapid <span class="hlt">rate</span> GPS over the first 10 days) of postseismic deformation is available at 12 continuous GPS stations located close to the epicenter of the 2004 M6.0 Parkfield <span class="hlt">earthquake</span>. The principal component modes for the relaxation of the ensemble of those 12 GPS stations were determined. The first mode alone furnishes an adequate approximation to the data. Thus, the relaxation at all stations can be represented by the product of a common temporal function and distinct amplitudes for each component (north or east) of relaxation at each station. The distribution in space of the amplitudes indicates that the relaxation is dominantly strike slip. The temporal function, which spans times from about 5 min to 900 days postearthquake, can be fit by a superposition of three creep terms, each of the form a_l log_e(1 + t/τ_l), with characteristic times τ_l = 4.06, 0.11, and 0.0001 days. It seems likely that what is actually involved is a broad spectrum of characteristic times, the individual components of which arise from afterslip on different fault patches. Perfettini and Avouac (2004) have shown that an individual creep term can be explained by the spring-slider <span class="hlt">model</span> with <span class="hlt">rate</span>-dependent (no state variable) friction.
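The temporal relaxation function in the Parkfield entry above is easy to evaluate directly. The characteristic times below come from the abstract; the amplitudes a_l are hypothetical placeholders, since the abstract does not quote them.

```python
import numpy as np

# Postseismic relaxation as a superposition of creep terms a_l*ln(1 + t/tau_l).
# tau values are from the abstract; amplitudes are hypothetical.
tau = np.array([4.06, 0.11, 0.0001])   # characteristic times, days
a = np.array([1.0, 0.5, 0.25])         # hypothetical amplitudes

def relaxation(t_days):
    """Summed creep-term relaxation at times t_days (scalar or array)."""
    t = np.atleast_1d(np.asarray(t_days, dtype=float))
    return (a * np.log1p(t[:, None] / tau)).sum(axis=1)

t = np.array([0.0035, 1.0, 10.0, 900.0])   # ~5 min to 900 days, as in the data
print(np.round(relaxation(t), 2))
```

Because each term grows like ln(t/τ) once t greatly exceeds its τ, the three widely separated characteristic times let a smooth sum mimic relaxation over five decades of time.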
The observed temporal function can also be explained using a single spring-slider <span class="hlt">model</span> (i.e., single fault patch) that includes <span class="hlt">rate</span>-and-state-dependent friction, a single state variable, and either of the two commonly used (aging and slip) state evolution laws. In the latter fits, the <span class="hlt">rate</span>-and-state friction parameter b is negative.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.S21D..05N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.S21D..05N"><span>Spatial Distribution of the Coefficient of Variation for the Paleo-<span class="hlt">Earthquakes</span> in Japan</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nomura, S.; Ogata, Y.</p> <p>2015-12-01</p> <p>Renewal processes, point processes in which intervals between consecutive events are independently and identically distributed, are frequently used to describe such repeating <span class="hlt">earthquake</span> mechanisms and to forecast the next <span class="hlt">earthquakes</span>. However, one of the difficulties in applying recurrent <span class="hlt">earthquake</span> <span class="hlt">models</span> is the scarcity of historical data. Most studied fault segments have few, or only one, observed <span class="hlt">earthquakes</span>, often with poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield a high probability for the next event in a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend, on average, on the long-term slip <span class="hlt">rate</span> caused by tectonic motion.
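The renewal-process forecast described in the entry above can be sketched with a minimal example: given a recurrence-time distribution, compute the conditional probability of an event in the next interval given the time already elapsed. A lognormal recurrence distribution and the mean/coefficient-of-variation values below are assumptions for illustration, not parameters from the study.

```python
import math

# Conditional renewal-process forecast under an assumed lognormal
# recurrence-time distribution. mean and cv values are hypothetical.
def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal(mu, sigma) distribution at time t."""
    if t <= 0.0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_prob(te, dt, mean=150.0, cv=0.5):
    """P(event in (te, te+dt] | no event by te) for a lognormal renewal model."""
    sigma2 = math.log(1.0 + cv**2)          # lognormal params from mean and cv
    sigma = math.sqrt(sigma2)
    mu = math.log(mean) - 0.5 * sigma2
    F = lambda t: lognorm_cdf(t, mu, sigma)
    return (F(te + dt) - F(te)) / (1.0 - F(te))

print(round(conditional_prob(100.0, 30.0), 3))   # hazard after 100 quiet years
print(round(conditional_prob(200.0, 30.0), 3))   # hazard after 200 quiet years
```

The small-sample problem the abstract raises enters through the estimates of `mean` and `cv`: with one or two observed intervals, their maximum-likelihood values are poorly constrained, which is what motivates the Bayesian spatial pooling described next.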
In addition, recurrence times also fluctuate due to nearby <span class="hlt">earthquakes</span> or fault activities which encourage or discourage surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, this paper introduces a spatial structure on the key parameters of renewal processes for recurrent <span class="hlt">earthquakes</span> and estimates it by using spatial statistics. Spatial variations of the mean and variance parameters of recurrence times are estimated in a Bayesian framework and the next <span class="hlt">earthquakes</span> are forecasted by Bayesian predictive distributions. The proposed <span class="hlt">model</span> is applied to the recurrent <span class="hlt">earthquake</span> catalog of Japan and its results are compared with the current forecast adopted by the <span class="hlt">Earthquake</span> Research Committee of Japan.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JGRB..120.2491D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JGRB..120.2491D"><span>Discrimination between induced, triggered, and natural <span class="hlt">earthquakes</span> close to hydrocarbon reservoirs: A probabilistic approach based on the <span class="hlt">modeling</span> of depletion-induced stress changes and seismological source parameters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank</p> <p>2015-04-01</p> <p><span class="hlt">Earthquakes</span> occurring close to hydrocarbon fields under production are often under critical view of being induced or triggered. However, clear and testable rules to discriminate the different events have rarely been developed and tested.
The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural <span class="hlt">earthquakes</span>, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of <span class="hlt">earthquake</span> location and other input parameters are considered in terms of the integration over probability density functions. The probability that events have been human-triggered or induced is derived from the <span class="hlt">modeling</span> of Coulomb stress changes and a <span class="hlt">rate</span>- and state-dependent seismicity <span class="hlt">model</span>. In our case a 3-D boundary element method has been adapted for the nuclei-of-strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted <span class="hlt">rate</span> of natural <span class="hlt">earthquakes</span> is either derived from the background seismicity or, in the case of rare events, from an estimate of the tectonic stress <span class="hlt">rate</span>. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced and merely triggered events is theoretically possible if probability functions are convolved with a rupture fault filter.
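The rate- and state-dependent seismicity model invoked in the entry above can be illustrated with Dieterich's (1994) closed-form response of a background seismicity rate to a Coulomb stress step. The formula is standard; the parameter values below (background rate, Aσ, aftershock decay time) are hypothetical.

```python
import numpy as np

# Dieterich (1994) seismicity-rate response to a Coulomb stress step dCFS:
#   R(t) = r / (1 + (exp(-dCFS/(A*sigma)) - 1) * exp(-t/ta))
# r, Asig and ta below are hypothetical illustration values.
r = 10.0       # background rate, events/yr
Asig = 0.05    # A*sigma, MPa
ta = 5.0       # aftershock relaxation time, yr

def rate(t, dCFS):
    """Seismicity rate t years after a Coulomb stress step dCFS (MPa)."""
    return r / (1.0 + (np.exp(-dCFS / Asig) - 1.0) * np.exp(-t / ta))

t = np.array([0.01, 0.1, 1.0, 10.0])
print(np.round(rate(t, 0.1), 2))    # positive step: transient rate increase
print(np.round(rate(t, -0.1), 2))   # negative step: stress-shadow rate decrease
```

A positive stress step produces an elevated rate that relaxes back to the background over a time of order `ta`; a negative step produces the corresponding stress shadow. Comparing such predicted rates with the tectonic background rate is the core of the discrimination scheme the abstract describes.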
We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, <span class="hlt">earthquake</span> close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, <span class="hlt">earthquake</span> in the vicinity of the Söhlingen gas field; and (3) the Mw 6</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1915410B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1915410B"><span>Sensing the <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bichisao, Marta; Stallone, Angela</p> <p>2017-04-01</p> <p>Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the <span class="hlt">earthquake</span> process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the <span class="hlt">earthquakes</span>. 
We try to implement these results into a choreographic <span class="hlt">model</span> with the aim to convert <span class="hlt">earthquake</span> sound to a visual dance system, which could return a transmedia representation of the <span class="hlt">earthquake</span> process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the <span class="hlt">earthquake</span> phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic <span class="hlt">model</span>. This artistic representation could provide an original entryway into the physics of <span class="hlt">earthquakes</span>.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AGUFM.G31A..06H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AGUFM.G31A..06H"><span>Characterization of a Strain <span class="hlt">Rate</span> Transient Along the San Andreas and San Jacinto Faults Following the October 1999 Hector Mine <span class="hlt">Earthquake</span>.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hernandez, D.; Holt, W. E.; Bennett, R. A.; Dimitrova, L.; Haines, A. J.</p> <p>2006-12-01</p> <p>We are continuing work on developing and refining a tool for recognizing strain <span class="hlt">rate</span> transients as well as for quantifying the magnitude and style of their temporal and spatial variations. We determined time-averaged velocity values in 0.05 year epochs using time-varying velocity estimates for continuous GPS station data from the Southern California Integrated GPS Network (SCIGN) for the time period between October 1999 and February 2004 [Li et al., 2005]. A self-consistent <span class="hlt">model</span> velocity gradient tensor field solution is determined for each epoch by fitting bi-cubic Bessel interpolation to the GPS velocity vectors and we determine <span class="hlt">model</span> dilatation strain <span class="hlt">rates</span>, shear strain <span class="hlt">rates</span>, and the rotation <span class="hlt">rates</span>.
    Departures of the time-dependent model strain rate and velocity fields from a master solution, obtained from a time-averaged solution for the period 1999-2004 with imposed plate motion constraints and Quaternary fault data, are evaluated in order to best characterize the time-dependent strain rate field. A particular problem in determining the transient strain rate fields is the level of smoothing or damping that is applied. Our current approach is to choose a damping that both maximizes the departure of the transient strain rate field from the long-term master solution and achieves a reduced chi-squared value between model and observed GPS velocities of around 1.0 for all time epochs. We observe several noteworthy time-dependent changes. First, in the Eastern California Shear Zone (ECSZ), immediately following the October 1999 Hector Mine earthquake, there is a significant spatial increase in relatively high shear strain rate that encompasses a significant portion of the ECSZ. Second, also following the Hector Mine event, a strain rate corridor extends through the Pinto Mt. fault, connecting the ECSZ to the San Andreas fault segment in the Salton Trough region.
    As this signal slowly decays, shear strain rates on segments of the San

  442. Nucleation and triggering of earthquake slip: effect of periodic stresses

    USGS Publications Warehouse

    Dieterich, J.H.

    1987-01-01

    Results of stability analyses for spring-and-slider systems with state-variable constitutive properties are applied to slip on embedded fault patches. Unstable slip may nucleate only if the slipping patch exceeds some minimum size. Subsequent to the onset of instability, the earthquake slip may propagate well beyond the patch. It is proposed that the seismicity of a volume of the earth's crust is determined by the distribution of initial conditions on the population of fault patches that nucleate earthquake slip, and by the loading history acting upon the volume. Patches with constitutive properties inferred from laboratory experiments are characterized by an interval of self-driven accelerating slip prior to instability, if the initial stress exceeds a minimum threshold. This delayed instability of the patches provides an explanation for the occurrence of aftershocks and foreshocks, including the decay of earthquake rates as t^-1. A population of patches subjected to loading with a periodic component shows periodic variation of the rate of occurrence of instabilities.
    The change of the rate of seismicity for a sinusoidal load is proportional to the amplitude of the periodic stress component and inversely proportional to both the normal stress acting on the fault patches and the constitutive parameter A1 that controls the direct velocity dependence of fault slip. Values of A1 representative of laboratory experiments indicate that in a homogeneous crust, correlation of earthquake rates with earth tides should not be detectable at normal stresses in excess of about 8 MPa. Correlation of earthquakes with tides at higher normal stresses can be explained if there exist inhomogeneities that locally amplify the magnitude of the tidal stresses. Such amplification might occur near magma chambers or other soft inclusions in the crust, and possibly near the ends of creeping fault segments if the creep or afterslip rates vary in response to tides. Observations of seismicity rate

  443. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic M 7.0 earthquake devastated the country of Haiti.
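The tidal-triggering scaling in the Dieterich abstract above can be illustrated with the standard rate-state seismicity response R/r = exp(Δτ/(Aσ)): for small perturbations the fractional rate change is approximately Δτ/(Aσ), proportional to the periodic stress amplitude and inversely proportional to normal stress and the direct-effect parameter. A minimal sketch; the ~1 kPa tidal amplitude and A = 0.01 are illustrative assumptions, not values from the abstract:

```python
import math

def rate_modulation(tau_amp, A, sigma):
    """Peak fractional change in seismicity rate under a sinusoidal
    stress of amplitude tau_amp (Pa), using the rate-state response
    R / r = exp(tau / (A * sigma)), where A is the direct-effect
    constitutive parameter and sigma the fault normal stress (Pa)."""
    return math.exp(tau_amp / (A * sigma)) - 1.0

# Illustrative numbers: a ~1 kPa tidal stress with A = 0.01.
mod_8MPa = rate_modulation(1.0e3, 0.01, 8.0e6)      # ~1.3% rate change
mod_100MPa = rate_modulation(1.0e3, 0.01, 100.0e6)  # ~0.1% rate change
```

At 8 MPa normal stress the modulation is on the order of a percent, near the abstract's detectability threshold; an order of magnitude more normal stress suppresses it by roughly the same factor.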
    In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  444. 2018 one-year seismic hazard forecast for the central and eastern United States from induced and natural earthquakes

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles; Moschetti, Morgan P.; Hoover, Susan M.; Rukstales, Kenneth S.; McNamara, Daniel E.; Williams, Robert A.; Shumway, Allison; Powers, Peter; Earle, Paul; Llenos, Andrea L.; Michael, Andrew J.; Rubinstein, Justin L.; Norbeck, Jack; Cochran, Elizabeth S.

    2018-01-01

    This article describes the U.S.
    Geological Survey (USGS) 2018 one-year probabilistic seismic hazard forecast for the central and eastern United States from induced and natural earthquakes. For consistency, the updated 2018 forecast is developed using the same probabilistic, seismicity-based methodology as applied in the two previous forecasts. Rates of earthquakes of M ≥ 3.0 across the United States grew rapidly between 2008 and 2015 but have steadily declined over the past 3 years, especially in areas of Oklahoma and southern Kansas where fluid injection has decreased. The seismicity pattern in 2017 was complex, with earthquakes more spatially dispersed than in previous years. Some areas of west-central Oklahoma experienced increased activity rates where industrial activity increased. Earthquake rates in Oklahoma (429 earthquakes of M ≥ 3 and 4 of M ≥ 4), the Raton basin (Colorado/New Mexico border, six earthquakes of M ≥ 3), and the New Madrid seismic zone (11 earthquakes of M ≥ 3) continue to be higher than historical levels. Almost all of these earthquakes occurred within the highest-hazard regions of the 2017 forecast. Even though rates declined over the past 3 years, the short-term hazard for damaging ground shaking across much of Oklahoma remains high due to continuing high rates of smaller earthquakes that are still hundreds of times higher than at any time in the state's history. Fine details and variability among the 2016-2018 forecasts are obscured by significant uncertainties in the input model. These short-term hazard levels are similar to those of active regions in California.
    During 2017, M ≥ 3 earthquakes also occurred in or near Ohio, West Virginia, Missouri, Kentucky, Tennessee, Arkansas, Illinois, Oklahoma, Kansas, Colorado, New Mexico, Utah, and Wyoming.

  445. Issues on the Japanese Earthquake Hazard Evaluation

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Fukushima, Y.; Sagiya, T.

    2013-12-01

    The 2011 Great East Japan Earthquake forced Japan to change its policy of countermeasures against earthquake disasters, including earthquake hazard evaluations. Before March 11, Japanese earthquake hazard evaluation was based on the history of repeating earthquakes and the characteristic earthquake model: the source region of an earthquake was identified, its occurrence history was revealed, and the conditional probability was then estimated using a renewal model. After the 2011 megathrust earthquake, however, the Japanese authorities changed the policy so that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. Following this policy, three important reports were issued over the subsequent two years. First, during 2011 and 2012 the Central Disaster Management Council issued a new estimate of the damage from a hypothetical Mw 9 earthquake along the Nankai trough.
    As the maximum scenario, the model predicts a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most areas of Southwest Japan. Next, in May 2013 the Earthquake Research Council revised the long-term hazard evaluation of earthquakes along the Nankai trough; the revision discarded the characteristic earthquake model and put much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, with current knowledge and techniques, it is hard to predict the occurrence of large earthquakes along the Nankai trough, given the diversity of earthquake phenomena. These reports created sensations throughout the country, and local governments are struggling to prepare countermeasures. The reports comment on the large uncertainty in their evaluations near their ends, but are these messages transmitted properly to the public? Earthquake scientists, including the authors, are involved in

  446. Fluid-driven normal faulting earthquake sequences in the Taiwan orogen

    NASA Astrophysics Data System (ADS)

    Wang, Ling-hua; Rau, Ruey-Juin; Lee, En-Jui

    2017-04-01

    Seismicity in the Central Range of Taiwan shows normal faulting mechanisms with T-axes directed NE, subparallel to the strike of the mountain belt.
    We analyze earthquake sequences that occurred within 2012-2015 in the Nanshan area of northern Taiwan and that exhibit swarm behavior and migration. We select events larger than M 2.0 from the Central Weather Bureau catalog and apply the double-difference relocation program hypoDD with waveform cross-correlation, obtaining a final count of 1406 (95%) relocated earthquakes. Moreover, we compute focal mechanisms with the USGS program HASH using P-wave first motions and S/P ratio picks, determining 114 fault plane solutions with M 3.0-5.87. To test for fluid diffusion, we model seismicity using the equation of Shapiro et al. (1997) by fitting the earthquake diffusion rate D during the migration period. According to the relocation results, seismicity in the Taiwan orogenic belt is mostly oriented N25E, parallel to the mountain belt and in the same direction as the tension axis. In addition, another seismic fracture depicted by the seismicity is rotated 35 degrees counterclockwise, toward the NW. Nearly all focal mechanisms are of normal fault type. In the Nanshan area, events show a N10W distribution with focal depths of 5-12 km and delineate a fault plane dipping about 45-60 degrees to the SW. Three months before the M 5.87 mainshock, which occurred in March 2013, foreshocks occurred in the shallow part of the mainshock fault plane. In the half year following the mainshock, earthquakes migrated to the north and south, respectively, in a manner matching the diffusion model at a rate of 0.2-0.6 m2/s. This migration pattern and diffusion rate offer evidence of a 'fluid-driven' process in the fault zone.
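The diffusion fit described in this abstract uses the parabolic triggering front of Shapiro et al. (1997), r(t) = sqrt(4πDt). A minimal sketch of recovering the diffusivity D from migration distances; the data below are synthetic, generated purely for illustration:

```python
import math

def fit_diffusivity(times_s, dists_m):
    """Zero-intercept least-squares fit of the Shapiro et al. (1997)
    triggering front r(t) = sqrt(4*pi*D*t): regress r^2 on 4*pi*t
    to recover the hydraulic diffusivity D in m^2/s."""
    num = sum(t * r * r for t, r in zip(times_s, dists_m))
    den = sum((4.0 * math.pi * t) * t for t in times_s)
    return num / den

# Synthetic migration data generated with D = 0.4 m^2/s, i.e. within
# the 0.2-0.6 m^2/s range reported for the Nanshan sequence.
D_true = 0.4
times = [days * 86400.0 for days in (10, 30, 60, 120, 180)]
dists = [math.sqrt(4.0 * math.pi * D_true * t) for t in times]
D_fit = fit_diffusivity(times, dists)  # recovers 0.4
```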
    We also find upward migration of earthquakes in the mainshock source region. These phenomena are likely caused by the opening of a permeable conduit due to the M 5

  447. The 1906 earthquake and a century of progress in understanding earthquakes and their hazards

    USGS Publications Warehouse

    Zoback, M.L.

    2006-01-01

    The 18 April 1906 San Francisco earthquake killed nearly 3000 people and left 225,000 residents homeless. Three days after the earthquake, an eight-person Earthquake Investigation Commission composed of 25 geologists, seismologists, geodesists, biologists and engineers, as well as some 300 others, started work under the supervision of Andrew Lawson to collect and document physical phenomena related to the quake. On 31 May 1906, the commission published a preliminary 17-page report titled "The Report of the State Earthquake Investigation Commission". The report included the bulk of the geological and morphological descriptions of the faulting and detailed reports on shaking intensity, as well as an impressive atlas of 40 oversized maps and folios. Nearly 100 years after its publication, the Commission Report remains a model for post-earthquake investigations. Because the diverse data sets were so complete and carefully documented, researchers continue to apply modern analysis techniques to learn from the 1906 earthquake.
    While the earthquake marked a seminal event in the history of California, it served as impetus for the birth of modern earthquake science in the United States.

  448. Magnitude and location of historical earthquakes in Japan and implications for the 1855 Ansei Edo earthquake

    USGS Publications Warehouse

    Bakun, W.H.

    2005-01-01

    Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42 MJMA - 0.00887 Δh - 1.66 log Δh, where MJMA is the JMA magnitude, Δh = (Δ² + h²)^(1/2), and Δ and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting-plate intensity attenuation model in which intensity is equal to -8.33 + 2.19 MJMA - 0.00550 Δh - 1.14 log Δh. The IJMA assignments for the MJMA 7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting-plate model. Using the subducting-plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1σ confidence level.
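The two attenuation relations in the Bakun abstract translate directly into code. A sketch; the example magnitude, distance, and depth are illustrative values, not cases from the study:

```python
import math

def ijma_crustal(mjma, delta_km, depth_km):
    """Bakun (2005) JMA intensity attenuation for shallow crustal
    earthquakes on Honshu; delta_km is epicentral distance (km)."""
    dh = math.hypot(delta_km, depth_km)  # slant distance (km)
    return -1.89 + 1.42 * mjma - 0.00887 * dh - 1.66 * math.log10(dh)

def ijma_subducting(mjma, delta_km, depth_km):
    """Bakun (2005) attenuation for subducting-plate events."""
    dh = math.hypot(delta_km, depth_km)
    return -8.33 + 2.19 * mjma - 0.00550 * dh - 1.14 * math.log10(dh)

# Illustrative case: an MJMA 7.9 interface event observed 50 km and
# 150 km from the epicenter, at 30 km focal depth.
i_near = ijma_subducting(7.9, 50.0, 30.0)   # roughly JMA intensity 6-7
i_far = ijma_subducting(7.9, 150.0, 30.0)   # weaker farther away
```

Grid-searching these predictions against a set of IJMA assignments is what locates the intensity center and bounds Mjma, as done for the 1923 Kanto event.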
    Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or a Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.

  449. Insights from interviews regarding the high fatality rate caused by the 2011 Tohoku-Oki earthquake

    NASA Astrophysics Data System (ADS)

    Ando, M.; Ishida, M.

    2012-12-01

    The 11 March 2011 Tohoku-Oki earthquake (Mw 9.0) caused approximately 19,000 casualties, including missing persons, along the entire coast of the Tohoku region. Three historical tsunamis struck in the 115 years preceding this one, and since those tsunamis numerous countermeasures against future events, such as breakwaters, early tsunami warning systems and tsunami evacuation drills, were implemented. Despite this preparedness, many people died or went missing. Although the death rate is approximately 4% of the population in severely inundated areas, 96% safely evacuated or managed to survive the tsunami. To understand why some people evacuated immediately while others delayed, survivors were interviewed in the northern part of the Tohoku region.
    Our interviews revealed that many residents received no appropriate warnings and that many chose to remain in dangerous locations, partly because they had a mistaken idea of the risks. The interviews also indicated that the high casualty count was due to technology malfunctions, underestimated earthquake size and tsunami heights, and failures of warning systems. Furthermore, the existing breakwaters gave local communities a false sense of security. The advanced technology did not work properly, especially at the time of the severe disaster. If residents had taken immediate action after the major shaking stopped, most might have survived, considering that safer highlands lie within a 5- to 20-minute walk of the interviewed areas. However, the elderly and physically disabled would still have found it much more difficult to walk such distances to safety. Nevertheless, even if these problems recur in future earthquakes, better knowledge of earthquake and tsunami hazards could save more lives. People must take immediate action without waiting for official warnings or help. To avoid similar high tsunami death ratios in the future

  450. A 667 year record of coseismic and interseismic Coulomb stress changes in central Italy reveals the role of fault interaction in controlling irregular earthquake recurrence intervals

    NASA Astrophysics Data System (ADS)

    Wedmore, L. N. J.; Faure Walker, J. P.; Roberts, G. P.; Sammonds, P. R.; McCaffrey, K. J. W.; Cowie, P. A.

    2017-07-01

    Current studies of fault interaction lack sufficiently long earthquake records and measurements of fault slip rates over multiple seismic cycles to fully investigate the effects of interseismic loading and coseismic stress changes on the surrounding fault network. We model elastic interactions between 97 faults involved in 30 earthquakes since 1349 A.D. in central Italy to investigate the relative importance of coseismic stress changes versus interseismic stress accumulation for earthquake occurrence and fault interaction. This region has an exceptionally long, 667-year record of historical earthquakes and detailed constraints on the locations and slip rates of its active normal faults. Of 21 earthquakes since 1654, 20 occurred on faults where the combined coseismic and interseismic loading stresses were positive, even though 20% of all faults are in "stress shadows" at any one time. Furthermore, the Coulomb stress on the faults that experience earthquakes is statistically different from that of a random sequence of earthquakes in the region. We show how coseismic Coulomb stress changes can alter earthquake interevent times by up to 10^3 years, and that fault length controls the intensity of this effect. Static Coulomb stress changes cause greater interevent perturbations on shorter faults in areas characterized by lower strain (or slip) rates.
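The "stress shadow" bookkeeping in the central Italy study rests on the static Coulomb failure criterion. A minimal sketch; the stress values and the effective friction coefficient are illustrative assumptions, not numbers from the paper:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change (Pa) on a receiver fault:
    dCFS = d_tau + mu_eff * d_sigma_n, with shear stress change d_tau
    positive in the slip direction and d_sigma_n positive for
    unclamping. mu_eff = 0.4 is a commonly assumed effective friction."""
    return d_shear + mu_eff * d_normal

# A fault in a "stress shadow": shear stress reduced and fault clamped.
shadowed = coulomb_stress_change(-0.10e6, -0.05e6)  # -0.12 MPa
# A positively loaded receiver fault, promoted toward failure.
loaded = coulomb_stress_change(0.08e6, 0.02e6)      # +0.088 MPa
```

Summing such increments from each historical rupture, plus steady interseismic loading, gives the per-fault stress history the abstract compares against earthquake occurrence.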
    The exceptional duration and number of earthquakes we model enable us to demonstrate the importance of combining long earthquake records with detailed knowledge of fault geometries, slip rates, and kinematics to understand the impact of stress changes in complex networks of active faults.

  451. A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.

    2014-12-01

    A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution, with confidence bounds estimated using approximations given by Zaliapin et al. In this way the available seismic moment is expressed in terms of reservoir volume change, and hence compaction in the case of a depleting reservoir.
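The Monte Carlo catalog-simulation step of this model can be illustrated with plain truncated Gutenberg-Richter sampling of magnitudes, summed as moment. This is a simplified stand-in for the paper's Pareto Sum machinery, and the b-value, magnitude bounds, and catalog sizes are illustrative assumptions:

```python
import math
import random

def catalog_moment(n_events, b=1.0, m_min=1.5, m_max=5.0, rng=None):
    """Summed seismic moment (N*m) of one synthetic catalog whose
    magnitudes are drawn (inverse-CDF) from a truncated
    Gutenberg-Richter distribution, using log10 M0 = 1.5*Mw + 9.1."""
    rng = rng or random.Random()
    beta = b * math.log(10.0)
    trunc = 1.0 - math.exp(-beta * (m_max - m_min))  # truncation constant
    total = 0.0
    for _ in range(n_events):
        m = m_min - math.log(1.0 - trunc * rng.random()) / beta
        total += 10.0 ** (1.5 * m + 9.1)
    return total

# Empirical distribution of total moment over 200 simulated catalogs.
rng = random.Random(42)
sums = sorted(catalog_moment(500, rng=rng) for _ in range(200))
```

Repeating this for many catalogs yields the spread of total released moment, from which exceedance probabilities for the moment budget can be read off.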
    The Pareto Sum Distribution for moment and the Pareto distribution underpinning the Gutenberg-Richter law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field, calibrated using various geodetic data, allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.

  452. Earthquake Dynamics in Laboratory Model and Simulation - Accelerated Creep as Precursor of Instabilities

    NASA Astrophysics Data System (ADS)

    Grzemba, B.; Popov, V. L.; Starcevic, J.; Popov, M.

    2012-04-01

    Shallow earthquakes can be considered a result of tribological instabilities, so-called stick-slip behaviour [1,2], meaning that sudden slip occurs at already existing rupture zones. From a contact mechanics point of view it is clear that no motion can arise completely suddenly: the material will always creep in an existing contact in the load direction before breaking loose. If there is measurable creep before the instability, it could serve as a precursor.
    To examine this theory in detail, we built an elementary laboratory model with pronounced stick-slip behaviour. Different material pairings, such as steel-steel, steel-glass and marble-granite, were analysed at different driving force rates. The displacement was measured with a resolution of 8 nm. We were able to show that measurable accelerated creep precedes the instability. Near the instability, this creep is sufficiently regular to serve as the basis for a highly accurate prediction of the onset of macroscopic slip [3]. In our model a prediction is possible within the last few percent of the preceding stick time, and we are hopeful of extending this period. Furthermore, we showed that the slow creep as well as the fast slip can be described very well by the Dieterich-Ruina friction law if we include the contribution of local contact rigidity; the simulation matches the experimental curves over five orders of magnitude. This friction law was originally formulated for rocks [4,5] and takes into account the dependence of the coefficient of friction on the sliding velocity and on the contact history. The simulations using the Dieterich-Ruina friction law back up the observation of universal behaviour of the creep's acceleration. We are working on several extensions of our model to more dimensions in order to move closer to representing a full three-dimensional continuum. The first step will be an extension to two degrees of freedom, to analyse the interdependencies of the instabilities.
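The Dieterich-Ruina (rate-and-state) law referred to here can be sketched with a minimal aging-law slider driven through a velocity step. The parameter values below are generic laboratory-scale assumptions, not fits to the experiments described:

```python
import math

def simulate_velocity_step(a=0.010, b=0.015, mu0=0.6, v0=1e-6,
                           dc=1e-5, v_new=1e-5, steps=4000, dt=1e-3):
    """Friction response of a Dieterich-Ruina (aging-law) slider to a
    step in sliding velocity from v0 to v_new (m/s):
        mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)
        dtheta/dt = 1 - v*theta/dc
    Euler-integrated; a < b gives rate-weakening (stick-slip-prone)
    behaviour. Returns (mu_peak, mu_final)."""
    theta = dc / v0  # state variable, at steady state for v0
    mu_peak = -1.0
    mu = mu0
    for _ in range(steps):
        theta += dt * (1.0 - v_new * theta / dc)
        mu = mu0 + a * math.log(v_new / v0) + b * math.log(v0 * theta / dc)
        mu_peak = max(mu_peak, mu)
    return mu_peak, mu

mu_peak, mu_final = simulate_velocity_step()
# The direct effect first raises friction by ~a*ln(10); state evolution
# then weakens it below mu0 by ~(b - a)*ln(10) for this parameter pair.
```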
    We also plan

  453. Earthquakes of the Nepal Himalaya: Towards a physical model of the seismic cycle

    NASA Astrophysics Data System (ADS)

    Ader, Thomas J.

    fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at certain periods whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but still await observational counterparts to be applied, as nothing indicates that the variations of seismicity <span class="hlt">rate</span> on the locked part of the MHT are the direct expressions of variations of the slip <span class="hlt">rate</span> on its creeping part, and no variations of the slip <span class="hlt">rate</span> have been singled out from the GPS measurements to this day. When shifting to the locked seismogenic part of the MHT, spring-and-slider <span class="hlt">models</span> with <span class="hlt">rate</span>-weakening rheology are insufficient to explain the contrasted responses of the seismicity to the periodic loads that tides and monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of <span class="hlt">Earthquakes</span> algorithm and examine the response of a 2D finite fault embedded with a <span class="hlt">rate</span>-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period get larger, up to a critical period</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.S43B2788G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.S43B2788G"><span>Hybrid broadband Ground Motion simulation based on a dynamic rupture <span class="hlt">model</span> of the 2011 Mw 9.0 Tohoku <span class="hlt">earthquake</span>.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Galvez, P.; Somerville, P.; Bayless, J.; Dalguer, L. 
A.</p> <p>2015-12-01</p> <p>The rupture process of the 2011 Tohoku <span class="hlt">earthquake</span> exhibits depth-dependent variations in the frequency content of seismic radiation from the plate interface. This depth-varying rupture property has also been observed in other subduction zones (Lay et al, 2012). During the Tohoku <span class="hlt">earthquake</span>, the shallow region radiated coherent low frequency seismic waves whereas the deeper region radiated high frequency waves. Several kinematic inversions (Suzuki et al, 2011; Lee et al, 2011; Bletery et al, 2014; Minson et al, 2014) detected seismic waves below 0.1 Hz coming from the shallow depths that produced slip larger than 40-50 meters close to the trench. Using empirical Green's functions, Asano & Iwata (2012), Kurahashi and Irikura (2011) and others detected regions of strong ground motion radiation at frequencies up to 10 Hz located mainly at the bottom of the plate interface. A recent dynamic <span class="hlt">model</span> that embodies this depth-dependent radiation using physical <span class="hlt">models</span> has been developed by Galvez et al (2014, 2015). In this <span class="hlt">model</span> the rupture process is <span class="hlt">modeled</span> using a linear slip-weakening friction law with slip reactivation on the shallow region of the plate interface (Galvez et al, 2015). This <span class="hlt">model</span> reproduces the multiple seismic wave fronts recorded on the Kik-net seismic network along the Japanese coast up to 0.1 Hz as well as the GPS displacements. In the deep region, the rupture sequence is consistent with the sequence of the strong ground motion generation areas (SMGAs) that radiate high frequency ground motion at the bottom of the plate interface (Kurahashi and Irikura, 2013). It remains challenging to compute ground motions fully coupled with a dynamic rupture up to 10 Hz for a megathrust event.
Therefore, to generate high frequency ground motions, we make use of the stochastic approach of Graves and Pitarka (2010) but add to the source spectrum the slip <span class="hlt">rate</span> function of the dynamic <span class="hlt">model</span>. In this hybrid-dynamic approach, the slip <span class="hlt">rate</span> function is windowed with Gaussian</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PApGe.174.3467G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PApGe.174.3467G"><span>Azimuthal Dependence of the Ground Motion Variability from Scenario <span class="hlt">Modeling</span> of the 2014 Mw6.0 South Napa, California, <span class="hlt">Earthquake</span> Using an Advanced Kinematic Source <span class="hlt">Model</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gallovič, F.</p> <p>2017-09-01</p> <p>Strong ground motion simulations require a physically plausible <span class="hlt">earthquake</span> source <span class="hlt">model</span>. Here, I present the application of such a kinematic <span class="hlt">model</span> introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The <span class="hlt">model</span> is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From an <span class="hlt">earthquake</span> physics point of view, the <span class="hlt">model</span> includes positive correlation between slip and rise time as found in dynamic source simulations.
Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source <span class="hlt">model</span> is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip <span class="hlt">rate</span> functions, not requiring any stochastic Green's functions. The source <span class="hlt">model</span> has been previously validated against the observed data from the very shallow unilateral 2014 Mw6 South Napa, California, <span class="hlt">earthquake</span>; the <span class="hlt">model</span> reproduces well the observed data including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source <span class="hlt">model</span> is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability.
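The omega-squared spectral decay that this kinematic model is built to reproduce can be stated compactly. Below is a generic Brune-type spectrum as a hedged illustration (the corner frequency and moment values are arbitrary; this is not the Ruiz et al. subsource summation itself):

```python
import math

def omega_squared_spectrum(f, moment=1.0, fc=0.5):
    """Brune-type omega-squared displacement amplitude spectrum:
    flat below the corner frequency fc, decaying as f**-2 above it."""
    return moment / (1.0 + (f / fc) ** 2)

# The log-log spectral slope well above the corner approaches -2.
slope = (math.log(omega_squared_spectrum(100.0) / omega_squared_spectrum(50.0))
         / math.log(100.0 / 50.0))
```

The fractal number-size distribution of subsources is what makes the summed spectrum of many such patches retain this omega-squared shape over the whole frequency band.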
I propose a simple <span class="hlt">model</span> reproducing the azimuthal variations of the between-event ground motion</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyA..495..172R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyA..495..172R"><span>Preferential attachment in evolutionary <span class="hlt">earthquake</span> networks</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rezaei, Soghra; Moghaddasi, Hanieh; Darooneh, Amir Hossein</p> <p>2018-04-01</p> <p><span class="hlt">Earthquakes</span> as spatio-temporal complex systems have been recently studied using complex network theory. Seismic networks are dynamical networks due to the addition of new seismic events over time, which establishes new nodes and links in the network. Here we have constructed Iran and Italy seismic networks based on the Hybrid <span class="hlt">Model</span> and tested the preferential attachment hypothesis for the connection of new nodes, which states that it is more probable for newly added nodes to join the highly connected nodes compared to the less connected ones. We showed that the preferential attachment is present in the <span class="hlt">earthquakes</span> network and the attachment <span class="hlt">rate</span> has a linear relationship with node degree.
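The preferential-attachment hypothesis tested here can be illustrated with a toy growth process. The sketch below is a generic Barabási-Albert-style network, not the seismic Hybrid Model: attachment probability proportional to degree is implemented by sampling uniformly from a list of edge endpoints, in which each node appears once per unit of degree.

```python
import random

def grow_ba_network(n, m=2, seed=0):
    """Grow an n-node network by preferential attachment: each new node
    links to m distinct existing nodes chosen with probability
    proportional to degree (uniform draws from the endpoint list)."""
    rng = random.Random(seed)
    degree = [0] * n
    endpoints = []                 # node ids, repeated once per unit degree
    for i in range(m + 1):         # start from a small complete graph
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
            endpoints += [i, j]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:     # preferential choice of m targets
            chosen.add(rng.choice(endpoints))
        for tgt in chosen:
            degree[new] += 1
            degree[tgt] += 1
            endpoints += [new, tgt]
    return degree
```

Because early nodes accumulate degree fastest, the degree distribution becomes heavy-tailed; estimating the empirical attachment rate as a function of node degree from such a simulation recovers the linear relationship the authors report.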
We have also found the seismic passive points, the most probable points to be influenced by other seismic places, using their preferential attachment values.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFM.S51A1910K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFM.S51A1910K"><span>3-D velocity structure <span class="hlt">model</span> for long-period ground motion simulation of the hypothetical Nankai <span class="hlt">Earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kagawa, T.; Petukhin, A.; Koketsu, K.; Miyake, H.; Murotani, S.; Tsurugi, M.</p> <p>2010-12-01</p> <p>A three-dimensional velocity structure <span class="hlt">model</span> of southwest Japan is provided to simulate long-period ground motions due to the hypothetical subduction <span class="hlt">earthquakes</span>. The <span class="hlt">model</span> is constructed from numerous physical explorations conducted in land and offshore areas and from observational studies of natural <span class="hlt">earthquakes</span>. All available information is incorporated to explain the crustal and sedimentary structure. Figure 1 shows an example of a cross section with P-wave velocities. The <span class="hlt">model</span> has been revised through numerous simulations of small to moderate <span class="hlt">earthquakes</span> so as to achieve good agreement with observed arrival times, amplitudes, and also waveforms including surface waves. Figure 2 shows a comparison between observed (dashed line) and simulated (solid line) waveforms. Low-velocity layers have been added on the seismological basement to reproduce observed records. The thickness of the layers has been adjusted through iterative analysis. The final result is found to have good agreement with the results from other physical explorations, e.g. gravity anomalies.
We are planning to make long-period (about 2 to 10 sec or longer) simulations of ground motion due to the hypothetical Nankai <span class="hlt">Earthquake</span> with the 3-D velocity structure <span class="hlt">model</span>. As the first step, we will simulate the observed ground motions of the latest event, which occurred in 1946, to check the source <span class="hlt">model</span> and the newly developed velocity structure <span class="hlt">model</span>. This project is partly supported by the Integrated Research Project for Long-Period Ground Motion Hazard Maps by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). The ground motion data used in this study were provided by the National Research Institute for Earth Science and Disaster Prevention (NIED). Figure 1: An example of a cross section with P-wave velocities. Figure 2: Observed (dashed line) and simulated (solid line) waveforms due to a small <span class="hlt">earthquake</span>
Newmark's method for <span class="hlt">modeling</span> a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to <span class="hlt">earthquake</span> shaking from a specific strong-motion record (<span class="hlt">earthquake</span> acceleration-time history). A modification of Newmark's method, decoupled analysis, allows <span class="hlt">modeling</span> landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified <span class="hlt">model</span> of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 <span class="hlt">earthquakes</span> are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to <span class="hlt">model</span> dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. 
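The rigid-block procedure these programs implement follows a simple integration scheme: the block begins to slide when ground acceleration exceeds the critical (yield) acceleration, and once sliding it decelerates at the critical acceleration until it stops. A minimal sketch with a synthetic one-pulse record and an arbitrary critical acceleration (not the USGS Java code):

```python
def newmark_displacement(accel, dt, a_crit):
    """Rigid-block Newmark sliding-block analysis.

    accel  : ground acceleration time history (m/s^2)
    dt     : time step (s)
    a_crit : critical (yield) acceleration of the block (m/s^2)
    Returns the cumulative downslope displacement (m)."""
    v = 0.0   # velocity of the block relative to the ground
    d = 0.0   # cumulative displacement
    for a in accel:
        if a > a_crit or v > 0.0:
            # while sliding, the block's relative acceleration is a - a_crit
            v = max(0.0, v + (a - a_crit) * dt)
        d += v * dt
    return d
```

For a single rectangular 2 m/s² acceleration pulse lasting 1 s against a 1 m/s² critical acceleration, the block accumulates about 1 m of displacement: 0.5 m while the pulse drives it and 0.5 m while it coasts to a stop.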
A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015NHESS..15.1873S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015NHESS..15.1873S"><span>Pre-<span class="hlt">earthquake</span> magnetic pulses</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Scoville, J.; Heraud, J.; Freund, F.</p> <p>2015-08-01</p> <p>A semiconductor <span class="hlt">model</span> of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to <span class="hlt">earthquakes</span>. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before <span class="hlt">earthquakes</span>. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future <span class="hlt">earthquakes</span>. We couple a drift-diffusion semiconductor <span class="hlt">model</span> to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. 
These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before <span class="hlt">earthquakes</span>, and this suggests that the pulses could be the result of geophysical semiconductor processes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70030430','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70030430"><span>Regional intensity attenuation <span class="hlt">models</span> for France and the estimation of magnitude and location of historical <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Bakun, W.H.; Scotti, O.</p> <p>2006-01-01</p> <p>Intensity assignments for 33 calibration <span class="hlt">earthquakes</span> were used to develop intensity attenuation <span class="hlt">models</span> for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation <span class="hlt">models</span> are aggregated into a French stable continental region <span class="hlt">model</span> and the comparable Provence and Pyrenees region <span class="hlt">models</span> are aggregated into a Southern France <span class="hlt">model</span>. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 <span class="hlt">earthquake</span> in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level.
MI for the 1909 June 11 Trevaresse (Lambesc) <span class="hlt">earthquake</span> near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical <span class="hlt">earthquakes</span>. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors. Journal compilation © 2006 RAS.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018Tectp.733..232A','NASAADS'); return false;"
href="http://adsabs.harvard.edu/abs/2018Tectp.733..232A"><span><span class="hlt">Earthquake</span> cycle simulations with <span class="hlt">rate</span>-and-state friction and power-law viscoelasticity</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Allison, Kali L.; Dunham, Eric M.</p> <p>2018-05-01</p> <p>We simulate <span class="hlt">earthquake</span> cycles with <span class="hlt">rate</span>-and-state fault friction and off-fault power-law viscoelasticity for the classic 2D antiplane shear problem of a vertical, strike-slip plate boundary fault. We investigate the interaction between fault slip and bulk viscous flow with experimentally-based flow laws for quartz-diorite and olivine for the crust and mantle, respectively. Simulations using three linear geotherms (dT/dz = 20, 25, and 30 K/km) produce different deformation styles at depth, ranging from significant interseismic fault creep to purely bulk viscous flow. However, they have almost identical <span class="hlt">earthquake</span> recurrence interval, nucleation depth, and down-dip coseismic slip limit. Despite these similarities, variations in the predicted surface deformation might permit discrimination of the deformation mechanism using geodetic observations. Additionally, in the 25 and 30 K/km simulations, the crust drags the mantle; the 20 K/km simulation also predicts this, except within 10 km of the fault where the reverse occurs. However, basal tractions play a minor role in the overall force balance of the lithosphere, at least for the flow laws used in our study. Therefore, the depth-integrated stress on the fault is balanced primarily by shear stress on vertical, fault-parallel planes. Because strain <span class="hlt">rates</span> are higher directly below the fault than far from it, stresses are also higher. 
Thus, the upper crust far from the fault bears a substantial part of the tectonic load, resulting in unrealistically high stresses. In the real Earth, this might lead to distributed plastic deformation or formation of subparallel faults. Alternatively, fault pore pressures in excess of hydrostatic and/or weakening mechanisms such as grain size reduction and thermo-mechanical coupling could lower the strength of the ductile fault root in the lower crust and, concomitantly, off-fault upper crustal stresses.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AGUFM.S51A1399C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AGUFM.S51A1399C"><span>Time-dependent <span class="hlt">earthquake</span> forecasting: Method and application to the Italian region</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chan, C.; Sorensen, M. B.; Grünthal, G.; Hakimhashemi, A.; Heidbach, O.; Stromeyer, D.; Bosse, C.</p> <p>2009-12-01</p> <p>We develop a new approach for time-dependent <span class="hlt">earthquake</span> forecasting and apply it to the Italian region. In our approach, the seismicity density is represented by a bandwidth function as a smoothing Kernel in the neighboring region of <span class="hlt">earthquakes</span>. To consider the fault-interaction-based forecasting, we calculate the Coulomb stress change imparted by each <span class="hlt">earthquake</span> in the study area. From this, the change of seismicity <span class="hlt">rate</span> as a function of time can be estimated by the concept of <span class="hlt">rate</span>-and-state stress transfer. We apply our approach to the region of Italy and <span class="hlt">earthquakes</span> that occurred before 2003 to generate the seismicity density. 
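The smoothing-kernel representation of seismicity density described in this abstract can be sketched as a Gaussian kernel sum over past epicentres. The bandwidth and coordinates below are invented for illustration; the paper's actual bandwidth function is event-dependent rather than a single fixed value.

```python
import math

def seismicity_density(x, y, epicenters, bandwidth=10.0):
    """Smoothed seismicity density at (x, y): a normalized sum of
    isotropic Gaussian kernels centred on past epicentres
    (coordinates and bandwidth in km; values illustrative)."""
    h2 = bandwidth ** 2
    total = sum(math.exp(-((x - ex) ** 2 + (y - ey) ** 2) / (2.0 * h2))
                for ex, ey in epicenters)
    return total / (2.0 * math.pi * h2 * len(epicenters))
```

Percentile thresholds like those used in the validation (e.g. the highest 5th percentile) can then be obtained by evaluating this density on a grid and ranking the values.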
To validate our approach, we compare our estimated seismicity density with the distribution of <span class="hlt">earthquakes</span> with M≥3.8 after 2004. A positive correlation is found and all of the examined <span class="hlt">earthquakes</span> lie within the highest 66th percentile of seismicity density in the study region. Furthermore, the seismicity density corresponding to the epicenter of the 2009 April 6, Mw = 6.3, L’Aquila <span class="hlt">earthquake</span> lies within the highest 5th percentile. For the time-dependent seismicity <span class="hlt">rate</span> change, we estimate the <span class="hlt">rate</span>-and-state stress transfer imparted by the M≥5.0 <span class="hlt">earthquakes</span> that occurred in the past 50 years. This suggests that the seismicity <span class="hlt">rate</span> has increased at the locations of 65% of the examined <span class="hlt">earthquakes</span>. Applying this approach to the L’Aquila sequence, considering seven M≥5.0 aftershocks as well as the main shock, yields significant forecasts of the aftershock distribution in both space and time.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1910401S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1910401S"><span>Miocene to present deformation <span class="hlt">rates</span> in the Yakima Fold Province and implications for <span class="hlt">earthquake</span> hazards in central Washington State, USA</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Staisch, Lydia; Sherrod, Brian; Kelsey, Harvey; Blakely, Richard; Möller, Andreas; Styron, Richard</p> <p>2017-04-01</p> <p>The Yakima fold province (YFP), located in the Cascadia backarc of central Washington, is a region of active distributed deformation that accommodates NNE-SSW shortening.
Geodetic data show modern strain accumulation of 2 mm/yr across this large-scale fold province. Deformation <span class="hlt">rates</span> on individual structures, however, are difficult to assess from GPS data given low strain <span class="hlt">rates</span> and the relatively short time period of geodetic observation. Geomorphic and geologic records, on the other hand, span sufficient time to investigate deformation <span class="hlt">rates</span> on the folds. Resolving fault geometries and slip <span class="hlt">rates</span> of the YFP is imperative to seismic hazard assessment for nearby infrastructure, including a large nuclear waste facility and hydroelectric dams along the Columbia and Yakima Rivers. We present new results on the timing and magnitude of deformation across several Yakima folds, including the Manastash Ridge, Umtanum Ridge, and Saddle Mountains anticlines. We constructed several line-balanced cross sections across the folds to calculate the magnitude of total shortening since Miocene time. To further constrain our structural <span class="hlt">models</span>, we include forward-<span class="hlt">modeling</span> of magnetic and gravity anomaly data. We estimate total shortening between 1.0 and 2.4 km across individual folds, decreasing eastward, consistent with geodetically and geologically measured clockwise rotation. Importantly, we find that thrust faults reactivate and invert normal faults in the basement, and do not appear to sole into a common décollement at shallow to mid-crustal depth. We constrain spatial and temporal variability in deformation <span class="hlt">rates</span> along the Saddle Mountains, Manastash Ridge and Umtanum Ridge anticlines using geomorphic and stratigraphic markers of topographic evolution. From stratigraphy and geochronology of growth strata along the Saddle Mountains we find that the <span class="hlt">rate</span> of deformation has increased up to six-fold since late Miocene time.
To constrain deformation <span class="hlt">rates</span> along other Yakima folds</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/of/1999/0517/pdf/of99-517.pdf','USGSPUBS'); return false;" href="https://pubs.usgs.gov/of/1999/0517/pdf/of99-517.pdf"><span><span class="hlt">Earthquake</span> probabilities in the San Francisco Bay Region: 2000 to 2030 - a summary of findings</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p></p> <p>1999-01-01</p> <p>The San Francisco Bay region sits astride a dangerous “<span class="hlt">earthquake</span> machine,” the tectonic boundary between the Pacific and North American Plates. The region has experienced major and destructive <span class="hlt">earthquakes</span> in 1838, 1868, 1906, and 1989, and future large <span class="hlt">earthquakes</span> are a certainty. The ability to prepare for large <span class="hlt">earthquakes</span> is critical to saving lives and reducing damage to property and infrastructure. An increased understanding of the timing, size, location, and effects of these likely <span class="hlt">earthquakes</span> is a necessary component in any effective program of preparedness. This study reports on the probabilities of occurrence of major <span class="hlt">earthquakes</span> in the San Francisco Bay region (SFBR) for the three decades 2000 to 2030. The SFBR extends from Healdsburg on the northwest to Salinas on the southeast and encloses the entire metropolitan area, including its most rapidly expanding urban and suburban areas. In this study a “major” <span class="hlt">earthquake</span> is defined as one with M≥6.7 (where M is moment magnitude).
As experience from the Northridge, California (M6.7, 1994) and Kobe, Japan (M6.9, 1995) <span class="hlt">earthquakes</span> has shown us, <span class="hlt">earthquakes</span> of this size can have a disastrous impact on the social and economic fabric of densely urbanized areas. To reevaluate the probability of large <span class="hlt">earthquakes</span> striking the SFBR, the U.S. Geological Survey solicited data, interpretations, and analyses from dozens of scientists representing a wide cross-section of the Earth-science community (Appendix A). The primary approach of this new Working Group (WG99) was to develop a comprehensive, regional <span class="hlt">model</span> for the long-term occurrence of <span class="hlt">earthquakes</span>, founded on geologic and geophysical observations and constrained by plate tectonics. The <span class="hlt">model</span> considers a broad range of observations and their possible interpretations. Using this <span class="hlt">model</span>, we estimate the <span class="hlt">rates</span> of occurrence of <span class="hlt">earthquakes</span> and 30-year <span class="hlt">earthquake</span> probabilities.
Our study considers a range of magnitudes for <span class="hlt">earthquakes</span> on the major faults in the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFM.S33B2089Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFM.S33B2089Z"><span>Purposes and methods of scoring <span class="hlt">earthquake</span> forecasts</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhuang, J.</p> <p>2010-12-01</p> <p>Studies on <span class="hlt">earthquake</span> prediction or forecasts serve two purposes: one is to give a systematic estimation of <span class="hlt">earthquake</span> risks in some particular region and period in order to advise governments and enterprises on reducing disasters; the other is to search for reliable precursors that can be used to improve <span class="hlt">earthquake</span> prediction or forecasts. For the first case, a complete score is necessary, while for the latter case a partial score, which can be used to evaluate whether the forecasts or predictions have an advantage over a well-known <span class="hlt">model</span>, is necessary. This study reviews different scoring methods for evaluating the performance of <span class="hlt">earthquake</span> prediction and forecasts.
In particular, the recently developed gambling scoring method shows its capacity to identify strengths of an <span class="hlt">earthquake</span> prediction algorithm or <span class="hlt">model</span> that a reference <span class="hlt">model</span> lacks, even if its overall performance is no better than that of the reference <span class="hlt">model</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoRL..45.1786P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoRL..45.1786P"><span>The Geological Susceptibility of Induced <span class="hlt">Earthquakes</span> in the Duvernay Play</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pawley, Steven; Schultz, Ryan; Playter, Tiffany; Corlett, Hilary; Shipman, Todd; Lyster, Steven; Hauck, Tyler</p> <p>2018-02-01</p> <p>Presently, consensus on the incorporation of induced <span class="hlt">earthquakes</span> into seismic hazard has yet to be established. For example, the nonstationary, spatiotemporal nature of induced <span class="hlt">earthquakes</span> is not well understood. Specific to the Western Canada Sedimentary Basin, geological bias in seismogenic activation potential has been suggested to control the spatial distribution of induced <span class="hlt">earthquakes</span> regionally. In this paper, we train a machine learning algorithm to systematically evaluate tectonic, geomechanical, and hydrological proxies suspected to control induced seismicity. Feature importance suggests that proximity to basement, in situ stress, proximity to fossil reef margins, lithium concentration, and <span class="hlt">rate</span> of natural seismicity are among the strongest <span class="hlt">model</span> predictors.
Our derived seismogenic potential map faithfully reproduces the current distribution of induced seismicity and is suggestive of other regions which may be prone to induced <span class="hlt">earthquakes</span>. The refinement of induced seismicity geological susceptibility may become an important technique to identify significant underlying geological features and address induced seismic hazard forecasting issues.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70192250','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70192250"><span>The 2011 M = 9.0 Tohoku oki <span class="hlt">earthquake</span> more than doubled the probability of large shocks beneath Tokyo</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Toda, Shinji; Stein, Ross S.</p> <p>2013-01-01</p> <p>1] The Kanto seismic corridor surrounding Tokyo has hosted four to five M ≥ 7 <span class="hlt">earthquakes</span> in the past 400 years. Immediately after the Tohoku <span class="hlt">earthquake</span>, the seismicity <span class="hlt">rate</span> in the corridor jumped 10-fold, while the <span class="hlt">rate</span> of normal focal mechanisms dropped in half. The seismicity <span class="hlt">rate</span> decayed for 6–12 months, after which it steadied at three times the pre-Tohoku <span class="hlt">rate</span>. The seismicity <span class="hlt">rate</span> jump and decay to a new <span class="hlt">rate</span>, as well as the focal mechanism change, can be explained by the static stress imparted by the Tohoku rupture and postseismic creep to Kanto faults. We therefore fit the seismicity observations to a <span class="hlt">rate</span>/state Coulomb <span class="hlt">model</span>, which we use to forecast the time-dependent probability of large <span class="hlt">earthquakes</span> in the Kanto seismic corridor. 
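A rate/state Coulomb forecast of this kind typically rests on Dieterich's (1994) expression for the seismicity rate following a stress step, with the expected event count converted to a probability through a Poisson assumption. The sketch below uses invented parameter values, not Toda and Stein's calibrated ones:

```python
import math

def rate_after_stress_step(t, r_bg, dcff, a_sigma, t_a):
    """Dieterich (1994) seismicity rate at time t after a Coulomb stress
    step dcff: the rate jumps by exp(dcff / a_sigma), then relaxes back
    to the background rate r_bg over roughly the aftershock duration t_a."""
    gamma = (math.exp(-dcff / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r_bg / (1.0 + gamma)

def prob_large_quake(t1, t2, r_bg, dcff, a_sigma, t_a, n=2000):
    """Poisson probability of at least one event in [t1, t2], with the
    expected count from trapezoidal integration of the transient rate."""
    dt = (t2 - t1) / n
    rs = [rate_after_stress_step(t1 + i * dt, r_bg, dcff, a_sigma, t_a)
          for i in range(n + 1)]
    expected = dt * (sum(rs) - 0.5 * (rs[0] + rs[-1]))
    return 1.0 - math.exp(-expected)
```

A positive stress step therefore raises the probability over any window that overlaps the transient, which is the mechanism behind the elevated Kanto estimate quoted above.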
We estimate a 17% probability of a M ≥ 7.0 shock over the 5 year prospective period 11 March 2013 to 10 March 2018, two-and-a-half times the probability had the Tohoku <span class="hlt">earthquake</span> not struck.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013GeoJI.193..914N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013GeoJI.193..914N"><span>Time-dependent <span class="hlt">earthquake</span> probability calculations for southern Kanto after the 2011 M9.0 Tohoku <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.</p> <p>2013-05-01</p> <p>Seismicity in southern Kanto activated with the 2011 March 11 Tohoku <span class="hlt">earthquake</span> of magnitude M9.0, but does this cause a significant difference in the probability of more <span class="hlt">earthquakes</span> at present or in the future? To answer this question, we examine the effect of a change in the seismicity <span class="hlt">rate</span> on the probability of <span class="hlt">earthquakes</span>. Our data set is from the Japan Meteorological Agency <span class="hlt">earthquake</span> catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent <span class="hlt">earthquake</span> probabilistic calculations, often used for aftershock hazard assessment, which are based on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well <span class="hlt">modelled</span> by the GR law with b ˜ 1. Then, there is good agreement with the OU law with p ˜ 0.5, which indicates that the slow decay was notably significant. 
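The GR-plus-OU probability calculation sketched in this abstract can be written down compactly. The parameter values below are illustrative, not those estimated in the study; only the low p ~ 0.5 is carried over, to show why such slow decay matters so much for long-window forecasts:

```python
import math

def omori_gr_count(a, b, p, c, m_main, m_target, t1, t2):
    """Expected number of M >= m_target events between t1 and t2 days after
    a mainshock of magnitude m_main, combining Gutenberg-Richter scaling
    with Omori-Utsu decay (a Reasenberg-Jones-type rate)."""
    k = 10 ** (a + b * (m_main - m_target))  # productivity at this magnitude
    if abs(p - 1.0) < 1e-9:
        integral = math.log((t2 + c) / (t1 + c))
    else:
        integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return k * integral

def prob_at_least_one(n_expected):
    return 1.0 - math.exp(-n_expected)

# With p ~ 0.5 the sequence decays far more slowly than a typical p ~ 1.1
# sequence (a, b, c and the magnitudes are illustrative values only):
slow = omori_gr_count(-3.0, 1.0, 0.5, 0.01, 9.0, 7.0, 100, 1000)
fast = omori_gr_count(-3.0, 1.0, 1.1, 0.01, 9.0, 7.0, 100, 1000)
print(f"expected counts, days 100-1000: p=0.5 -> {slow:.2f}, p=1.1 -> {fast:.2f}")
print(f"P(at least one): {prob_at_least_one(slow):.2f} vs {prob_at_least_one(fast):.2f}")
```

For p < 1 the time integral of the Omori-Utsu rate grows without bound, which is exactly why a p ~ 0.5 sequence keeps elevated probabilities alive over multi-year windows.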
Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of <span class="hlt">earthquakes</span> is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku <span class="hlt">earthquake</span>. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26643242','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26643242"><span>A contrast study of the traumatic condition between the wounded in 5.12 Wenchuan <span class="hlt">earthquake</span> and 4.25 Nepal <span class="hlt">earthquake</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ding, Sheng; Hu, Yonghe; Zhang, Zhongkui; Wang, Ting</p> <p>2015-01-01</p> <p>5.12 Wenchuan <span class="hlt">earthquake</span> and 4.25 Nepal <span class="hlt">earthquake</span> are of similar magnitude, but the climate and geographic environment are totally different. 
Our team carried out medical rescue in both disasters, so we would like to compare the different traumatic conditions of the wounded in two <span class="hlt">earthquakes</span>. The clinical data of the wounded respectively in 5.12 Wenchuan <span class="hlt">earthquake</span> and 4.25 Nepal <span class="hlt">earthquake</span> rescued by Chengdu Military General Hospital were retrospectively analyzed. Then a contrast study between the wounded was conducted in terms of age, sex, injury mechanisms, traumatic conditions, complications and prognosis. Three days after 5.12 Wenchuan <span class="hlt">earthquake</span>, 465 cases of the wounded were hospitalized in Chengdu Military General Hospital, including 245 males (52.7%) and 220 females (47.3%) with the average age of (47.6±22.7) years. Our team carried out humanitarian relief in Katmandu after 4.25 Nepal <span class="hlt">earthquake</span>. Three days after this disaster, 71 cases were treated in our field hospital, including 37 males (52.1%) and 34 females (47.9%) with the mean age of (44.8±22.9) years. There was no obvious difference in sex and mean age between two groups, but the age distribution was a little different: there were more wounded people at the age over 60 years in 4.25 Nepal <span class="hlt">earthquake</span> (p<0.01) while more wounded people at the age between 21 and 60 years in 5.12 Wenchuan <span class="hlt">earthquake</span> (p<0.05). The main cause of injury in both disasters was bruise by heavy drops but 5.12 Wenchuan <span class="hlt">earthquake</span> had a higher <span class="hlt">rate</span> of bruise injury and crush injury (p<0.05) while 4.25 Nepal <span class="hlt">earthquake</span> had a higher <span class="hlt">rate</span> of falling injury (p<0.01). Limb fracture was the most common injury type in both disasters. 
However, compared with 5.12 Wenchuan <span class="hlt">earthquake</span>, 4.25 Nepal <span class="hlt">earthquake</span> had a much higher incidence of limb fractures (p<0.01), lung infection (p<0.01) and malnutrition (p<0.05), but a lower incidence of thoracic injury (p<0.05) and multiple injury (p<0.05). The other complications and death <span class="hlt">rate</span></p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.S51A2648G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.S51A2648G"><span>Constraining Source Locations of Shallow Subduction Megathrust <span class="hlt">Earthquakes</span> in 1-D and 3-D Velocity <span class="hlt">Models</span> - A Case Study of the 2002 Mw=6.4 Osa <span class="hlt">Earthquake</span>, Costa Rica</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Grevemeyer, I.; Arroyo, I. G.</p> <p>2015-12-01</p> <p><span class="hlt">Earthquake</span> source locations are routinely constrained using a global 1-D Earth <span class="hlt">model</span>. However, the source location might be associated with large uncertainties. This is definitely the case for <span class="hlt">earthquakes</span> occurring at active continental margins where thin oceanic crust subducts below thick continental crust and hence large lateral changes in crustal thickness occur as a function of distance to the deep-sea trench. Here, we conducted a case study of the 2002 Mw 6.4 Osa thrust <span class="hlt">earthquake</span> in Costa Rica that was followed by an aftershock sequence. Initial relocations indicated that the main shock occurred fairly trenchward of most large <span class="hlt">earthquakes</span> along the Middle America Trench off central Costa Rica. 
The <span class="hlt">earthquake</span> sequence occurred while a temporary network of ocean-bottom-hydrophones and land stations 80 km to the northwest were deployed. By adding readings from permanent Costa Rican stations, we obtain uncommon P wave coverage of a large subduction zone <span class="hlt">earthquake</span>. We relocated this catalog with a nonlinear probabilistic approach, using a 1-D and two 3-D P-wave velocity <span class="hlt">models</span>. The 3-D <span class="hlt">models</span> were either derived from 3-D tomography based on onshore stations or built a priori from seismic refraction data. All epicentres occurred close to the trench axis, but depth estimates vary by several tens of kilometres. Based on the epicentres and constraints from seismic reflection data, the main shock occurred 25 km from the trench and probably along the plate interface at 5-10 km depth. The source location that agreed best with the geology was based on the 3-D velocity <span class="hlt">model</span> derived from a priori data. Aftershocks propagated downdip to the area of a 1999 Mw 6.9 sequence and partially overlapped it. 
The results indicate that underthrusting of the young and buoyant Cocos Ridge has created conditions for interplate seismogenesis shallower and closer to the trench axis than elsewhere along the central Costa Rica margin.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.S33E..05N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.S33E..05N"><span>A New Simplified Source <span class="hlt">Model</span> to Explain Strong Ground Motions from a Mega-Thrust <span class="hlt">Earthquake</span> - Application to the 2011 Tohoku <span class="hlt">Earthquake</span> (Mw9.0) -</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nozu, A.</p> <p>2013-12-01</p> <p>A new simplified source <span class="hlt">model</span> is proposed to explain strong ground motions from a mega-thrust <span class="hlt">earthquake</span>. The proposed <span class="hlt">model</span> is simpler, and involves fewer <span class="hlt">model</span> parameters, than the conventional characterized source <span class="hlt">model</span>, which itself is a simplified expression of an actual <span class="hlt">earthquake</span> source. In the proposed <span class="hlt">model</span>, the spatio-temporal distribution of slip within a subevent is not <span class="hlt">modeled</span>. Instead, the source spectrum associated with the rupture of a subevent is <span class="hlt">modeled</span> and it is assumed to follow the omega-square <span class="hlt">model</span>. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. 
Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source <span class="hlt">model</span> consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of the subevent can be taken into account in the <span class="hlt">model</span>, because the corner frequency of the subevent, which is inversely proportional to the length of the subevent, is included in the <span class="hlt">model</span>. Thus, the proposed <span class="hlt">model</span> is referred to as the 'pseudo point-source <span class="hlt">model</span>'. To examine the applicability of the <span class="hlt">model</span>, a pseudo point-source <span class="hlt">model</span> was developed for the 2011 Tohoku <span class="hlt">earthquake</span>. The <span class="hlt">model</span> comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source <span class="hlt">model</span> agree well with the observed ones, indicating the applicability of the <span class="hlt">model</span>. Then the results were compared with the results of a super-asperity (SPGA) <span class="hlt">model</span> of the same <span class="hlt">earthquake</span> (Nozu, 2012, AGU), which can be considered as an</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.G31A0892X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.G31A0892X"><span>Coseismic gravitational potential energy changes induced by global <span class="hlt">earthquakes</span> during 1976 to 2016</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xu, C.; Chao, B. 
F.</p> <p>2017-12-01</p> <p>We compute the coseismic change in the gravitational potential energy Eg using the spherical-Earth elastic dislocation theory and either the fault <span class="hlt">model</span> treated as a point source or the finite fault <span class="hlt">model</span>. The <span class="hlt">rate</span> of the accumulative coseismic Eg loss produced by historical <span class="hlt">earthquakes</span> from 1976 to 2016 (about 42,000 events) using the GCMT catalogue is estimated to be on the order of -2.1×10^20 J/a, or -6.7 TW (1 TW = 10^12 watt), amounting to ~15% of the total terrestrial heat flow. The energy loss is dominated by the thrust-faulting, especially the mega-thrust <span class="hlt">earthquakes</span> such as the 2004 Sumatra <span class="hlt">earthquake</span> (Mw 9.0) and the 2011 Tohoku-Oki <span class="hlt">earthquake</span> (Mw 9.1). It is notable that the very deep-focus <span class="hlt">earthquakes</span>, the 1994 Bolivia <span class="hlt">earthquake</span> (Mw 8.2) and the 2013 Okhotsk <span class="hlt">earthquake</span> (Mw 8.3), produced significant overall coseismic Eg gain according to our calculation. The accumulative coseismic Eg is mainly released in the mantle with a decreasing tendency, and the core of the Earth also lost coseismic Eg, but with a relatively smaller magnitude. By contrast, the crust of the Earth gains Eg cumulatively because of the coseismic deformations. 
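As a quick arithmetic check of the quoted rate, converting an energy loss of 2.1×10^20 J/a into terawatts and comparing it against the total terrestrial heat flow (the ~44 TW used below is one commonly cited value, an assumption of this sketch) recovers the abstract's figures:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 s

eg_loss_per_year = 2.1e20   # J/a, magnitude of the rate quoted in the abstract
power_tw = eg_loss_per_year / SECONDS_PER_YEAR / 1e12

heat_flow_tw = 44.0  # total terrestrial heat flow, one commonly cited value
print(f"{power_tw:.1f} TW, {power_tw / heat_flow_tw:.0%} of heat flow")
# -> 6.7 TW, 15% of heat flow
```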
We further investigate the tectonic signature in these coseismic crustal gravitational potential energy changes in complex tectonic zones, such as the Taiwan region and the northeastern margin of the Tibetan Plateau.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S41C..08C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S41C..08C"><span>Tectonic tremor activity associated with teleseismic and nearby <span class="hlt">earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chao, K.; Obara, K.; Peng, Z.; Pu, H. C.; Frank, W.; Prieto, G. A.; Wech, A.; Hsu, Y. J.; Yu, C.; Van der Lee, S.; Apley, D. W.</p> <p>2016-12-01</p> <p>Tectonic tremor is an extremely stress-sensitive seismic phenomenon located in the brittle-ductile transition section of a fault. To better understand the stress interaction between tremor and <span class="hlt">earthquakes</span>, we conduct the following studies: (1) search for triggered tremor globally, (2) examine ambient tremor activities associated with distant <span class="hlt">earthquakes</span>, and (3) quantify the temporal variation of ambient tremor activity before and after nearby <span class="hlt">earthquakes</span>. First, we developed a Matlab toolbox to enhance the search for triggered tremor globally. We have discovered new tremor sources on inland faults in Kyushu, Kanto, and Hokkaido in Japan, in southern Chile, Ecuador, and central Colombia in South America, and in southern Italy. Our findings suggest that tremor is more common than previously believed and indicate the potential existence of ambient tremor in the triggered tremor active regions. Second, we apply statistical analysis to examine whether the long-term ambient tremor <span class="hlt">rate</span> is affected by the dynamic stress of teleseismic <span class="hlt">earthquakes</span>. 
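One standard way to test whether a tremor or seismicity rate changed after a candidate trigger is the Matthews-Reasenberg beta statistic; the abstract does not name its statistical method, so this is offered only as an illustrative stand-in, and the event counts below are invented:

```python
import math

def beta_statistic(n_total, n_window, t_total, t_window):
    """Matthews-Reasenberg beta statistic: how anomalous is the event count
    in a sub-window, relative to a uniform-rate expectation?"""
    frac = t_window / t_total
    expected = n_total * frac
    variance = n_total * frac * (1.0 - frac)
    return (n_window - expected) / math.sqrt(variance)

# Illustrative counts: 120 tremor episodes in 100 days, 18 of them in the
# 10 days after a teleseismic surface-wave passage.
beta = beta_statistic(120, 18, 100.0, 10.0)
print(round(beta, 2))   # |beta| > ~2 is conventionally taken as significant
# -> 1.83
```

In this made-up example beta stays below the conventional significance threshold, the kind of outcome behind a "no apparent increase" conclusion.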
We analyzed the data in Nankai, Hokkaido, Cascadia, and Taiwan. Our preliminary results did not show an apparent increase of ambient tremor <span class="hlt">rate</span> after the passing of surface waves. Third, we quantify temporal changes in ambient tremor activity before and after the occurrence of local <span class="hlt">earthquakes</span> under the southern Central Range of Taiwan with magnitudes of >=5.5 from 2004 to 2016. For a particular case, we found a temporal variation of tremor <span class="hlt">rate</span> before and after the 2010/03/04 Mw6.3 <span class="hlt">earthquake</span>, located about 20 km away from the active tremor source. The long-term increase in the tremor <span class="hlt">rate</span> after the <span class="hlt">earthquake</span> could have been caused by an increase in static stress following the mainshock. For comparison, clear evidence from seismic and GPS observations indicates a short-term increase in the tremor <span class="hlt">rate</span> a few weeks before the mainshock. The increase in the tremor <span class="hlt">rate</span> before the mainshock could correlate with stress changes</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.T41D..06K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.T41D..06K"><span>Comparison of Observed Spatio-temporal Aftershock Patterns with <span class="hlt">Earthquake</span> Simulator Results</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.</p> <p>2013-12-01</p> <p>Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance to properly quantify and mitigate seismic hazards. 
Estimates of <span class="hlt">earthquake</span> probability are complicated by the uncertainty of whether a rupture will stop at or jump a fault step-over, which affects both the magnitude and frequency of occurrence of <span class="hlt">earthquakes</span>. In recent years, <span class="hlt">earthquake</span> simulators and dynamic rupture <span class="hlt">models</span> have begun to address the effects of complex fault geometries on <span class="hlt">earthquake</span> ground motions and rupture propagation. Early <span class="hlt">models</span> incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al. (2013) on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we employ an innovative approach of incorporating this fine-scale fault structure defined through seismological, geologic and geodetic means in the physics-based <span class="hlt">earthquake</span> simulator, RSQSim, to explore the effects of fine-scale structures on stress transfer and rupture propagation and examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These <span class="hlt">models</span> produce aftershock activity that enables comparison between the observed and predicted distribution and allows for examination of the mechanisms that control them. 
We investigate how the spatial and temporal distributions of aftershocks are affected by</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003AGUFM.S41B..01K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003AGUFM.S41B..01K"><span>Energy Partition and Variability of <span class="hlt">Earthquakes</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kanamori, H.</p> <p>2003-12-01</p> <p> mechanically dissipated during faulting. In the context of the slip-weakening <span class="hlt">model</span>, EG can be estimated from ΔW0 and ER. Alternatively, EG can be estimated from the laboratory data on the surface energy, the grain size and the total volume of newly formed fault gouge. This method suggests that, for crustal <span class="hlt">earthquakes</span>, EG/ER is very small, less than 0.2 even for extreme cases, for <span class="hlt">earthquakes</span> with Mw > 7. This is consistent with the EG estimated with seismological methods, and the fast rupture speeds during most large <span class="hlt">earthquakes</span>. For shallow subduction-zone <span class="hlt">earthquakes</span>, EG/ER varies substantially depending on the tectonic environments. EH: Direct estimation of EH is difficult. However, even with modest friction, EH can be very large, enough to melt or even dissociate a significant amount of material near the slip zone for large events with large slip, and the associated thermal effects may have significant effects on fault dynamics. The energy partition varies significantly for different types of <span class="hlt">earthquakes</span>, e.g. 
large <span class="hlt">earthquakes</span> on mature faults, large <span class="hlt">earthquakes</span> on faults with low slip <span class="hlt">rates</span>, subduction-zone <span class="hlt">earthquakes</span>, deep focus <span class="hlt">earthquakes</span> etc.; this variability manifests itself in the difference in the evolution of seismic slip pattern. The different behaviors will be illustrated using the examples for large <span class="hlt">earthquakes</span>, including the 2001 Kunlun, the 1998 Balleny Is., the 1994 Bolivia, the 2001 India <span class="hlt">earthquake</span>, the 1999 Chi-Chi, and the 2002 Denali <span class="hlt">earthquakes</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S21B0700L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S21B0700L"><span>Comparison of aftershock sequences between 1975 Haicheng <span class="hlt">earthquake</span> and 1976 Tangshan <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, B.</p> <p>2017-12-01</p> <p>The 1975 ML 7.3 Haicheng <span class="hlt">earthquake</span> and the 1976 ML 7.8 Tangshan <span class="hlt">earthquake</span> occurred in the same tectonic unit. There are significant differences in spatial-temporal distribution, number of aftershocks and time duration for the aftershock sequences following these two main shocks. As is well known, aftershocks could be triggered by the regional seismicity change derived from the main shock, which was caused by the Coulomb stress perturbation. Based on the <span class="hlt">rate</span>- and state-dependent friction law, we quantitatively estimated the possible aftershock time duration with a combination of seismicity data, and compared the results from different approaches. 
The results indicate that the aftershock time duration for the Tangshan main shock is several times that of the Haicheng main shock. This can be explained by the significant relationship between aftershock time duration and <span class="hlt">earthquake</span> nucleation history, and the normal stress and shear stress loading rate on the fault. In fact, the obvious difference in nucleation history between these two main shocks is the foreshocks. The 1975 Haicheng <span class="hlt">earthquake</span> had clear and long foreshocks, while the 1976 Tangshan <span class="hlt">earthquake</span> did not have clear foreshocks. In that case, abundant foreshocks may mean a long and active nucleation process that may have changed (weakened) the rocks in the source regions, so they should have shorter aftershock sequences, because stress in weak rocks decays faster.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008AGUFM.T53E2009M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008AGUFM.T53E2009M"><span><span class="hlt">Earthquake</span> and Tsunami History and Hazards of Eastern Indonesia</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Major, J. R.; Robinson, J. S.; Harris, R. A.</p> <p>2008-12-01</p> <p>Western Indonesia (i.e. Java and Sumatra) has received much attention from geoscientists, especially in recent years due to events such as the Sumatra-Andaman event of 2004. However, the seismic history of eastern Indonesia is not widely known, notwithstanding the high <span class="hlt">rate</span> of seismic activity in the area and high convergence <span class="hlt">rates</span>. Not only do geologic hazards (i.e. 
strong <span class="hlt">earthquakes</span>, tsunami, and explosive volcanoes) comparable to those in the western part of the country exist, but the population has increased nearly 10-fold in the last century. Our historical research of <span class="hlt">earthquakes</span> and tsunami in eastern Indonesia based primarily on records of Dutch colonists has uncovered a violent history of <span class="hlt">earthquakes</span> and tsunami from 1608 to 1877. During this time eastern Indonesia experienced over 30 significant <span class="hlt">earthquakes</span> and 35 tsunamis. Most of these events were much larger than any recorded in the last century. Due to this marked quiescence over the past century, and recent events in the Sunda arc over the past several years, we have initiated a new investigation of the region that integrates these historic events, field investigations, and, in the future, tsunami <span class="hlt">modeling</span>. A more complete and comprehensive seismic history of eastern Indonesia is necessary for effective risk assessment. This information, along with renewed efforts by scientists and government, will be crucial for disaster mitigation and to save lives.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.G21A1012M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.G21A1012M"><span>Numerical <span class="hlt">Modeling</span> of Initial Slip and Poroelastic Effects of the 2012 Costa Rica <span class="hlt">Earthquake</span> Using GPS Data</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McCormack, K. A.; Hesse, M. A.; Stadler, G.</p> <p>2015-12-01</p> <p>Remote sensing and geodetic measurements are providing a new wealth of spatially distributed, time-series data that have the ability to improve our understanding of co-seismic rupture and post-seismic processes in subduction zones. 
We formulate a Bayesian inverse problem to infer the slip distribution on the plate interface using an elastic finite element <span class="hlt">model</span> and GPS surface deformation measurements. We present an application to the co-seismic displacement during the 2012 <span class="hlt">earthquake</span> on the Nicoya Peninsula in Costa Rica, which is uniquely positioned close to the Middle America Trench and directly over the seismogenic zone of the plate interface. The results of our inversion are then used as an initial condition in a coupled poroelastic forward <span class="hlt">model</span> to investigate the role of poroelastic effects on post-seismic deformation and stress transfer. From this study we identify a horseshoe-shaped rupture area with a maximum slip of approximately 2.5 meters surrounding a locked patch that is likely to release stress in the future. We <span class="hlt">model</span> the co-seismic pore pressure change as well as the pressure evolution and resulting deformation in the months after the <span class="hlt">earthquake</span>. The results of the forward <span class="hlt">model</span> indicate that <span class="hlt">earthquake</span>-induced pore pressure changes dissipate quickly near the surface, resulting in relaxation of the surface in the seven to ten days following the <span class="hlt">earthquake</span>. Near the subducting slab interface, pore pressure changes are an order of magnitude larger and may persist for many months after the <span class="hlt">earthquake</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ready.gov/earthquakes','NIH-MEDLINEPLUS'); return false;" href="https://www.ready.gov/earthquakes"><span><span class="hlt">Earthquakes</span></span></a></p> <p><a target="_blank" href="http://medlineplus.gov/">MedlinePlus</a></p> <p></p> <p></p> <p>... 
<span class="hlt">Earthquakes</span> An <span class="hlt">earthquake</span> is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. <span class="hlt">Earthquakes</span> can cause buildings to collapse and cause heavy ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.212..491R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.212..491R"><span><span class="hlt">Earthquake</span> focal mechanism forecasting in Italy for PSHA purposes</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola</p> <p>2018-01-01</p> <p>In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future <span class="hlt">earthquakes</span>. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming at reducing the epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity <span class="hlt">rates</span>, the forecast of the focal mechanism of the next large <span class="hlt">earthquakes</span> as input data. The data set used for this purpose consists of focal mechanisms taken from the latest stress map release for Italy containing 392 well-constrained solutions of events, from 1908 to 2015, with Mw ≥ 4 and depths from 0 down to 40 km. The data set considers polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015, it takes into account only the Centroid Moment Tensor (CMT)-like <span class="hlt">earthquake</span> focal solutions for data homogeneity. 
The forecasting <span class="hlt">model</span> is rooted in the Total Weighted Moment Tensor concept, which weights information from past focal mechanisms, evenly distributed in space, according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the <span class="hlt">model</span> estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large <span class="hlt">earthquakes</span>, the expected moment tensor, and the related maximum horizontal stress orientation. These results will be available for the new PSHA <span class="hlt">model</span> for Italy under development. Finally, to evaluate the reliability of the forecasts, we test them against an independent data set consisting of some of the strongest <span class="hlt">earthquakes</span> (Mw ≥ 3.9) that occurred during 2016 in different Italian tectonic provinces.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.T43F..05G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.T43F..05G"><span>Strong Ground Motion Analysis and Afterslip <span class="hlt">Modeling</span> of <span class="hlt">Earthquakes</span> near Mendocino Triple Junction</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gong, J.; McGuire, J. J.</p> <p>2017-12-01</p> <p>The Mendocino Triple Junction (MTJ) is one of the most seismically active regions in North America, reflecting the ongoing motions among the North America, Pacific, and Gorda plates. <span class="hlt">Earthquakes</span> near the MTJ occur on multiple types of faults because of the boundaries between the three plates and the strong internal deformation within them. Understanding the stress levels that drive <span class="hlt">earthquake</span> rupture on the various types of faults and estimating the locking state of the subduction interface are especially important for <span class="hlt">earthquake</span> hazard assessment. However, owing to the lack of direct offshore seismic and geodetic records, only a few <span class="hlt">earthquakes</span>' rupture processes have been studied in detail, and the locking state of the subducted slab is not well constrained. In this study we first use the second moment inversion method to study the rupture process of the January 28, 2015 Mw 5.7 strike-slip <span class="hlt">earthquake</span> on the Mendocino transform fault, using strong ground motion records from the Cascadia Initiative community experiment as well as onshore seismic networks. We estimate the rupture dimensions to be 6 km by 3 km and the stress drop to be 7 MPa on the transform fault.
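The stress-drop figure quoted above can be sanity-checked against the seismic moment and rupture dimensions. This is only a rough sketch, not the second-moment method used in the abstract: the Hanks-Kanamori moment-magnitude relation is standard, but treating the 6 km × 3 km rupture as a circular (Eshelby) crack of equivalent area is an assumption, so agreement should only be expected in order of magnitude.

```python
import math

def moment_from_mw(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks-Kanamori relation)."""
    return 10 ** (1.5 * mw + 9.1)

def stress_drop_circular(m0, area_m2):
    """Eshelby circular-crack stress drop: delta_sigma = 7*M0 / (16*R^3),
    with R the radius of a circle of the same area as the rupture."""
    r = math.sqrt(area_m2 / math.pi)
    return 7.0 * m0 / (16.0 * r ** 3)

m0 = moment_from_mw(5.7)      # ~4.5e17 N*m
area = 6e3 * 3e3              # 6 km x 3 km rupture treated as a rectangle
print(f"M0 = {m0:.2e} N*m, stress drop ~ {stress_drop_circular(m0, area) / 1e6:.0f} MPa")
```

The circular-crack value comes out around 14 MPa, the same order as the 7 MPa second-moment estimate; the two differ because second moments characterize the slip distribution rather than an idealized crack.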
Next we investigate the frictional locking state of the subduction interface through afterslip simulations based on coseismic rupture <span class="hlt">models</span> of this 2015 <span class="hlt">earthquake</span> and of a Mw 6.5 intraplate earthquake inside the Gorda plate whose slip distribution was inverted from the onshore geodetic network in a previous study. Different depths at which velocity-strengthening frictional properties begin downdip of the locked zone are used to simulate afterslip scenarios and predict the corresponding onshore surface deformation (GPS) movements. Our simulations indicate that the locking depth on the slab surface is at least 14 km, which indicates that the next M8 <span class="hlt">earthquake</span> rupture will likely reach the coastline and that strong shaking should be expected near the coast.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.T43A0661M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.T43A0661M"><span>New insight into the 1556 M8 Huaxian <span class="hlt">earthquake</span> in China</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ma, J.</p> <p>2017-12-01</p> <p>The disastrous 1556 M8 Huaxian <span class="hlt">earthquake</span> in China claimed about 0.8 million lives and has attracted scientists' attention ever since. Although the Huashan front fault and the Weinan platform-front fault at the southern margin of the Weihe basin were responsible for this <span class="hlt">earthquake</span>, little is known about the behavior of these faults. The Pleiades DEM shows evidence of a modern riverbank offset and older geomorphic scarps at the Chishui River site on the Weinan platform-front fault. Here, we built a 3D trench-excavation <span class="hlt">model</span> using structure-from-motion (SfM) processing, drilling profiles, and geomorphological measurements to reconstruct the site's multi-earthquake history.
It turns out that two events occurred on the normal fault with large offsets of 9.4 m and 7.8-8.0 m, respectively; the later one resulted from the Huaxian <span class="hlt">earthquake</span>. We estimate a fault slip <span class="hlt">rate</span> of approximately 1.48-1.75 mm/a. Thus the older <span class="hlt">earthquake</span> produced fault offsets similar to those of the 1556 event, consistent with characteristic <span class="hlt">earthquake</span> behavior. The paleoseismic study demonstrates that the Weinan platform-front fault acts as one of the boundary faults of the Weihe basin and contributes to basin evolution in this region of active faulting.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.T21A0552H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.T21A0552H"><span>Transient Viscoelastic Relaxation and Afterslip Immediately After the 2011 Mw9.0 Tohoku <span class="hlt">Earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hu, Y.; Burgmann, R.; Blewitt, G.; Freymueller, J. T.; Wang, K.</p> <p>2017-12-01</p> <p>It is well known that viscoelastic relaxation of the upper mantle and aseismic afterslip on the fault play important roles in controlling postseismic crustal deformation after giant <span class="hlt">earthquakes</span>. Thanks to modern geodetic observations, postseismic deformation at timescales of months to a few decades has been well studied. However, how the deformation hours to days following the <span class="hlt">earthquake</span> evolves into longer-term processes remains poorly understood. To investigate this problem, we processed high-<span class="hlt">rate</span> 5-minute GPS data from the GEONET network in Japan after the 2011 Mw9.0 Tohoku <span class="hlt">earthquake</span>.
Some GPS stations moved more than 20 cm during the first day after the <span class="hlt">earthquake</span>. Such rapid deformation immediately after the <span class="hlt">earthquake</span> has been lumped into the coseismic offsets in published studies. In this work, we have developed three-dimensional viscoelastic finite element <span class="hlt">models</span> to study the transient viscoelastic relaxation and the evolution of afterslip at timescales from hours to years. In our <span class="hlt">model</span>, viscoelastic relaxation is represented by the bi-viscous Burgers rheology. Steady-state Maxwell viscosities are based on previously published studies. Afterslip on the fault is <span class="hlt">modeled</span> by a narrow weak shear zone. Our preliminary tests indicate that the transient Kelvin viscosity is about two orders of magnitude lower than the steady-state Maxwell viscosity. Afterslip on the fault decays exponentially with time.
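The exponentially decaying afterslip described above corresponds to a cumulative-slip curve that saturates with time. A minimal sketch of that functional form; the amplitude and time constant below are illustrative placeholders, not values from the study.

```python
import math

def afterslip_cm(t_days, u_inf_cm=50.0, tau_days=30.0):
    """Cumulative afterslip (cm) for an exponentially decaying slip rate:
    u(t) = U_inf * (1 - exp(-t / tau)). U_inf and tau are illustrative."""
    return u_inf_cm * (1.0 - math.exp(-t_days / tau_days))

# Slip accumulates quickly at first, then saturates toward U_inf
for t in (1, 10, 100):
    print(f"day {t:3d}: {afterslip_cm(t):.1f} cm")
```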
In the first day after the <span class="hlt">earthquake</span>, the megathrust slipped aseismically by more than 50 cm in places.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1815130S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1815130S"><span>Prospectively Evaluating the Collaboratory for the Study of <span class="hlt">Earthquake</span> Predictability: An Evaluation of the UCERF2 and Updated Five-Year RELM Forecasts</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria</p> <p>2016-04-01</p> <p>The Collaboratory for the Study of <span class="hlt">Earthquake</span> Predictability (CSEP) was developed to rigorously test <span class="hlt">earthquake</span> forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective <span class="hlt">earthquake</span> mainshock forecasts developed by the Regional <span class="hlt">Earthquake</span> Likelihood <span class="hlt">Models</span> (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the corresponding components of the observed seismicity using a set of consistency tests (Schorlemmer et al., 2007; Zechar et al., 2010). In the initial experiment, all but three forecast <span class="hlt">models</span> passed every test at the 95% significance level, with all forecasts displaying log-likelihoods (L-test) and magnitude distributions (M-test) consistent with the observed seismicity.
In the ten-year RELM experiment update, we reevaluate these <span class="hlt">earthquake</span> forecasts over an eight-year period from 2008 to 2016 to determine the consistency of the previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California <span class="hlt">Earthquake</span> Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the <span class="hlt">earthquake</span> <span class="hlt">rate</span> <span class="hlt">model</span> developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) <span class="hlt">models</span> exhibit greater information gain per <span class="hlt">earthquake</span> according to the T- and W-tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of <span class="hlt">earthquakes</span> during the target period.
Though there is no significant difference between the UCERF2 and NSHMP</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001JSeis...5..147D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001JSeis...5..147D"><span>Cyclic migration of weak <span class="hlt">earthquakes</span> between Lunigiana <span class="hlt">earthquake</span> of October 10, 1995 and Reggio Emilia <span class="hlt">earthquake</span> of October 15, 1996 (Northern Italy)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>di Giovambattista, R.; Tyupkin, Yu</p> <p></p> <p>The cyclic migration of weak <span class="hlt">earthquakes</span> (M 2.2) which occurred during the year prior to the October 15, 1996 (M = 4.9) Reggio Emilia <span class="hlt">earthquake</span> is discussed in this paper. The onset of this migration was associated with the occurrence of the October 10, 1995 (M = 4.8) Lunigiana <span class="hlt">earthquake</span> about 90 km southwest from the epicenter of the Reggio Emilia earthquake. At least three series of <span class="hlt">earthquakes</span> migrating from the epicentral area of the Lunigiana <span class="hlt">earthquake</span> in the northeast direction were observed. The migration of <span class="hlt">earthquakes</span> of the first series terminated at a distance of about 30 km from the epicenter of the Reggio Emilia earthquake. The <span class="hlt">earthquake</span> migration of the other two series halted at about 10 km from the Reggio Emilia epicenter. The average <span class="hlt">rate</span> of earthquake migration was about 200-300 km/year, while the recurrence time of the observed cycles varied from 68 to 178 days. Weak earthquakes migrated along the transversal fault zones and sometimes jumped from one fault to another. A correlation between the migrating earthquakes and tidal variations is analysed.
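The quoted migration rate of about 200-300 km/year is consistent with the ~90 km Lunigiana-to-Reggio Emilia separation being traversed within one 68-178-day cycle, as a quick arithmetic check shows (the cycle durations below are illustrative values within that range):

```python
def migration_rate_km_per_yr(distance_km, duration_days):
    """Average migration rate for a swarm traversing distance_km in duration_days."""
    return distance_km / (duration_days / 365.25)

# ~90 km path traversed within roughly one recurrence cycle
for cycle_days in (110, 165):
    print(f"{cycle_days} days -> {migration_rate_km_per_yr(90.0, cycle_days):.0f} km/yr")
```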
We discuss the hypothesis that the analyzed area is in a state of stress approaching the limit of the long-term durability of crustal rocks and that the observed cyclic migration is the result of a combination of a more or less regular evolution of tectonic and tidal variations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2002AGUFM.U62A..09J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2002AGUFM.U62A..09J"><span><span class="hlt">Earthquake</span> Scaling Relations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jordan, T. H.; Boettcher, M.; Richardson, E.</p> <p>2002-12-01</p> <p>Using scaling relations to understand nonlinear geosystems has been an enduring theme of Don Turcotte's research. In particular, his studies of scaling in active fault systems have led to a series of insights about the underlying physics of <span class="hlt">earthquakes</span>. This presentation will review some recent progress in developing scaling relations for several key aspects of <span class="hlt">earthquake</span> behavior, including the inner and outer scales of dynamic fault rupture and the energetics of the rupture process. The proximate observations of mining-induced, friction-controlled events obtained from in-mine seismic networks have revealed a lower seismicity cutoff at a seismic moment M_min near 10^9 N m and a corresponding upper frequency cutoff near 200 Hz, which we interpret in terms of a critical slip distance for frictional drop of about 10^-4 m. Above this cutoff, the apparent stress scales as M^(1/6) up to magnitudes of 4-5, consistent with other near-source studies in this magnitude range (see special session S07, this meeting). Such a relationship suggests a damage <span class="hlt">model</span> in which apparent fracture energy scales with the stress intensity factor at the crack tip.
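The apparent-stress scaling mentioned above (apparent stress growing as the one-sixth power of seismic moment) can be sketched numerically. The reference stress and moment used to anchor the curve below are arbitrary illustrative values, not figures from the study; only the exponent is taken from the abstract.

```python
def apparent_stress(m0, tau_ref=1e5, m0_ref=1e9, exponent=1.0 / 6.0):
    """Apparent stress (Pa) under the scaling tau_a ~ M0^(1/6), anchored at an
    illustrative reference point (tau_ref at m0_ref)."""
    return tau_ref * (m0 / m0_ref) ** exponent

# A 1000x increase in moment raises apparent stress by 1000^(1/6) ~ 3.16x
ratio = apparent_stress(1e12) / apparent_stress(1e9)
print(f"factor over three decades of moment: {ratio:.2f}")
```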
Under the assumption of constant stress drop, this <span class="hlt">model</span> implies an increase in rupture velocity with seismic moment, which successfully predicts the observed variation in corner frequency and maximum particle velocity. Global observations of oceanic transform faults (OTFs) allow us to investigate a situation where the outer scale of <span class="hlt">earthquake</span> size may be controlled by dynamics (as opposed to geologic heterogeneity). The seismicity data imply that the effective area for OTF moment release, A_E, depends on the thermal state of the fault but is otherwise independent of the fault's average slip <span class="hlt">rate</span>; i.e., A_E ~ A_T^β, where A_T is the area above a reference isotherm. The data are consistent with β = 1/2 below an upper cutoff moment M_max that increases with A_T, and yield the interesting scaling relation A_max ~ A_T^(1/2). Taken together, the OTF</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..16.1569J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..16.1569J"><span>Identified EM <span class="hlt">Earthquake</span> Precursors</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jones, Kenneth, II; Saxton, Patrick</p> <p>2014-05-01</p> <p>Many attempts have been made to determine a sound forecasting method for <span class="hlt">earthquakes</span> and, in turn, warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based <span class="hlt">model</span> to an electromagnetic (EM) wave <span class="hlt">model</span>, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry.
To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential <span class="hlt">earthquake</span> forecasting; however, something is still amiss. The problem still resides in what exactly is forecastable and in the direction from which to investigate the EM signal. After a number of custom rock experiments, two hypotheses were formed that could answer the EM wave <span class="hlt">model</span>. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate for creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked in every single test administered. This hypothesis was reviewed for propagating long-wave generation with sufficient amplitude and for the capability of penetrating solid rock. Additionally, fracture spaces, whether air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013.
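The skin-depth argument above can be made quantitative with the classic good-conductor skin-depth formula. The resistivity values below are generic illustrations (a seawater-like conductor versus resistive crystalline rock), not measurements from the study; they show how strongly penetration depth depends on resistivity at a fixed frequency.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """Good-conductor skin depth: delta = sqrt(2 * rho / (omega * mu0))."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * resistivity_ohm_m / (omega * MU0))

# Illustrative media at 10 Hz: conductive seawater vs. resistive rock
for rho in (0.3, 1e4):
    print(f"rho = {rho} ohm*m -> skin depth {skin_depth_m(rho, 10.0):.0f} m")
```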
The antennae have mobility and observations were noted for</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..17.5707F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..17.5707F"><span>Time-Varying Upper-Plate Deformation during the Megathrust Subduction <span class="hlt">Earthquake</span> Cycle</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Furlong, Kevin P.; Govers, Rob; Herman, Matthew</p> <p>2015-04-01</p> <p>Over the past several decades of the WEGENER era, our ability to observe and image the deformational behavior of the upper plate in megathrust subduction zones has dramatically improved. Several intriguing inferences can be made from these observations, including apparent lateral variations in locking along subduction zones, which differ between interseismic and coseismic periods; the significant magnitude of post-<span class="hlt">earthquake</span> deformation (e.g. following the 2014 Iquique, Chile <span class="hlt">earthquake</span>, observed on-land GPS post-EQ displacements are comparable to the co-seismic displacements); and incompatibilities between <span class="hlt">rates</span> of slip-deficit accumulation and the resulting <span class="hlt">earthquake</span> co-seismic slip (e.g. pre-Tohoku, inferred <span class="hlt">rates</span> of slip-deficit accumulation on the megathrust significantly exceed slip amounts for the ~1000-year recurrence). <span class="hlt">Modeling</span> capabilities have grown from fitting simple elastic accumulation/rebound curves to sparse data to having spatially dense continuous time series that allow us to infer details of plate boundary coupling, rheology-driven transient deformation, and partitioning among inter-<span class="hlt">earthquake</span> and co-seismic displacements.
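The slip-deficit incompatibility cited above is simple arithmetic: coupling fraction times plate convergence rate times inter-event time. A sketch with illustrative values (the coupling, convergence rate, and recurrence interval below are assumptions chosen for illustration, not figures from the abstract):

```python
def slip_deficit_m(coupling, convergence_mm_per_yr, years):
    """Slip deficit (m) accumulated on a locked patch: coupling fraction
    times convergence rate times the inter-event time."""
    return coupling * convergence_mm_per_yr * 1e-3 * years

# Full coupling at ~80 mm/yr over a ~1000-yr recurrence interval
deficit = slip_deficit_m(1.0, 80.0, 1000.0)
print(f"accumulated deficit ~{deficit:.0f} m")
```

An 80 m deficit substantially exceeds the tens of meters of coseismic slip typically inferred for a single great megathrust event, which is the incompatibility the abstract highlights.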
In this research we use 2D numerical <span class="hlt">modeling</span> to explore the time-varying deformational behavior of subduction zones during the <span class="hlt">earthquake</span> cycle, with an emphasis on upper-plate and plate-interface behavior. We have used a simplified <span class="hlt">model</span> configuration to isolate fundamental processes associated with the <span class="hlt">earthquake</span> cycle, rather than attempting to fit the details of specific megathrust zones. Using a simple subduction geometry but realistic rheologic layering, we are evaluating the time-varying displacement and stress response through a multi-<span class="hlt">earthquake</span> cycle history. We use a simple <span class="hlt">model</span> configuration: an elastic subducting slab, an elastic upper plate (shallower than 40 km), and a visco-elastic upper plate (deeper than 40 km). This configuration leads to an upper plate that acts as a deforming elastic beam at inter-<span class="hlt">earthquake</span></p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014GeoJI.199.1655X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014GeoJI.199.1655X"><span><span class="hlt">Earthquake</span>-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xu, Changyi; Sun, Wenke</p> <p>2014-12-01</p> <p>In this paper, we propose an approach to compute the Earth's coseismic volume change based on a spherical-Earth elastic dislocation theory. We present a general expression for the Earth's volume change for three typical dislocations: shear, tensile, and explosion sources.
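For a volume change that is small compared with the Earth's volume, the implied mean-radius change follows from dV = 4*pi*R^2*dR. A sketch working backwards from the ~0.011 mm/yr expansion rate quoted in the abstract to the annual volume change it implies (the conversion itself is just geometry; no values beyond the quoted rate are taken from the study):

```python
import math

R_EARTH_M = 6.371e6  # mean Earth radius (m)

def radius_change_mm(delta_v_m3, r_m=R_EARTH_M):
    """Mean-radius change (mm) for a small volume change: dR = dV / (4*pi*R^2)."""
    return delta_v_m3 / (4.0 * math.pi * r_m ** 2) * 1e3

# Volume change per year corresponding to the quoted ~0.011 mm/yr rate
dv = 0.011e-3 * 4.0 * math.pi * R_EARTH_M ** 2   # m^3 per year
print(f"{dv:.2e} m^3/yr -> {radius_change_mm(dv):.3f} mm/yr")
```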
We conduct a case study for the 2004 Sumatra <span class="hlt">earthquake</span> (Mw9.3), the 2010 Chile <span class="hlt">earthquake</span> (Mw8.8), the 2011 Tohoku-Oki <span class="hlt">earthquake</span> (Mw9.0) and the 2013 Okhotsk Sea <span class="hlt">earthquake</span> (Mw8.3). The results show that mega-thrust <span class="hlt">earthquakes</span> make the Earth expand and <span class="hlt">earthquakes</span> along a normal fault make the Earth contract. We compare the volume changes computed for finite fault <span class="hlt">models</span> and for a point source of the 2011 Tohoku-Oki <span class="hlt">earthquake</span> (Mw9.0). The large difference between the results indicates that the coseismic changes in the Earth's volume (or the mean radius) are strongly dependent on the <span class="hlt">earthquakes</span>' focal mechanism, especially the depth and the dip angle. We then estimate the cumulative volume changes caused by historical <span class="hlt">earthquakes</span> (Mw ≥ 7.0) since 1960, and obtain an expansion <span class="hlt">rate</span> of the Earth's mean radius of about 0.011 mm yr^-1.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29593237','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29593237"><span>Long-range dependence in <span class="hlt">earthquake</span>-moment release and implications for <span class="hlt">earthquake</span> occurrence probability.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco</p> <p>2018-03-28</p> <p>Since the beginning of the 1980s, when Mandelbrot observed that <span class="hlt">earthquakes</span> occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the <span
class="hlt">earthquake</span> process. Interpreting seismicity as a self-similar process is undoubtedly convenient, as it bypasses the physical complexities of the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent, universal feature of the seismic process. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled-range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for <span class="hlt">earthquake</span> forecasting. We have therefore developed a probability <span class="hlt">model</span> for <span class="hlt">earthquake</span> occurrence that allows for long-range dependence in the seismic process. Unlike in the Poisson <span class="hlt">model</span>, dependent events are allowed. This <span class="hlt">model</span> can be easily transferred to other disciplines that deal with self-similar processes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMNH43A1826A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMNH43A1826A"><span>Field Investigations and a Tsunami <span class="hlt">Modeling</span> for the 1766 Marmara Sea <span class="hlt">Earthquake</span>, Turkey</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aykurt Vardar, H.; Altinok, Y.; Alpar, B.; Unlu, S.; Yalciner, A. C.</p> <p>2016-12-01</p> <p>Turkey is located in one of the world's most hazardous <span class="hlt">earthquake</span> zones. The northern branch of the North Anatolian fault beneath the Sea of Marmara, where the population is most concentrated, has been the most active fault branch since at least the late Pliocene.
The Sea of Marmara region has been affected by many large tsunamigenic <span class="hlt">earthquakes</span>; the most destructive ones are the 549, 553, 557, 740, 989, 1332, 1343, 1509, 1766, 1894, 1912 and 1999 events. In order to understand and determine the tsunami potential and possible tsunami effects along the coasts of this inland sea, detailed documentary, geophysical, and numerical <span class="hlt">modelling</span> studies are needed on past <span class="hlt">earthquakes</span> and their associated tsunamis, whose effects are presently unknown. On the northern coast of the Sea of Marmara region, the Kucukcekmece Lagoon has a high potential to trap and preserve tsunami deposits. Within the scope of this study, the lithological content, composition, and sources of organic matter in the lagoon's bottom sediments were studied along a 4.63 m-long piston core recovered from the SE margin of the lagoon. The sedimentary composition and possible sources of the organic matter along the core were analysed, and the results were correlated with historical events on the basis of dating results. Finally, a tsunami scenario was tested for the May 22nd 1766 Marmara Sea <span class="hlt">Earthquake</span> using the widely used tsunami simulation <span class="hlt">model</span> NAMIDANCE. The results show that the candidate tsunami deposits at depths of 180-200 cm below the lagoon's bottom were related to the 1766 (May) <span class="hlt">earthquake</span>.
This work was supported by the Scientific Research Projects Coordination Unit of Istanbul University (Project 6384) and by the EU project TRANSFER for coring.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S11B0578G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S11B0578G"><span>Conditional Probabilities of Large <span class="hlt">Earthquake</span> Sequences in California from the Physics-based Rupture Simulator RSQSim</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.</p> <p>2017-12-01</p> <p>Within the SCEC Collaboratory for Interseismic Simulation and <span class="hlt">Modeling</span> (CISM), we are developing physics-based forecasting <span class="hlt">models</span> for <span class="hlt">earthquake</span> ruptures in California. We employ the 3D boundary element code RSQSim (<span class="hlt">Rate</span>-State <span class="hlt">Earthquake</span> Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code <span class="hlt">models</span> rupture nucleation by <span class="hlt">rate</span>- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California <span class="hlt">Earthquake</span> Rupture Forecast Version 3 (UCERF3) fault and deformation <span class="hlt">models</span> are used to specify the fault geometry and long-term slip <span class="hlt">rates</span>. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. 
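As noted above, RSQSim's nucleation model is rate- and state-dependent friction. A minimal sketch of the standard formulation (the rate-state friction coefficient plus one explicit Euler step of the Dieterich aging law); all parameter values are illustrative textbook-style choices, not those used in the simulator:

```python
import math

def friction(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-6, dc=1e-2):
    """Rate-and-state friction coefficient:
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc). Parameters are illustrative;
    b > a gives velocity-weakening (unstable) steady-state behavior."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def evolve_state(theta, v, dt, dc=1e-2):
    """One explicit Euler step of the aging law: dtheta/dt = 1 - v*theta/dc."""
    return theta + dt * (1.0 - v * theta / dc)

# At steady state (theta = dc / v) with v = v0, friction equals mu0
theta_ss = 1e-2 / 1e-6
print(round(friction(1e-6, theta_ss), 3))
```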
We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., <span class="hlt">rate</span>- and state-frictional parameters), as well as the effects of <span class="hlt">model</span>-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence <span class="hlt">rates</span>). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.S31B2742G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.S31B2742G"><span>Source mechanism inversion and ground motion <span class="hlt">modeling</span> of induced <span class="hlt">earthquakes</span> in Kuwait - A Bayesian approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.</p> <p>2016-12-01</p> <p>The increasing seismic activity in the regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced <span class="hlt">earthquakes</span> are of great importance for understanding the physics of the seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields.
The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local <span class="hlt">earthquakes</span>. Since 1997, the KNSN has recorded more than 1000 <span class="hlt">earthquakes</span> (Mw < 5). In 2015, two local <span class="hlt">earthquakes</span> - Mw4.5 on 03/21/2015 and Mw4.1 on 08/18/2015 - were recorded by both the Incorporated Research Institutions for Seismology (IRIS) and the KNSN, and were widely felt by people in Kuwait. These <span class="hlt">earthquakes</span> happen repeatedly in the same locations close to the oil/gas fields in Kuwait (see the uploaded image). The <span class="hlt">earthquakes</span> are generally small (Mw < 5) and shallow, with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the locations and source mechanisms of these local <span class="hlt">earthquakes</span>, with their uncertainties, using a Bayesian inversion method. The triggering stress of these <span class="hlt">earthquakes</span> was calculated based on the source mechanism results. In addition, we <span class="hlt">modeled</span> the ground motion in Kuwait due to these local <span class="hlt">earthquakes</span>. Our results show that these local <span class="hlt">earthquakes</span> most likely occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths less than about 4 km.
As a result, in Kuwait, where oil fields are close to populated areas, these induced <span class="hlt">earthquakes</span> could produce ground accelerations high</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/tm/12b1/','USGSPUBS'); return false;" href="https://pubs.usgs.gov/tm/12b1/"><span>SLAMMER: Seismic LAndslide Movement <span class="hlt">Modeled</span> using <span class="hlt">Earthquake</span> Records</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.</p> <p>2013-01-01</p> <p>This program facilitates sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during <span class="hlt">earthquakes</span>. The program allows selection from among more than 2,100 strong-motion records from 28 <span class="hlt">earthquakes</span> and allows users to add their own records to the collection. Any number of <span class="hlt">earthquake</span> records can be selected using a search interface that filters records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. 
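The rigid-block (Newmark) method named above can be sketched in a few lines: the block accumulates relative velocity whenever ground acceleration exceeds the slope's critical (yield) acceleration, and integrating that velocity gives the permanent displacement. This is a simplified one-directional sketch with a synthetic record, not SLAMMER's implementation:

```python
import math

def newmark_displacement(accel, dt, a_crit):
    """One-directional rigid-block (Newmark) sliding displacement.

    accel  -- ground acceleration time series, m/s^2
    dt     -- time step, s
    a_crit -- critical (yield) acceleration of the slope, m/s^2
    Returns permanent downslope displacement in metres.
    """
    v = 0.0  # relative sliding velocity, m/s
    d = 0.0  # accumulated permanent displacement, m
    for a in accel:
        if v > 0.0 or a > a_crit:
            v += (a - a_crit) * dt   # slide against the yield resistance
            v = max(v, 0.0)          # sliding stops; no reversal in this sketch
            d += v * dt
    return d

# Illustrative synthetic record: 3 s of a 2 Hz sinusoid peaking at 3 m/s^2.
dt = 0.005
accel = [3.0 * math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(600)]
d = newmark_displacement(accel, dt, a_crit=1.0)
print(f"permanent displacement: {100 * d:.1f} cm")
```

Displacement only accumulates during the acceleration pulses that exceed `a_crit`, which is why the choice of critical acceleration dominates the result.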
Simplified methods for conducting each type of analysis are also included.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70073331','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70073331"><span>Local tsunamis and <span class="hlt">earthquake</span> source parameters</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Geist, Eric L.; Dmowska, Renata; Saltzman, Barry</p> <p>1999-01-01</p> <p>This chapter establishes the relationship among <span class="hlt">earthquake</span> source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the <span class="hlt">earthquake</span> rupture is <span class="hlt">modeled</span> using elastic dislocation theory, for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Nonlinear long-wave theory governs the propagation and run-up of tsunamis. Because the physics that describes tsunamis from generation through run-up is complex, a parametric study is devised to examine the relative importance of individual <span class="hlt">earthquake</span> source parameters for local tsunamis. Analyses of the source parameters of various tsunamigenic <span class="hlt">earthquakes</span> have indicated that the details of the <span class="hlt">earthquake</span> source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. 
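The long-wave theory mentioned above has a simple kinematic consequence that is easy to sketch: the phase speed depends only on water depth, c = sqrt(g h). The bathymetry profile below is an illustrative assumption, not data from the chapter:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def long_wave_speed(depth):
    """Shallow-water (long-wave) phase speed c = sqrt(g * h), in m/s."""
    return math.sqrt(G * depth)

def crossing_time(depths, segment_length):
    """Travel time across a piecewise-constant bathymetry profile, in s."""
    return sum(segment_length / long_wave_speed(h) for h in depths)

# Illustrative profile: 4000 m open ocean shoaling to a 100 m shelf,
# sampled as 50 km segments (assumed values).
profile = [4000.0, 3000.0, 2000.0, 1000.0, 500.0, 100.0]
t = crossing_time(profile, 50_000.0)
print(f"deep-water speed: {long_wave_speed(4000.0):.0f} m/s")
print(f"time to cross 300 km: {t / 60:.1f} min")
```

The sharp slowdown in shallow water is what compresses and amplifies the wave near shore, which is why accurate bathymetry matters for run-up estimates.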
The accuracy of determining the run-up on shore is directly dependent on the source parameters of the <span class="hlt">earthquake</span>, which provide the initial conditions used for the hydrodynamic <span class="hlt">models</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..18.1900D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..18.1900D"><span>The costs and benefits of reconstruction options in Nepal using the CEDIM FDA <span class="hlt">modelled</span> and empirical analysis following the 2015 <span class="hlt">earthquake</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Daniell, James; Schaefer, Andreas; Wenzel, Friedemann; Khazai, Bijan; Girard, Trevor; Kunz-Plapp, Tina; Kunz, Michael; Muehr, Bernhard</p> <p>2016-04-01</p> <p>Over the days following the 2015 Nepal <span class="hlt">earthquake</span>, rapid estimates of the death toll, economic loss, and reconstruction cost were undertaken by our research group in conjunction with the World Bank. This <span class="hlt">modelling</span> relied on historic losses from other Nepal <span class="hlt">earthquakes</span> as well as detailed socioeconomic data and <span class="hlt">earthquake</span> loss information via CATDAT. The <span class="hlt">modelled</span> results were very close to the final figures for the 2015 <span class="hlt">earthquake</span>: around 9000 deaths and a direct building loss of ca. 3 billion (a). The process undertaken to produce these loss estimates is described, along with its potential for rapidly analysing reconstruction costs of future Nepal <span class="hlt">earthquakes</span> post-event. 
The reconstruction cost and death toll <span class="hlt">model</span> is then used as the base <span class="hlt">model</span> for examining the effect of spending money on <span class="hlt">earthquake</span> retrofitting of buildings versus complete reconstruction of buildings. This is undertaken for future events using empirical statistics from past events along with further analytical <span class="hlt">modelling</span>. The effects of investment vs. the timing of a future event are also explored. Preliminary low-cost retrofitting options (b), along the lines of other country studies (ca. 100), are examined versus the option of different building typologies in Nepal, as well as investment in various sectors of construction. The effect of public vs. private capital expenditure post-<span class="hlt">earthquake</span> is also explored as part of this analysis, as is spending on other components outside of <span class="hlt">earthquakes</span>. a) http://www.scientificamerican.com/article/experts-calculate-new-loss-predictions-for-nepal-quake/ b) http://www.aees.org.au/wp-content/uploads/2015/06/23-Daniell.pdf</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CG....111..244G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CG....111..244G"><span>Determining on-fault <span class="hlt">earthquake</span> magnitude distributions from integer programming</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Geist, Eric L.; Parsons, Tom</p> <p>2018-02-01</p> <p><span class="hlt">Earthquake</span> magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip <span class="hlt">rates</span> using binary integer programming. 
A synthetic <span class="hlt">earthquake</span> catalog (i.e., a list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each <span class="hlt">earthquake</span> in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip <span class="hlt">rate</span> for each fault, where slip for each <span class="hlt">earthquake</span> is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip <span class="hlt">rates</span> provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an <span class="hlt">earthquake</span> can only be located on a fault if it is long enough to contain that <span class="hlt">earthquake</span>. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic <span class="hlt">earthquake</span> catalog is created and faults with slip rates ≥3 mm/yr are considered, resulting in >10<sup>6</sup> variables. 
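A heavily scaled-down sketch can make the formulation concrete: sample a synthetic Gutenberg-Richter catalog by inverse-transform sampling, convert each magnitude to per-event slip via u = M0 / (mu * A), and choose a binary assignment of events to faults that minimizes the misfit to target slip rates. Here brute-force enumeration stands in for the mixed-integer solver, and the fault geometries and target rates are illustrative assumptions, not the paper's California inputs:

```python
import itertools, math, random

MU = 3.0e10  # assumed crustal shear modulus, Pa

def sample_gr(n, m_min, m_max, b=1.0, seed=42):
    """Inverse-transform sampling of a truncated Gutenberg-Richter law."""
    rng = random.Random(seed)
    c = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return [m_min - math.log10(1.0 - rng.random() * c) / b for _ in range(n)]

def slip_per_event(m_w, area):
    """Average slip u = M0 / (mu * A), with log10(M0) = 1.5 Mw + 9.05 (N*m)."""
    return 10.0 ** (1.5 * m_w + 9.05) / (MU * area)

# Two hypothetical faults: (rupture area in m^2, target slip rate in mm/yr).
faults = [(40e3 * 15e3, 3.0), (25e3 * 12e3, 1.0)]
years = 4000.0
catalog = sample_gr(12, m_min=6.5, m_max=7.8)

# Enumerate the binary decision vector (2^12 assignments) and keep the one
# minimizing the total slip-rate misfit across faults.
best, best_misfit = None, float("inf")
for assign in itertools.product((0, 1), repeat=len(catalog)):
    misfit = 0.0
    for f, (area, target) in enumerate(faults):
        slip_mm = sum(1000.0 * slip_per_event(m, area)
                      for m, a in zip(catalog, assign) if a == f)
        misfit += abs(slip_mm / years - target)
    if misfit < best_misfit:
        best, best_misfit = assign, misfit
print(f"best misfit: {best_misfit:.2f} mm/yr")
```

With >10^6 variables, as in the California case study, exhaustive enumeration is hopeless, which is why the authors turn to a general mixed-integer programming solver.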
The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFM.S11C..06G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFM.S11C..06G"><span>Active Faults and Seismic Sources of the Middle East Region: <span class="hlt">Earthquake</span> <span class="hlt">Model</span> of the Middle East (EMME) Project</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gulen, L.; EMME WP2 Team*</p> <p>2011-12-01</p> <p>The <span class="hlt">Earthquake</span> <span class="hlt">Model</span> of the Middle East (EMME) Project is a regional project of the GEM (Global <span class="hlt">Earthquake</span> <span class="hlt">Model</span>) project (http://www.emme-gem.org/). The EMME project covers Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, with Turkey forming a bridge between the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major <span class="hlt">earthquakes</span> have occurred in this region over the years causing casualties in the millions. The EMME project consists of three main modules: hazard, risk, and socio-economic modules. The EMME project uses a PSHA approach for <span class="hlt">earthquake</span> hazard, and the existing source <span class="hlt">models</span> have been revised or modified by incorporating newly acquired data. 
What most distinguishes the EMME project from previous ones is its dynamic character. This is accomplished through a flexible and scalable database that permits continuous updating, refinement, and analysis. An up-to-date <span class="hlt">earthquake</span> catalog of the Middle East region has been prepared and declustered by the WP1 team. The EMME WP2 team has prepared a digital active fault map of the Middle East region in ArcGIS format. We have constructed a database of fault parameters for active faults capable of generating <span class="hlt">earthquakes</span> above a threshold magnitude of Mw 5.5. The EMME project database includes information on the geometry and <span class="hlt">rates</span> of movement of faults in a "Fault Section Database", which contains 36 entries for each fault section. The "Fault Section" concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far 6,991 Fault Sections have been defined and 83,402 km of faults are fully parameterized in the Middle East region. 
A separate "Paleo-Sites Database" includes information on the timing and amounts of fault</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PApGe.174.4313C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PApGe.174.4313C"><span>Probabilistic <span class="hlt">Models</span> For <span class="hlt">Earthquakes</span> With Large Return Periods In Himalaya Region</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chaudhary, Chhavi; Sharma, Mukat Lal</p> <p>2017-12-01</p> <p>Determination of the frequency of large <span class="hlt">earthquakes</span> is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period, low-probability events are not easily captured by classical distributions. With a small catalogue, these larger events generally follow a different distribution function from the smaller and intermediate events. It is thus especially important to use statistical methods that analyse the range of extreme values, or the tail of the distribution, as closely as possible, in addition to the main distribution. The generalised Pareto distribution family is widely used for <span class="hlt">modelling</span> events that exceed a specified threshold value. The Pareto, Truncated Pareto, and Tapered Pareto are special cases of the generalised Pareto family. In this work, the probability of <span class="hlt">earthquake</span> occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study, the Himalaya, one of the most seismically active zones in the world, whose orogeny gives rise to large <span class="hlt">earthquakes</span>, has been considered. 
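The three tail models compared in such studies can be sketched through their survival functions for seismic moment. The threshold and corner moment below are illustrative assumptions, not the study's fitted values:

```python
import math

def pareto_sf(x, x_min, beta):
    """Pareto survival function: P(X > x) = (x / x_min)^(-beta)."""
    return (x / x_min) ** (-beta)

def truncated_pareto_sf(x, x_min, x_max, beta):
    """Pareto sharply truncated at an upper bound x_max."""
    num = x ** (-beta) - x_max ** (-beta)
    den = x_min ** (-beta) - x_max ** (-beta)
    return max(num / den, 0.0)

def tapered_pareto_sf(x, x_min, beta, x_corner):
    """Tapered Pareto: power law with an exponential taper near x_corner."""
    return (x / x_min) ** (-beta) * math.exp((x_min - x) / x_corner)

# Tail probabilities for seismic moment (N*m); beta = 2/3 corresponds to a
# Gutenberg-Richter b-value of 1.
X_MIN = 10.0 ** 17.0   # assumed threshold moment, ~Mw 5.3
X_C = 10.0 ** 20.0     # assumed corner moment, ~Mw 7.3
for mw in (6.0, 7.0, 8.0):
    m0 = 10.0 ** (1.5 * mw + 9.05)
    print(f"Mw {mw}: Pareto {pareto_sf(m0, X_MIN, 2/3):.2e}  "
          f"tapered {tapered_pareto_sf(m0, X_MIN, 2/3, X_C):.2e}")
```

The pure Pareto keeps a heavy tail indefinitely, the truncated form cuts it off abruptly at x_max, and the tapered form rolls it off smoothly beyond the corner moment, which is why the taper often fits large-event seismicity best.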
The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. Estimated probabilities of occurrence of <span class="hlt">earthquakes</span> have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution describes seismicity in these seismic source zones better than the other distributions considered in the present study.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>