Statistical tests of simple earthquake cycle models
NASA Astrophysics Data System (ADS)
DeVries, Phoebe M. R.; Evans, Eileen L.
2016-12-01
A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 1019 Pa s and ηM > 4.6 × 1020 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
Statistical tests of simple earthquake cycle models
Devries, Phoebe M. R.; Evans, Eileen
2016-01-01
A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM <~ 4.0 × 1019 Pa s and ηM >~ 4.6 × 1020 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
Earthquake cycles and physical modeling of the process leading up to a large earthquake
NASA Astrophysics Data System (ADS)
Ohnaka, Mitiyasu
2004-08-01
A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate-and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.
Viscoelastic-coupling model for the earthquake cycle driven from below
Savage, J.C.
2000-01-01
In a linear system the earthquake cycle can be represented as the sum of a solution which reproduces the earthquake cycle itself (viscoelastic-coupling model) and a solution that provides the driving force. We consider two cases, one in which the earthquake cycle is driven by stresses transmitted along the schizosphere and a second in which the cycle is driven from below by stresses transmitted along the upper mantle (i.e., the schizosphere and upper mantle, respectively, act as stress guides in the lithosphere). In both cases the driving stress is attributed to steady motion of the stress guide, and the upper crust is assumed to be elastic. The surface deformation that accumulates during the interseismic interval depends solely upon the earthquake-cycle solution (viscoelastic-coupling model) not upon the driving source solution. Thus geodetic observations of interseismic deformation are insensitive to the source of the driving forces in a linear system. In particular, the suggestion of Bourne et al. [1998] that the deformation that accumulates across a transform fault system in the interseismic interval is a replica of the deformation that accumulates in the upper mantle during the same interval does not appear to be correct for linear systems.
NASA Astrophysics Data System (ADS)
Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.
2015-12-01
Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined in a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration, rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed by deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, was deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity
Finite element models of earthquake cycles in mature strike-slip fault zones
NASA Astrophysics Data System (ADS)
Lynch, John Charles
The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was bench-marked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, there are two main questions posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great earthquake producing faults separated by a freely-slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 1018 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a
NASA Astrophysics Data System (ADS)
Sobolev, Stephan V.; Muldashev, Iskander A.
2017-12-01
Subduction is substantially multiscale process where the stresses are built by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by following multiple relaxation processes. Here we develop a cross-scale thermomechanical model aimed to simulate the subduction process from 1 min to million years' time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and by using an adaptive time step algorithm, recreates the deformation process as observed naturally during the seismic cycle and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during the great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench and not by the afterslip localized at the fault as is currently believed. Our model replicates centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and appearance of the largest earthquakes, like the Great Chile 1960 Earthquake.
Geodetic Imaging of the Earthquake Cycle
NASA Astrophysics Data System (ADS)
Tong, Xiaopeng
In this dissertation I used Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) to recover crustal deformation caused by earthquake cycle processes. The studied areas span three different types of tectonic boundaries: a continental thrust earthquake (M7.9 Wenchuan, China) at the eastern margin of the Tibet plateau, a mega-thrust earthquake (M8.8 Maule, Chile) at the Chile subduction zone, and the interseismic deformation of the San Andreas Fault System (SAFS). A new L-band radar onboard a Japanese satellite ALOS allows us to image high-resolution surface deformation in vegetated areas, which is not possible with older C-band radar systems. In particular, both the Wenchuan and Maule InSAR analyses involved L-band ScanSAR interferometry which had not been attempted before. I integrated a large InSAR dataset with dense GPS networks over the entire SAFS. The integration approach features combining the long-wavelength deformation from GPS with the short-wavelength deformation from InSAR through a physical model. The recovered fine-scale surface deformation leads us to better understand the underlying earthquake cycle processes. The geodetic slip inversion reveals that the fault slip of the Wenchuan earthquake is maximum near the surface and decreases with depth. The coseismic slip model of the Maule earthquake constrains the down-dip extent of the fault slip to be at 45 km depth, similar to the Moho depth. I inverted for the slip rate on 51 major faults of the SAFS using Green's functions for a 3-dimensional earthquake cycle model that includes kinematically prescribed slip events for the past earthquakes since the year 1000. A 60 km thick plate model with effective viscosity of 10 19 Pa · s is preferred based on the geodetic and geological observations. The slip rates recovered from the plate models are compared to the half-space model. The InSAR observation reveals that the creeping section of the SAFS is partially locked. This high
Viscoelastic shear zone model of a strike-slip earthquake cycle
Pollitz, F.F.
2001-01-01
I examine the behavior of a two-dimensional (2-D) strike-slip fault system embedded in a 1-D elastic layer (schizosphere) overlying a uniform viscoelastic half-space (plastosphere) and within the boundaries of a finite width shear zone. The viscoelastic coupling model of Savage and Prescott [1978] considers the viscoelastic response of this system, in the absence of the shear zone boundaries, to an earthquake occurring within the upper elastic layer, steady slip beneath a prescribed depth, and the superposition of the responses of multiple earthquakes with characteristic slip occurring at regular intervals. So formulated, the viscoelastic coupling model predicts that sufficiently long after initiation of the system, (1) average fault-parallel velocity at any point is the average slip rate of that side of the fault and (2) far-field velocities equal the same constant rate. Because of the sensitivity to the mechanical properties of the schizosphere-plastosphere system (i.e., elastic layer thickness, plastosphere viscosity), this model has been used to infer such properties from measurements of interseismic velocity. Such inferences exploit the predicted behavior at a known time within the earthquake cycle. By modifying the viscoelastic coupling model to satisfy the additional constraint that the absolute velocity at prescribed shear zone boundaries is constant, I find that even though the time-averaged behavior remains the same, the spatiotemporal pattern of surface deformation (particularly its temporal variation within an earthquake cycle) is markedly different from that predicted by the conventional viscoelastic coupling model. These differences are magnified as plastosphere viscosity is reduced or as the recurrence interval of periodic earthquakes is lengthened. Application to the interseismic velocity field along the Mojave section of the San Andreas fault suggests that the region behaves mechanically like a ???600-km-wide shear zone accommodating 50 mm/yr fault
Models of recurrent strike-slip earthquake cycles and the state of crustal stress
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.
1991-01-01
Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions and in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al, 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We followed the 2-D spatial correlated Dc distributions based on Hillers et. al. (2007) that associates Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale. Large variations of Dc represents immature faults and lower variations of Dc represents mature faults. Moreover we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the
Crustal deformation in great California earthquake cycles
NASA Technical Reports Server (NTRS)
Li, Victor C.; Rice, James R.
1986-01-01
Periodic crustal deformation associated with repeated strike slip earthquakes is computed for the following model: A depth L (less than or similiar to H) extending downward from the Earth's surface at a transform boundary between uniform elastic lithospheric plates of thickness H is locked between earthquakes. It slips an amount consistent with remote plate velocity V sub pl after each lapse of earthquake cycle time T sub cy. Lower portions of the fault zone at the boundary slip continuously so as to maintain constant resistive shear stress. The plates are coupled at their base to a Maxwellian viscoelastic asthenosphere through which steady deep seated mantle motions, compatible with plate velocity, are transmitted to the surface plates. The coupling is described approximately through a generalized Elsasser model. It is argued that the model gives a more realistic physical description of tectonic loading, including the time dependence of deep slip and crustal stress build up throughout the earthquake cycle, than do simpler kinematic models in which loading is represented as imposed uniform dislocation slip on the fault below the locked zone.
Crustal deformation in Great California Earthquake cycles
NASA Technical Reports Server (NTRS)
Li, Victor C.; Rice, James R.
1987-01-01
A model in which coupling is described approximately through a generalized Elsasser model is proposed for computation of the periodic crustal deformation associated with repeated strike-slip earthquakes. The model is found to provide a more realistic physical description of tectonic loading than do simpler kinematic models. Parameters are chosen to model the 1857 and 1906 San Andreas ruptures, and predictions are found to be consistent with data on variations of contemporary surface strain and displacement rates as a function of distance from the 1857 and 1906 rupture traces. Results indicate that the asthenosphere appropriate to describe crustal deformation on the earthquake cycle time scale lies in the lower crust and perhaps the crust-mantle transition zone.
NASA Astrophysics Data System (ADS)
Hearn, E. H.
2013-12-01
Geodetic surface velocity data show that after an energetic but brief phase of postseismic deformation, surface deformation around most major strike-slip faults tends to be localized and stationary, and can be modeled with a buried elastic dislocation creeping at or near the Holocene slip rate. Earthquake-cycle models incorporating an elastic layer over a Maxwell viscoelastic halfspace cannot explain this, even when the earliest postseismic deformation is ignored or modeled (e.g., as frictional afterslip). Models with heterogeneously distributed low-viscosity materials or power-law rheologies perform better, but to explain all phases of earthquake-cycle deformation, Burgers viscoelastic materials with extreme differences between their Maxwell and Kelvin element viscosities seem to be required. I present a suite of earthquake-cycle models to show that postseismic and interseismic deformation may be reconciled for a range of lithosphere architectures and rheologies if finite rupture length is taken into account. These models incorporate high-viscosity lithosphere optionally cut by a viscous shear zone, and a lower-viscosity mantle asthenosphere (all with a range of viscoelastic rheologies and parameters). Characteristic earthquakes with Mw = 7.0 - 7.9 are investigated, with interseismic intervals adjusted to maintain the same slip rate (10, 20 or 40 mm/yr). I find that a high-viscosity lower crust/uppermost mantle (or a high viscosity per unit width viscous shear zone at these depths) is required for localized and stationary interseismic deformation. For Mw = 7.9 characteristic earthquakes, the shear zone viscosity per unit width in the lower crust and uppermost mantle must exceed about 10^16 Pa s /m. For a layered viscoelastic model the lower crust and uppermost mantle effective viscosity must exceed about 10^20 Pa s. The range of admissible shear zone and lower lithosphere rheologies broadens considerably for faults producing more frequent but smaller
The Bay Area Earthquake Cycle:A Paleoseismic Perspective
NASA Astrophysics Data System (ADS)
Schwartz, D. P.; Seitz, G.; Lienkaemper, J. J.; Dawson, T. E.; Hecker, S.; William, L.; Kelson, K.
2001-12-01
/SH/RC/NC/(SG?) sequence likely occurred subsequent to the penultimate San Andreas event. Although offset data, which reflect M, are limited, observations indicate that the penultimate SA event ruptured essentially the same fault length as 1906 (Schwartz et al, 1998). In addition, measured point-specific slip (RC, 1.8-2.3m; SG, 3.5-5m) and modeled average slip (SH, 1.9m) for the MREs indicate large magnitude earthquakes on the other regional faults. The major observation from the new paleoseismic data is that during a maximum interval of 176 years (1600 to 1776), significant seismic moment was released in the SFBR by large (M*6.7) surface-faulting earthquakes on the SA, RC, SH, NH, NC and possibly SG faults. This places an upper limit on the duration of San Andreas interaction effects (stress shadow) on the regional fault system. In fact, the interval between the penultimate San Andreas rupture and large earthquakes on other SFBR faults could have been considerably shorter. We are now 95 years out from the 1906 and the SFBR Working Group 99 probability time window extends to 2030, an interval of 124 years. The paleoearthquake data allow that within this amount of time following the penultimate San Andreas event one or more large earthquakes may have occurred on Bay Area faults. Longer paleoearthquake chronologies with more precise event dating in the SFBR and other locales provide the exciting potential for defining regional earthquake cycles and modeling long-term fault interactions.
Dynamics of folding: Impact of fault bend folds on earthquake cycles
NASA Astrophysics Data System (ADS)
Sathiakumar, S.; Barbot, S.; Hubbard, J.
2017-12-01
Earthquakes in subduction zones and subaerial convergent margins are some of the largest in the world. So far, forecasts of future earthquakes have primarily relied on assessing past earthquakes to look for seismic gaps and slip deficits. However, the roles of fault geometry and off-fault plasticity are typically overlooked. We use structural geology (fault-bend folding theory) to inform fault modeling in order to better understand how deformation is accommodated on the geological time scale and through the earthquake cycle. Fault bends in megathrusts, like those proposed for the Nepal Himalaya, will induce folding of the upper plate. This introduces changes in the slip rate on different fault segments, and therefore on the loading rate at the plate interface, profoundly affecting the pattern of earthquake cycles. We develop numerical simulations of slip evolution under rate-and-state friction and show that this effect introduces segmentation of the earthquake cycle. In crustal dynamics, it is challenging to describe the dynamics of fault-bend folds, because the deformation is accommodated by small amounts of slip parallel to bedding planes ("flexural slip"), localized on axial surface, i.e. folding axes pinned to fault bends. We use dislocation theory to describe the dynamics of folding along these axial surfaces, using analytic solutions that provide displacement and stress kernels to simulate the temporal evolution of folding and assess the effects of folding on earthquake cycles. Studies of the 2015 Gorkha earthquake, Nepal, have shown that fault geometry can affect earthquake segmentation. Here, we show that in addition to the fault geometry, the actual geology of the rocks in the hanging wall of the fault also affect critical parameters, including the loading rate on parts of the fault, based on fault-bend folding theory. Because loading velocity controls the recurrence time of earthquakes, these two effects together are likely to have a strong impact on the
Earthquake cycle deformation in the Tibetan plateau with a weak mid-crustal layer
NASA Astrophysics Data System (ADS)
DeVries, Phoebe M. R.; Meade, Brendan J.
2013-06-01
observations of interseismic deformation across the Tibetan plateau contain information about both tectonic and earthquake cycle processes. Time-variations in surface velocities between large earthquakes are sensitive to the rheological structure of the subseismogenic crust, and, in particular, the viscosity of the middle and lower crust. Here we develop a semianalytic solution for time-dependent interseismic velocities resulting from viscoelastic stress relaxation in a localized midcrustal layer in response to forcing by a sequence of periodic earthquakes. Earthquake cycle models with a weak midcrustal layer exhibit substantially more near-fault preseismic strain localization than do classic two-layer models at short (<100 yr) Maxwell times. We apply both this three-layer model and the classic two-layer model to geodetic observations before and after the 1997 MW = 7.6 Manyi and 2001 MW = 7.8 Kokoxili strike-slip earthquakes in Tibet to estimate the viscosity of the crust below a 20 km thick seismogenic layer. For these events, interseismic stress relaxation in a weak (viscosity ≤1018.5 Paṡs) and thin (height ≤20 km) midcrustal layer explains observations of both preseismic near-fault strain localization and rapid (>50 mm/yr) postseismic velocities in the years following the coseismic ruptures. We suggest that earthquake cycle models with a localized midcrustal layer can simultaneously explain both preseismic and postseismic geodetic observations with a single Maxwell viscosity, while the classic two-layer model requires a rheology with multiple relaxation time scales.
Remote monitoring of the earthquake cycle using satellite radar interferometry.
Wright, Tim J
2002-12-15
The earthquake cycle is poorly understood. Earthquakes continue to occur on previously unrecognized faults. Earthquake prediction seems impossible. These remain the facts despite nearly 100 years of intensive study since the earthquake cycle was first conceptualized. Using data acquired from satellites in orbit 800 km above the Earth, a new technique, radar interferometry (InSAR), has the potential to solve these problems. For the first time, detailed maps of the warping of the Earth's surface during the earthquake cycle can be obtained with a spatial resolution of a few tens of metres and a precision of a few millimetres. InSAR does not need equipment on the ground or expensive field campaigns, so it can gather crucial data on earthquakes and the seismic cycle from some of the remotest areas of the planet. In this article, I review some of the remarkable observations of the earthquake cycle already made using radar interferometry and speculate on breakthroughs that are tantalizingly close.
Modeling, Forecasting and Mitigating Extreme Earthquakes
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
NASA Astrophysics Data System (ADS)
Dolan, James F.; Meade, Brendan J.
2017-12-01
Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 MW = 7.6 Izmit rupture are ˜40% faster than Holocene geologic rates. In contrast, geodetic rates of ˜6-8 mm/yr along the Denali fault prior to the 2002 MW = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ˜12 mm/yr. In the third example where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 MW = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ˜11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.
Aseismic Slip Throughout the Earthquake Cycle in Nicoya Peninsula, Costa Rica
NASA Astrophysics Data System (ADS)
Voss, N. K.; Liu, Z.; Hobbs, T. E.; Schwartz, S. Y.; Malservisi, R.; Dixon, T. H.; Protti, M.
2017-12-01
Geodetically resolved Slow Slip Events (SSE), a large M7.6 earthquake, and afterslip have all been documented in the last 16 years of observation in Nicoya, Costa Rica. We present a synthesis of the observations of observed aseismic slip behavior. SSEs in Nicoya are observed both during the late inter-seismic period and the post-seismic period, despite ongoing post-seismic phenomena. While recurrence rates appear unchanged by position within earthquake cycle, SSE behavior does vary before and after the event. We discuss how afterslip may be responsible for this change in behavior. We also present observations of a pre-earthquake transient observed starting 6 months prior to the M7.6 megathrust earthquake. This earthquake takes place within an asperity that is surrounded by regions which previously underwent slow slip behavior. We compare how this pre-earthquake transient, modeled as aseismic slip, differs from observations of typical Nicoya SSEs. Finally, we attempt to explain the segmentation of behaviors in Costa Rica with a simple frictional model.
NASA Astrophysics Data System (ADS)
Newman, W. I.; Turcotte, D. L.
2002-12-01
We have studied a hybrid model combining the forest-fire model with the site-percolation model in order to better understand the earthquake cycle. We consider a square array of sites. At each time step, a "tree" is dropped on a randomly chosen site and is planted if the site is unoccupied. When a cluster of "trees" spans the site (a percolating cluster), all the trees in the cluster are removed ("burned") in a "fire." The removal of the cluster is analogous to a characteristic earthquake and planting "trees" is analogous to increasing the regional stress. The clusters are analogous to the metastable regions of a fault over which an earthquake rupture can propagate once triggered. We find that the frequency-area statistics of the metastable regions are power-law with a negative exponent of two (as in the forest-fire model). This is analogous to the Gutenberg-Richter distribution of seismicity. This "self-organized critical behavior" can be explained in terms of an inverse cascade of clusters. Individual trees move from small to larger clusters until they are destroyed. This inverse cascade of clusters is self-similar and the power-law distribution of cluster sizes has been shown to have an exponent of two. We have quantified the forecasting of the spanning fires using error diagrams. The assumption that "fires" (earthquakes) are quasi-periodic has moderate predictability. The density of trees gives an improved degree of predictability, while the size of the largest cluster of trees provides a substantial improvement in forecasting a "fire."
Deformation cycles of subduction earthquakes in a viscoelastic Earth.
Wang, Kelin; Hu, Yan; He, Jiangheng
2012-04-18
Subduction zones produce the largest earthquakes. Over the past two decades, space geodesy has revolutionized our view of crustal deformation between consecutive earthquakes. The short time span of modern measurements necessitates comparative studies of subduction zones that are at different stages of the deformation cycle. Piecing together geodetic 'snapshots' from different subduction zones leads to a unifying picture in which the deformation is controlled by both the short-term (years) and long-term (decades and centuries) viscous behaviour of the mantle. Traditional views based on elastic models, such as coseismic deformation being a mirror image of interseismic deformation, are being thoroughly revised.
NASA Astrophysics Data System (ADS)
Govers, R.; Furlong, K. P.; van de Wiel, L.; Herman, M. W.; Broerse, T.
2018-03-01
Recent megathrust events in Tohoku (Japan), Maule (Chile), and Sumatra (Indonesia) were well recorded. Much has been learned about the dominant physical processes in megathrust zones: (partial) locking of the plate interface, detailed coseismic slip, relocking, afterslip, viscoelastic mantle relaxation, and interseismic loading. These and older observations show complex spatial and temporal patterns in crustal deformation and displacement, and significant differences among different margins. A key question is whether these differences reflect variations in the underlying processes, like differences in locking, or the margin geometry, or whether they are a consequence of the stage in the earthquake cycle of the margin. Quantitative models can connect these plate boundary processes to surficial and far-field observations. We use relatively simple, cyclic geodynamic models to isolate the first-order geodetic signature of the megathrust cycle. Coseismic and subsequent slip on the subduction interface is dynamically (and consistently) driven. A review of global preseismic, coseismic, and postseismic geodetic observations, and of their fit to the model predictions, indicates that similar physical processes are active at different margins. Most of the observed variability between the individual margins appears to be controlled by their different stages in the earthquake cycle. The modeling results also provide a possible explanation for observations of tensile faulting aftershocks and tensile cracking of the overriding plate, which are puzzling in the context of convergence/compression. From the inversion of our synthetic GNSS velocities we find that geodetic observations may incorrectly suggest weak locking of some margins, for example, the west Aleutian margin.
A Fluid-driven Earthquake Cycle, Omori's Law, and Fluid-driven Aftershocks
NASA Astrophysics Data System (ADS)
Miller, S. A.
2015-12-01
Few models exist that predict the Omori's Law of aftershock rate decay, with rate-state friction the only physically-based model. ETAS is a probabilistic model of cascading failures, and is sometimes used to infer rate-state frictional properties. However, the (perhaps dominant) role of fluids in the earthquake process is being increasingly realised, so a fluid-based physical model for Omori's Law should be available. In this talk, I present an hypothesis for a fluid-driven earthquake cycle where dehydration and decarbonization at depth provides continuous sources of buoyant high pressure fluids that must eventually make their way back to the surface. The natural pathway for fluid escape is along plate boundaries, where in the ductile regime high pressure fluids likely play an integral role in episodic tremor and slow slip earthquakes. At shallower levels, high pressure fluids pool at the base of seismogenic zones, with the reservoir expanding in scale through the earthquake cycle. Late in the cycle, these fluids can invade and degrade the strength of the brittle crust and contribute to earthquake nucleation. The mainshock opens permeable networks that provide escape pathways for high pressure fluids and generate aftershocks along these flow paths, while creating new pathways by the aftershocks themselves. Thermally activated precipitation then seals up these pathways, returning the system to a low-permeability environment and effective seal during the subsequent tectonic stress buildup. I find that the multiplicative effect of an exponential dependence of permeability on the effective normal stress coupled with an Arrhenius-type, thermally activated exponential reduction in permeability results in Omori's Law. I simulate this scenario using a very simple model that combines non-linear diffusion and a step-wise increase in permeability when a Mohr Coulomb failure condition is met, and allow permeability to decrease as an exponential function in time. I show very
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
Earthquake recurrence models fail when earthquakes fail to reset the stress field
Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.
2012-01-01
Parkfield's regularly occurring M6 mainshocks, about every 25 years, have over two decades stoked seismologists' hopes to successfully predict an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as counter example, which did release a significant portion of the stress along its fault segment and yields a substantial change in b-values.
NASA Astrophysics Data System (ADS)
Sun, Y.; Luo, G.
2017-12-01
Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for us to investigate these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore effects of topographic loading and viscosity of middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on the neighboring faults or segments, accelerating the future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults, which are far apart, may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model synthetic seismic catalog and paleoseismic data, we analyze probability of earthquake migration between major faults in northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
Equivalent strike-slip earthquake cycles in half-space and lithosphere-asthenosphere earth models
Savage, J.C.
1990-01-01
By virtue of the images used in the dislocation solution, the deformation at the free surface produced throughout the earthquake cycle by slippage on a long strike-slip fault in an Earth model consisting of an elastic plate (lithosphere) overlying a viscoelastic half-space (asthenosphere) can be duplicated by prescribed slip on a vertical fault embedded in an elastic half-space. Inversion of 1973-1988 geodetic measurements of deformation across the segment of the San Andreas fault in the Transverse Ranges north of Los Angeles for the half-space equivalent slip distribution suggests no significant slip on the fault above 30 km and a uniform slip rate of 36 mm/yr below 30 km. One equivalent lithosphere-asthenosphere model would have a 30-km thick lithosphere and an asthenosphere relaxation time greater than 33 years, but other models are possible. -from Author
NASA Astrophysics Data System (ADS)
Wei, M.
2016-12-01
Progress towards a quantitative and predictive understanding of the earthquake behavior can be achieved by improved understanding of earthquake cycles. However, it is hindered by the long repeat times (100s to 1000s of years) of the largest earthquakes on most faults. At fast-spreading oceanic transform faults, the typical repeating time ranges from 5-20 years, making them a unique tectonic environment for studying the earthquake cycle. One important observation on OTFs is the quasi-periodicity and the spatial-temporal clustering of large earthquakes: same fault segment ruptured repeatedly at a near constant interval and nearby segments ruptured during a short time period. This has been observed on the Gofar and Discovery faults in the East Pacific Rise. Between 1992 and 2014, five clusters of M6 earthquakes occurred on the Gofar and Discovery fault system with recurrence intervals of 4-6 years. Each cluster consisted of a westward migration of seismicity from the Discovery to Gofar segment within a 2-year period, providing strong evidence for spatial-temporal clustering of large OTFs earthquakes. I simulated earthquake cycles of oceanic transform fault in the framework of rate-and-state friction, motivated by the observations at the Gofar and Discovery faults. I focus on a model with two seismic segments, each 20 km long and 5 km wide, separated by an aseismic segment of 10 km wide. This geometry is set based on aftershock locations of the 2008 M6.0 earthquake on Gofar. The repeating large earthquake on both segments are reproduced with similar magnitude as observed. I set the state parameter differently for the two seismic segments so initially they are not synchornized. Results also show that synchronization of the two seismic patches can be achieved after several earthquake cycles when the effective normal stress or the a-b parameter is smaller than surrounding aseismic areas, both having reduced the resistance to seismic rupture in the VS segment. These
Three dimensional modelling of earthquake rupture cycles on frictional faults
NASA Astrophysics Data System (ADS)
Simpson, Guy; May, Dave
2017-04-01
We are developing an efficient MPI-parallel numerical method to simulate earthquake sequences on preexisting faults embedding within a three dimensional viscoelastic half-space. We solve the velocity form of the elasto(visco)dynamic equations using a continuous Galerkin Finite Element Method on an unstructured pentahedral mesh, which thus permits local spatial refinement in the vicinity of the fault. Friction sliding is coupled to the viscoelastic solid via rate- and state-dependent friction laws using the split-node technique. Our coupled formulation employs a picard-type non-linear solver with a fully implicit, first order accurate time integrator that utilises an adaptive time step that efficiently evolves the system through multiple seismic cycles. The implementation leverages advanced parallel solvers, preconditioners and linear algebra from the Portable Extensible Toolkit for Scientific Computing (PETSc) library. The model can treat heterogeneous frictional properties and stress states on the fault and surrounding solid as well as non-planar fault geometries. Preliminary tests show that the model successfully reproduces dynamic rupture on a vertical strike-slip fault in a half-space governed by rate-state friction with the ageing law.
A Benchmarking setup for Coupled Earthquake Cycle - Dynamic Rupture - Tsunami Simulations
NASA Astrophysics Data System (ADS)
Behrens, Joern; Bader, Michael; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Vater, Stefan; Wollherr, Stephanie; van Zelst, Iris
2017-04-01
We developed a simulation framework for coupled physics-based earthquake rupture generation with tsunami propagation and inundation on a simplified subduction zone system for the project "Advanced Simulation of Coupled Earthquake and Tsunami Events" (ASCETE, funded by the Volkswagen Foundation). Here, we present a benchmarking setup that can be used for complex rupture models. The workflow begins with a 2D seismo-thermo-mechanical earthquake cycle model representing long term deformation along a planar, shallowly dipping subduction zone interface. Slip instabilities that approximate earthquakes arise spontaneously along the subduction zone interface in this model. The absolute stress field and material properties for a single slip event are used as initial conditions for a dynamic earthquake rupture model.The rupture simulation is performed with SeisSol, which uses an ADER discontinuous Galerkin discretization scheme with an unstructured tetrahedral mesh. The seafloor displacements resulting from this rupture are transferred to the tsunami model with a simple coastal run-up profile. An adaptive mesh discretizing the shallow water equations with a Runge-Kutta discontinuous Galerkin (RKDG) scheme subsequently allows for an accurate and efficient representation of the tsunami evolution and inundation at the coast. This workflow allows for evaluation of how the rupture behavior affects the hydrodynamic wave propagation and coastal inundation. We present coupled results for differing earthquake scenarios. Examples include megathrust only ruptures versus ruptures with splay fault branching off the megathrust near the surface. Coupling to the tsunami simulation component is performed either dynamically (time dependent) or statically, resulting in differing tsunami wave and inundation behavior. The simplified topographical setup allows for systematic parameter studies and reproducible physical studies.
Time-Varying Upper-Plate Deformation during the Megathrust Subduction Earthquake Cycle
NASA Astrophysics Data System (ADS)
Furlong, Kevin P.; Govers, Rob; Herman, Matthew
2015-04-01
Over the past several decades of the WEGENER era, our abilities to observe and image the deformational behavior of the upper plate in megathrust subduction zones has dramatically improved. Several intriguing inferences can be made from these observations including apparent lateral variations in locking along subduction zones, which differs from interseismic to coseismic periods; the significant magnitude of post-earthquake deformation (e.g. following the 20U14 Mw Iquique, Chile earthquake, observed on-land GPS post-EQ displacements are comparable to the co-seismic displacements); and incompatibilities between rates of slip deficit accumulation and resulting earthquake co-seismic slip (e.g. pre-Tohoku, inferred rates of slip deficit accumulation on the megathrust significantly exceed slip amounts for the ~ 1000 year recurrence.) Modeling capabilities have grown from fitting simple elastic accumulation/rebound curves to sparse data to having spatially dense continuous time series that allow us to infer details of plate boundary coupling, rheology-driven transient deformation, and partitioning among inter-earthquake and co-seismic displacements. In this research we utilize a 2D numerical modeling to explore the time-varying deformational behavior of subduction zones during the earthquake cycle with an emphasis on upper-plate and plate interface behavior. We have used a simplified model configuration to isolate fundamental processes associated with the earthquake cycle, rather than attempting to fit details of specific megathrust zones. Using a simple subduction geometry, but realistic rheologic layering we are evaluating the time-varying displacement and stress response through a multi-earthquake cycle history. We use a simple model configuration - an elastic subducting slab, an elastic upper plate (shallower than 40 km), and a visco-elastic upper plate (deeper than 40 km). This configuration leads to an upper plate that acts as a deforming elastic beam at inter-earthquake
Crustal deformation, the earthquake cycle, and models of viscoelastic flow in the asthenosphere
NASA Technical Reports Server (NTRS)
Cohen, S. C.; Kramer, M. J.
1983-01-01
The crustal deformation patterns associated with the earthquake cycle can depend strongly on the rheological properties of subcrustal material. Substantial deviations from the simple patterns for a uniformly elastic earth are expected when viscoelastic flow of subcrustal material is considered. The detailed description of the deformation pattern and in particular the surface displacements, displacement rates, strains, and strain rates depend on the structure and geometry of the material near the seismogenic zone. The origin of some of these differences are resolved by analyzing several different linear viscoelastic models with a common finite element computational technique. The models involve strike-slip faulting and include a thin channel asthenosphere model, a model with a varying thickness lithosphere, and a model with a viscoelastic inclusion below the brittle slip plane. The calculations reveal that the surface deformation pattern is most sensitive to the rheology of the material that lies below the slip plane in a volume whose extent is a few times the fault depth. If this material is viscoelastic, the surface deformation pattern resembles that of an elastic layer lying over a viscoelastic half-space. When the thickness or breath of the viscoelastic material is less than a few times the fault depth, then the surface deformation pattern is altered and geodetic measurements are potentially useful for studying the details of subsurface geometry and structure. Distinguishing among the various models is best accomplished by making geodetic measurements not only near the fault but out to distances equal to several times the fault depth. This is where the model differences are greatest; these differences will be most readily detected shortly after an earthquake when viscoelastic effects are most pronounced.
Tremor, remote triggering and earthquake cycle
NASA Astrophysics Data System (ADS)
Peng, Z.
2012-12-01
Deep tectonic tremor and episodic slow-slip events have been observed at major plate-boundary faults around the Pacific Rim. These events have much longer source durations than regular earthquakes, and are generally located near or below the seismogenic zone where regular earthquakes occur. Tremor and slow-slip events appear to be extremely stress sensitive, and could be instantaneously triggered by distant earthquakes and solid earth tides. However, many important questions remain open. For example, it is still not clear what are the necessary conditions for tremor generation, and how remote triggering could affect large earthquake cycle. Here I report a global search of tremor triggered by recent large teleseismic earthquakes. We mainly focus on major subduction zones around the Pacific Rim. These include the southwest and northeast Japan subduction zones, the Hikurangi subduction zone in New Zealand, the Cascadia subduction zone, and the major subduction zones in Central and South America. In addition, we examine major strike-slip faults around the Caribbean plate, the Queen Charlotte fault in northern Pacific Northwest Coast, and the San Andreas fault system in California. In each place, we first identify triggered tremor as a high-frequency non-impulsive signal that is in phase with the large-amplitude teleseismic waves. We also calculate the dynamic stress and check the triggering relationship with the Love and Rayleigh waves. Finally, we calculate the triggering potential with the local fault orientation and surface-wave incident angles. Our results suggest that tremor exists at many plate-boundary faults in different tectonic environments, and could be triggered by dynamic stress as low as a few kPas. In addition, we summarize recent observations of slow-slip events and earthquake swarms triggered by large distant earthquakes. Finally, we propose several mechanisms that could explain apparent clustering of large earthquakes around the world.
Murray, J.; Langbein, J.
2006-01-01
Parkfield, California, which experienced M 6.0 earthquakes in 1934, 1966, and 2004, is one of the few locales for which geodetic observations span multiple earthquake cycles. We undertake a comprehensive study of deformation over the most recent earthquake cycle and explore the results in the context of geodetic data collected prior to the 1966 event. Through joint inversion of the variety of Parkfield geodetic measurements (trilateration, two-color laser, and Global Positioning System), including previously unpublished two-color data, we estimate the spatial distribution of slip and slip rate along the San Andreas using a fault geometry based on precisely relocated seismicity. Although the three most recent Parkfield earthquakes appear complementary in their along-strike distributions of slip, they do not produce uniform strain release along strike over multiple seismic cycles. Since the 1934 earthquake, more than 1 m of slip deficit has accumulated on portions of the fault that slipped in the 1966 and 2004 earthquakes, and an average of 2 m of slip deficit exists on the 33 km of the fault southeast of Gold Hill to be released in a future, perhaps larger, earthquake. It appears that the fault is capable of partially releasing stored strain in moderate earthquakes, maintaining a disequilibrium through multiple earthquake cycles. This complicates the application of simple earthquake recurrence models that assume only the strain accumulated since the most recent event is relevant to the size or timing of an upcoming earthquake. Our findings further emphasize that accumulated slip deficit is not sufficient for earthquake nucleation.
NASA Astrophysics Data System (ADS)
Yamasaki, T.; Wright, T. J.; Houseman, G. A.
2013-12-01
After large earthquakes, rapid postseismic transient motions are commonly observed. Later in the loading cycle, strain is typically focused in narrow regions around the fault. In simple two-layer models of the loading cycle for strike-slip faults, rapid post-seismic transients require low viscosities beneath the elastic layer, but localized strain later in the cycle implies high viscosities in the crust. To explain this apparent paradox, complex transient rheologies have been invoked. Here we test an alternative hypothesis in which spatial variations in material properties of the crust can explain the geodetic observations. We use a 3D viscoelastic finite element code to examine two simple models of periodic fault slip: a stratified model in which crustal viscosity decreases exponentially with depth below an upper elastic layer, and a block model in which a low viscosity domain centered beneath the fault is embedded in a higher viscosity background representing normal crust. We test these models using GPS data acquired before and after the 1999 Izmit/Duzce earthquakes on the North Anatolian Fault Zone (Turkey). The model with depth-dependent viscosity can show both high postseismic velocities, and preseismic localization of the deformation, if the viscosity contrast from top to bottom of layer exceeds a factor of about 104. However, with no lateral variations in viscosity, this model cannot explain the proximity to the fault of maximum postseismic velocities. In contrast, the model which includes a localized weak zone beneath the faulted elastic lid can explain all the observations, if the weak zone extends down to mid-crustal levels and outward to 10 or 20 km from the fault. The non-dimensional ratio of relaxation time to earthquake repeat time, τ/Δt, is the critical parameter in controlling the observed deformation. In the weak-zone model, τ/Δt should be in the range 0.005 to 0.01 in the weak domain, and larger than ~ 1.0 elsewhere. This implies a viscosity
NASA Astrophysics Data System (ADS)
Fleitout, L.; Trubienko, O.; Garaud, J.; Vigny, C.; Cailletaud, G.; Simons, W. J.; Satirapod, C.; Shestakov, N.
2012-12-01
A 3D finite element code (Zebulon-Zset) is used to model deformations through the seismic cycle in the areas surrounding the last three large subduction earthquakes: Sumatra, Japan and Chile. The mesh featuring a broad spherical shell portion with a viscoelastic asthenosphere is refined close to the subduction zones. The model is constrained by 6 years of postseismic data in Sumatra area and over a year of data for Japan and Chile plus preseismic data in the three areas. The coseismic displacements on the subduction plane are inverted from the coseismic displacements using the finite element program and provide the initial stresses. The predicted horizontal postseismic displacements depend upon the thicknesses of the elastic plate and of the low viscosity asthenosphere. Non-dimensionalized by the coseismic displacements, they present an almost uniform value between 500km and 1500km from the trench for elastic plates 80km thick. The time evolution of the velocities is function of the creep law (Maxwell, Burger or power-law creep). Moreover, the forward models predict a sizable far-field subsidence, also with a spatial distribution which varies with the geometry of the asthenosphere and lithosphere. Slip on the subduction interface does not induce such a subsidence. The observed horizontal velocities, divided by the coseismic displacement, present a similar pattern as function of time and distance from trench for the three areas, indicative of similar lithospheric and asthenospheric thicknesses and asthenospheric viscosity. This pattern cannot be fitted with power-law creep in the asthenosphere but indicates a lithosphere 60 to 90km thick and an asthenosphere of thickness of the order of 100km with a burger rheology represented by a Kelvin-Voigt element with a viscosity of 3.1018Pas and μKelvin=μelastic/3. A second Kelvin-Voigt element with very limited amplitude may explain some characteristics of the short time-scale signal. The postseismic subsidence is
NASA Astrophysics Data System (ADS)
Sobolev, Stephan; Muldashev, Iskander
2016-04-01
According to conventional view, postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during first few months to year, and later by visco-elastic relaxation in mantle wedge. We test this idea by cross-scale thermomechanical models of seismic cycle that employs elasticity, mineral-physics constrained non-linear transient viscous rheology and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at geological time-scale including a narrow subduction channel with low static friction for two settings, similar to the Southern Chile in the region of the great Chile Earthquake of 1960 and Japan in the region of Tohoku Earthquake of 2011. We next introduce in the same models classic rate-and state friction law in subduction channels, leading to stick-slip instability. The models start to generate spontaneous earthquake sequences and model parameters are set to closely replicate co-seismic deformations of Chile and Japan earthquakes. In order to follow in details deformation process during the entire seismic cycle and multiple seismic cycles we use adaptive time-step algorithm changing integration step from 40 sec during the earthquake to minute-5 year during postseismic and interseismic processes. We show that for the case of the Chile earthquake visco-elastic relaxation in the mantle wedge becomes dominant relaxation process already since 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for Tohoku earthquake is consistent with the geodetic observations for the day-to-4year time range. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can be best distinguished.
NASA Astrophysics Data System (ADS)
Rolandone, F.; Bürgmann, R.; Nadeau, R.; Freed, A.
2003-12-01
We have demonstrated that in the aftermath of large earthquakes, the depth extent of aftershocks shows an immediate deepening from pre-earthquake levels, followed by a time-dependent postseismic shallowing. We use these seismic data to constrain the variation of the depth of the seismic-aseismic transition with time throughout the earthquake cycle. Most studies of the seismic-aseismic transition have focussed on the effect of temperature and/or lithology on the transition either from brittle faulting to viscous flow or from unstable to stable sliding. They have shown that the maximum depth of seismic activity is well correlated with the spatial variations of these two parameters. However, little has been done to examine how the maximum depth of seismogenic faulting varies locally, at the scale of a fault segment, during the course of the earthquake cycle. Geologic and laboratory observations indicate that the depth of the seismic-aseismic transition should vary with strain rate and thus change with time throughout the earthquake cycle. We quantify the time-dependent variations in the depth of seismicity on various strike-slip faults in California before and after large earthquakes. We specifically investigate (1) the deepening of the aftershocks relative to the background seismicity, (2) the time constant of the postseismic shallowing of the deepest earthquakes, and (3) the correlation of the time-dependent pattern with the coseismic slip distribution and the expected stress increase. Together with geodetic measurements, these seismological observations form the basis for developing more sophisticated models for the mechanical evolution of strike-slip shear zones during the earthquake cycle. We develop non-linear viscoelastic models, for which the brittle-ductile transition is not fixed, but varies with assumed temperature and calculated stress gradients. We use them to place constraints on strain rate at depth, on time-dependent rheology, and on the partitioning
NASA Astrophysics Data System (ADS)
Trubienko, Olga; Fleitout, Luce; Garaud, Jean-Didier; Vigny, Christophe
2013-03-01
The deformations of the overriding and subducting plates during the seismic cycle associated with large subduction earthquakes are modelled using 2D and 3D finite element techniques. A particular emphasis is put on the interseismic velocities and on the impact of the rheology of the asthenosphere. The distance over which the seismic cycle perturbs significantly the velocities depends upon the ratio of the viscosity in the asthenosphere to the period of the seismic cycle and can reach several thousand km for rheological parameters deduced from the first years of deformation after the Aceh earthquake. For a same early postseismic velocity, a Burger rheology of the asthenosphere implies a smaller duration of the postseismic phase and thus smaller interseismic velocities than a Maxwell rheology. A low viscosity wedge (LVW) modifies very significantly the predicted horizontal and vertical motions in the near and middle fields. In particular, with a LVW, the peak in vertical velocity at the end of the cycle is predicted to be no longer above the deep end of the locked section of the fault but further away, above the continentward limit of the LVW. The lateral viscosity variations linked to the presence at depth of the subducting slab affect substantially the results. The north-south interseismic compression predicted by this preliminary 2D model over more than 1500 km within the Sunda block is in good agreement with the pre-2004 velocities with respect to South-China inferred from GPS observations in Thailand, Malaysia and Indonesia. In Japan, before the Tohoku earthquake, the eastern part of northern Honshu was subsiding while the western part was uplifting. This transition from subsidence to uplift so far away from the trench is well fitted by the predictions from our models involving a LVW. Most of the results obtained here in a 2D geometry are shown to provide a good estimate of the displacements for fault segments of finite lateral extent, with a 3D spherical
Geodetic Insights into the Earthquake Cycle in a Fold and Thrust Belt
NASA Astrophysics Data System (ADS)
Ingleby, T. F.; Wright, T. J.; Butterworth, V.; Weiss, J. R.; Elliott, J.
2017-12-01
Geodetic measurements are often sparse in time (e.g. individual interferograms) and/or space (e.g. GNSS stations), adversely affecting our ability to capture the spatiotemporal detail required to study the earthquake cycle in complex tectonic systems such as subaerial fold and thrust belts. In an effort to overcome these limitations we combine 3 generations of SAR satellite data (ERS 1/2, Envisat & Sentinel-1a/b) to obtain a 25 year, high-resolution surface displacement time series over the frontal portion of an active fold and thrust belt near Quetta, Pakistan where a Mw 7.1 earthquake doublet occurred in 1997. With these data we capture a significant portion of the seismic cycle including the interseismic, coseismic and postseismic phases. Each satellite time series has been referenced to the first ERS-1 SAR epoch by fitting a ground deformation model to the data. This allows us to separate deformation associated with each phase and to examine their relative roles in accommodating strain and creating topography, and to explore the relationship between the earthquake cycle and critical taper wedge mechanics. Modeling of the coseismic deformation suggests a long, thin rupture with rupture length 7 times greater than rupture width. Rupture was confined to a 20-30 degree north-northeast dipping reverse fault or ramp at depth, which may be connecting two weak decollements at approximately 8 km and 13 km depth. Alternatively, intersections between the coseismic fault plane and pre-existing steeper splay faults underlying folds may have played a significant role in inhibiting rupture, as evidenced by intersection points bordering the rupture. These fault intersections effectively partition the fault system down-dip and enable long, thin ruptures. Postseismic deformation is manifest as uplift across short-wavelength folds at the thrust front, with displacement rates decreasing with time since the earthquake. Broader patterns of postseismic uplift are also observed
NASA Astrophysics Data System (ADS)
Allison, K. L.; Dunham, E. M.
2017-12-01
We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes, and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally-based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismc fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that basal tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature we also consider the effect of heat generation. We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the
NASA Astrophysics Data System (ADS)
Tao, Wei; Shen, Zheng-Kang; Zhang, Yong
2016-04-01
The Longmen Shan, located in the conjunction of the eastern margin the Tibet plateau and Sichuan basin, is a typical area for studying the deformation pattern of the Tibet plateau. Following the 2008 Mw 7.9 Wenchuan earthquake (WE) rupturing the Longmen Shan Fault (LSF), a great deal of observations and studies on geology, geophysics, and geodesy have been carried out for this region, with results published successively in recent years. Using the 2D viscoelastic finite element model, introducing the rate-state friction law to the fault, this thesis makes modeling of the earthquake recurrence process and the dynamic evolutionary processes in an earthquake cycle of 10 thousand years. By analyzing the displacement, velocity, stresses, strain energy and strain energy increment fields, this work obtains the following conclusions: (1) The maximum coseismic displacement on the fault is on the surface, and the damage on the hanging wall is much more serious than that on the foot wall of the fault. If the detachment layer is absent, the coseismic displacement would be smaller and the relative displacement between the hanging wall and foot wall would also be smaller. (2) In every stage of the earthquake cycle, the velocities (especially the vertical velocities) on the hanging wall of the fault are larger than that on the food wall, and the values and the distribution patterns of the velocity fields are similar. While in the locking stage prior to the earthquake, the velocities in crust and the relative velocities between hanging wall and foot wall decrease. For the model without the detachment layer, the velocities in crust in the post-seismic stage is much larger than those in other stages. (3) The maximum principle stress and the maximum shear stress concentrate around the joint of the fault and detachment layer, therefore the earthquake would nucleate and start here. (4) The strain density distribution patterns in stages of the earthquake cycle are similar. There are two
Influence of the Wenchuan earthquake on self-reported irregular menstrual cycles in surviving women.
Li, Xiao-Hong; Qin, Lang; Hu, Han; Luo, Shan; Li, Lei; Fan, Wei; Xiao, Zhun; Li, Ying-Xing; Li, Shang-Wei
2011-09-01
To explore the influence of stress induced by the Wenchuan earthquake on the menstrual cycles of surviving women. Self-reports of the menstrual cycles of 473 women that survived the Wenchuan earthquake were analyzed. Menstrual regularity was defined as menses between 21 and 35 days long. The death of a child or the loss of property and social resources was verified for all surviving women. The severity of these losses was assessed and graded as high, little, and none. About 21% of the study participants reported that their menstrual cycles became irregular after the Wenchuan earthquake, and this percentage was significantly higher than before the earthquake (6%, p < 0.05). About 30% of the surviving women with a high degree of loss in the earthquake reported menstrual irregularity after the earthquake. Association analyses showed that some stressors of the Wenchuan earthquake were strongly associated with self-reports of menstrual irregularity, including the loss of children (RR: 1.58; 95% CI: 1.09, 2.28), large amounts of property (RR: 1.49; 95% CI: 1.03, 2.15), social resources (RR: 1.34; 95% CI: 1.00, 1.80) and the hormonal contraception use (RR: 1.62; 95% CI: 1.21, 1.83). Self-reported menstrual irregularity is common in women that survived the Wenchuan earthquake, especially in those who lost children, large amounts of property and social resources.
Characteristics of strong ground motion generation areas by fully dynamic earthquake cycles
NASA Astrophysics Data System (ADS)
Galvez, P.; Somerville, P.; Ampuero, J. P.; Petukhin, A.; Yindi, L.
2016-12-01
During recent subduction zone earthquakes (2010 Mw 8.8 Maule and 2011 Mw 9.0 Tohoku), high frequency ground motion radiation has been detected in deep regions of seismogenic zones. By semblance analysis of wave packets, Kurahashi & Irikura (2013) found strong ground motion generation areas (SMGAs) located in the down dip region of the 2011 Tohoku rupture. To reproduce the rupture sequence of SMGA's and replicate their rupture time and ground motions, we extended previous work on dynamic rupture simulations with slip reactivation (Galvez et al, 2016). We adjusted stresses on the most southern SMGAs of Kurahashi & Irikura (2013) model to reproduce the observed peak ground velocity recorded at seismic stations along Japan for periods up to 5 seconds. To generate higher frequency ground motions we input the rupture time, final slip and slip velocity of the dynamic model into the stochastic ground motion generator of Graves & Pitarka (2010). Our results are in agreement with the ground motions recorded at the KiK-net and K-NET stations.While we reproduced the recorded ground motions of the 2011 Tohoku event, it is unknown whether the characteristics and location of SMGA's will persist in future large earthquakes in this region. Although the SMGA's have large peak slip velocities, the areas of largest final slip are located elsewhere. To elucidate whether this anti-correlation persists in time, we conducted earthquake cycle simulations and analysed the spatial correlation of peak slip velocities, stress drops and final slip of main events. We also investigated whether or not the SMGA's migrate to other regions of the seismic zone.To perform this study, we coupled the quasi-dynamic boundary element solver QDYN (Luo & Ampuero, 2015) and the dynamic spectral element solver SPECFEM3D (Galvez et al., 2014; 2016). The workflow alternates between inter-seismic periods solved with QDYN and coseismic periods solved with SPECFEM3D, with automated switch based on slip rate
NASA Astrophysics Data System (ADS)
Ruff, Larry J.
2001-04-01
The deep creep plate interface extends from the down-dip edge of the seismogenic zone down to the base of the overlying lithosphere in subduction zones. Seismogenic/deep creep zone interaction during the earthquake cycle produces spatial and temporal variations in strains within the surrounding elastic material. Strain observations in the Nankai subduction zone show distinct deformation styles in the co-seismic, post-seismic, and inter-seismic phases associated with the 1946 great earthquake. The most widely used kinematic model to match geodetic observations has been a 2-D Savage-type model where a plate interface is placed in an elastic half-space and co-seismic slip occurs in the upper seismogenic portion of the interface, while inter-seismic deformation is modeled by a locked seismogenic zone and a constant slip velocity across the deep creep interface. Here, I use the simplest possible 2-D mechanical model with just two blocks to study the stress interaction between the seismogenic and deep creep zones. The seismogenic zone behaves as a stick-slip interface where co-seismic slip or stress drop constrain the model. A linear constitutive law for the deep creep zone connects the shear stress (σ) to the slip velocity across the plate interface (s') with the material property of interface viscosity (ζ ) as: σ = ζ s'. The analytic solution for the steady-state two-block model produces simple formulas that connect some spatially-averaged geodetic observations to model quantities. Aside from the basic subduction zone geometry, the key observed parameter is τ, the characteristic time of the rapid post-seismic slip in the deep creep interface. Observations of τ range from about 5 years (Nankai and Alaska) to 15 years (Chile). The simple model uses these values for τ to produce estimates for ζ that range from 8.4 × 1013 Pa/m/s (in Nankai) to 6.5 × 1014 Pa/m/s (in Chile). Then, the model predicts that the shear stress acting on deep creep interface averaged over
NASA Astrophysics Data System (ADS)
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
Earthquake Cycle Simulations with Rate-and-State Friction and Linear and Nonlinear Viscoelasticity
NASA Astrophysics Data System (ADS)
Allison, K. L.; Dunham, E. M.
2016-12-01
We have implemented a parallel code that simultaneously models both rate-and-state friction on a strike-slip fault and off-fault viscoelastic deformation throughout the earthquake cycle in 2D. Because we allow fault slip to evolve with a rate-and-state friction law and do not impose the depth of the brittle-to-ductile transition, we are able to address: the physical processes limiting the depth of large ruptures (with hazard implications); the degree of strain localization with depth; the relative partitioning of fault slip and viscous deformation in the brittle-to-ductile transition zone; and the relative contributions of afterslip and viscous flow to postseismic surface deformation. The method uses a discretization that accommodates variable off-fault material properties, depth-dependent frictional properties, and linear and nonlinear viscoelastic rheologies. All phases of the earthquake cycle are modeled, allowing the model to spontaneously generate earthquakes, and to capture afterslip and postseismic viscous flow. We compare the effects of a linear Maxwell rheology, often used in geodetic models, with those of a nonlinear power law rheology, which laboratory data indicates more accurately represents the lower crust and upper mantle. The viscosity of the Maxwell rheology is set by power law rheological parameters with an assumed a geotherm and strain rate, producing a viscosity that exponentially decays with depth and is constant in time. In contrast, the power law rheology will evolve an effective viscosity that is a function of the temperature profile and the stress state, and therefore varies both spatially and temporally. We will also integrate the energy equation for the thermomechanical problem, capturing frictional heat generation on the fault and off-fault viscous shear heating, and allowing these in turn to alter the effective viscosity.
The earthquake cycle in the San Francisco Bay region: A.D. 1600–2012
Schwartz, David P.; Lienkaemper, James J.; Hecker, Suzanne; Kelson, Keith I.; Fumal, Thomas E.; Baldwin, John N.; Seitz, Gordon G.; Niemi, Tina
2014-01-01
Stress changes produced by the 1906 San Francisco earthquake had a profound effect on the seismicity of the San Francisco Bay region (SFBR), dramatically reducing it in the twentieth century. Whether the SFBR is still within or has emerged from this seismic quiescence is an issue of debate with implications for earthquake mechanics and seismic hazards. Historically, the SFBR has not experienced one complete earthquake cycle (i.e., the accumulation of stress, its release primarily as coseismic slip during surface‐faulting earthquakes, its re‐accumulation in the interval following, and its subsequent rerelease). The historical record of earthquake occurrence in the SFBR appears to be complete at about M 5.5 back to 1850 (Bakun, 1999). For large events, the record may be complete back to 1776, which represents about half a cycle. Paleoseismic data provide a more complete view of the most recent pre‐1906 SFBR earthquake cycle, extending it back to about 1600. Using these, we have developed estimates of magnitude and seismic moment for alternative sequences of surface‐faulting paleoearthquakes occurring between 1600 and 1776 on the region’s major faults. From these we calculate seismic moment and moment release rates for different time intervals between 1600 and 2012. These show the variability in moment release and suggest that, in the SFBR regional plate boundary, stress can be released on a single fault in great earthquakes such as that in 1906 and in multiple ruptures distributed on the regional plate boundary fault system on a decadal time scale.
Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models
NASA Astrophysics Data System (ADS)
Biemiller, J.; Lavier, L. L.; Wallace, L.
2016-12-01
Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
NASA Astrophysics Data System (ADS)
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no EQ (earthquake) cycle simulations, based on RSF (rate and state friction) laws, in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault, and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, give more effects on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of stress relaxation function and slip deficit rate, where we need the past slip rates, leading to huge computational costs. This is a cause for almost no simulations in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, introducing memory variables satisfying 1st order differential equations, we need no hereditary integrals in stress calculation and the computational costs are the same order of those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), in EQ cycle simulations in linear viscoelastic media. In this presentation, first, we introduce our method in EQ cycle simulations and show the effect of the linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by SLS element and we pull, in a constant rate, the block obeying RSF law. In this model, the memory variable stands for the displacement of the dash-pot in SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. The smaller viscosity means the smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with the dip angel of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half
Earthquake models using rate and state friction and fast multipoles
NASA Astrophysics Data System (ADS)
Tullis, T.
2003-04-01
The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearly of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior
NASA Astrophysics Data System (ADS)
Philibosian, B.; Meltzner, A. J.; Sieh, K.
2017-12-01
Understanding earthquake cycle processes is key to both seismic hazard and fault mechanics. A concept that has come into focus recently is that rupture segmentation and cyclicity can be complex, and that simple models of periodically repeating similar earthquakes are inadequate. The term "supercycle" has been used to describe repeating longer periods of strain accumulation that involve multiple fault ruptures. However, this term has become broadly applied, lumping together several distinct phenomena that likely have disparate underlying causes. Earthquake recurrence patterns have often been described as "clustered," but this term is also imprecise. It is necessary to develop a terminology framework that consistently and meaningfully describes all types of behavior that are observed. We divide earthquake cycle patterns into four major classes, each having different implications for seismic hazard and fault mechanics: 1) quasi-periodic similar ruptures, 2) temporally clustered similar ruptures, 3) temporally clustered complementary ruptures, also known as rupture cascades, in which neighboring fault patches fail sequentially, and 4) superimposed cycles in which neighboring fault patches have cycles with different recurrence intervals, but may occasionally rupture together. Rupture segmentation is classified as persistent, frequent, or transient depending on how reliably ruptures terminate in a given area. We discuss the paleoseismic and historical evidence currently available for each of these types of behavior on subduction zone megathrust faults worldwide. Due to the unique level of paleoseismic and paleogeodetic detail provided by the coral microatoll technique, the Sumatran Sunda megathrust provides one of the most complete records over multiple seismic cycles. Most subduction zones with sufficient data exhibit examples of persistent and frequent segmentation, with cycle patterns 1, 3, and 4 on different segments. Pattern 2 is generally confined to overlap zones
Earthquake Potential Models for China
NASA Astrophysics Data System (ADS)
Rong, Y.; Jackson, D. D.
2002-12-01
We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative ``a-value," the slope or ``b-value," and a ``corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/???2 ??? 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step pertubations occur. Transient effects may
Wrightwood and the earthquake cycle: What a long recurrence record tells us about how faults work
Weldon, R.; Scharer, K.; Fumal, T.; Biasi, G.
2004-01-01
The concept of the earthquake cycle is so well established that one often hears statements in the popular media like, "the Big One is overdue" and "the longer it waits, the bigger it will be." Surprisingly, data to critically test the variability in recurrence intervals, rupture displacements, and relationships between the two are almost nonexistent. To generate a long series of earthquake intervals and offsets, we have conducted paleoseismic investigations across the San Andreas fault near the town of Wrightwood, California, excavating 45 trenches over 18 years, and can now provide some answers to basic questions about recurrence behavior of large earthquakes. To date, we have characterized at least 30 prehistoric earthquakes in a 6000-yr-long record, complete for the past 1500 yr and for the interval 3000-1500 B.C. For the past 1500 yr, the mean recurrence interval is 105 yr (31-165 yr for individual intervals) and the mean slip is 3.2 m (0.7-7 m per event). The series is slightly more ordered than random and has a notable cluster of events, during which strain was released at 3 times the long-term average rate. Slip associated with an earthquake is not well predicted by the interval preceding it, and only the largest two earthquakes appear to affect the time interval to the next earthquake. Generally, short intervals tend to coincide with large displacements and long intervals with small displacements. The most significant correlation we find is that earthquakes are more frequent following periods of net strain accumulation spanning multiple seismic cycles. The extent of paleoearthquake ruptures may be inferred by correlating event ages between different sites along the San Andreas fault. Wrightwood and other nearby sites experience rupture that could be attributed to overlap of relatively independent segments that each behave in a more regular manner. However, the data are equally consistent with a model in which the irregular behavior seen at Wrightwood
Quasi-dynamic Earthquake Cycle Simulation in a Viscoelastic Medium with Memory Variables
NASA Astrophysics Data System (ADS)
Hirahara, K.; Ohtani, M.; Shikakura, Y.
2011-12-01
Earthquake cycle simulations based on rate and state friction laws have successfully reproduced the observed complex earthquake cycles at subduction zones. Most of simulations have assumed elastic media. The lower crust and the upper mantle have, however, viscoelastic properties, which cause postseismic stress relaxation. Hence the slip evolution on the plate interfaces or the faults in long earthquake cycles is different from that in elastic media. Especially, the viscoelasticity plays an important role in the interactive occurrence of inland and great interplate earthquakes. In viscoelastic media, the stress is usually calculated by the temporal convolution of the slip response function matrix and the slip deficit rate vector, which needs the past history of slip rates at all cells. Even if properly truncating the convolution, it requires huge computations. This is why few simulation studies have considered viscoelastic media so far. In this study, we examine the method using memory variables or anelastic functions, which has been developed for the time-domain finite-difference calculation of seismic waves in a dissipative medium (e.g., Emmerich and Korn,1987; Moczo and Kristek, 2005). The procedure for stress calculation with memory variables is as follows. First, we approximate the time-domain slip response function calculated in a viscoelastic medium with a series of relaxation functions with coefficients and relaxation times derived from a generalized Maxell body model. Then we can define the time-domain material-independent memory variable or anelastic function for each relaxation mechanism. Each time-domain memory variable satisfies the first-order differential equation. As a result, we can calculate the stress simply by the product of the unrelaxed modulus and the slip deficit subtracted from the sum of memory variables without temporal convolution. With respect to computational cost, we can summarize as in the followings. Dividing the plate interface into
Forecast model for great earthquakes at the Nankai Trough subduction zone
Stuart, W.D.
1988-01-01
An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement and velocity dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. ?? 1988 Birkha??user Verlag.
Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira
2012-01-01
We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~ 4.9 repeating earthquakes. The M~ 4.9 earthquake sequence is composed of nine events that occurred since 1957 which have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~ 4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~ 4.9 main shock is co-located with those of the 1995 and 2001 M~ 4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edge of the slip areas and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edge. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~ 4.9 main shock. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~ 4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. These results show that the stress drop is probably determined by the fault properties and does not change
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTIONThe Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits.Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions.In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative.Statistical testing of hypotheses is a common task and a
Are Earthquake Clusters/Supercycles Real or Random?
NASA Astrophysics Data System (ADS)
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.
2016-12-01
Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so
Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.
2017-12-01
To explain earthquake generation processes, simulation methods of earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost associated with obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response function as in the previous approach. In stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results in a normative three-dimensional problem, where a circular-shaped velocity-weakening area is set in a square-shaped fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number
NASA Astrophysics Data System (ADS)
Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.
2012-02-01
The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term-sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.
Slow slip events in the early part of the earthquake cycle
NASA Astrophysics Data System (ADS)
Voss, Nicholas K.; Malservisi, Rocco; Dixon, Timothy H.; Protti, Marino
2017-08-01
In February 2014 a
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL s supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis on the Internet has never happened before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA s satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
Laboratory constraints on models of earthquake recurrence
NASA Astrophysics Data System (ADS)
Beeler, N. M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian; Goldsby, David
2014-12-01
In this study, rock friction "stick-slip" experiments are used to develop constraints on models of earthquake recurrence. Constant rate loading of bare rock surfaces in high-quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip-rate-dependent process that also determines the size of the stress drop and, as a consequence, stress drop varies weakly but systematically with loading rate. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. The experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a nonlinear slip predictable model. The fault's rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence covary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability, and successive stress drops are strongly correlated indicating a "memory" of prior slip history that extends over at least one recurrence cycle.
Use of GPS and InSAR Technology and its Further Development in Earthquake Modeling
NASA Technical Reports Server (NTRS)
Donnellan, A.; Lyzenga, G.; Argus, D.; Peltzer, G.; Parker, J.; Webb, F.; Heflin, M.; Zumberge, J.
1999-01-01
Global Positioning System (GPS) data are useful for understanding both interseismic and postseismic deformation. Models of GPS data suggest that the lower crust, lateral heterogeneity, and fault slip, all provide a role in the earthquake cycle.
NASA Astrophysics Data System (ADS)
Rotondi, Renata; Varini, Elisa
2016-04-01
The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence in two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones considered as subordinate events. The leader events are assumed to follow a well-known self-correcting point process named stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003, Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by a failure processes that allows bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazard as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
NASA Astrophysics Data System (ADS)
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of the catastrophe models for the insurance industry in order to assure the model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and which could happen in future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers by use of past damage observations in the country. The use of Benouar, 1994 ground motion prediction relationship was proven as the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country specific damage models also included aggregate damage models for residential, commercial and industrial properties considering the description of the buildings stock given by World Housing Encyclopaedia and the local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquake. The calculated return periods of the losses for client market portfolio align with the
NASA Technical Reports Server (NTRS)
Reilinger, Robert
2005-01-01
Our principal activities during the initial phase of this project include: 1) Continued monitoring of postseismic deformation for the 1999 Izmit and Duzce, Turkey earthquakes from repeated GPS survey measurements and expansion of the Marmara Continuous GPS Network (MAGNET), 2) Establishing three North Anatolian fault crossing profiles (10 sitedprofile) at locations that experienced major surface-fault earthquakes at different times in the past to examine strain accumulation as a function of time in the earthquake cycle (2004), 3) Repeat observations of selected sites in the fault-crossing profiles (2005), 4) Repeat surveys of the Marmara GPS network to continue to monitor postseismic deformation, 5) Refining block models for the Marmara Sea seismic gap area to better understand earthquake hazards in the Greater Istanbul area, 6) Continuing development of models for afterslip and distributed viscoelastic deformation for the earthquake cycle. We are keeping close contact with MIT colleagues (Brad Hager, and Eric Hetland) who are developing models for S. California and for the earthquake cycle in general (Hetland, 2006). In addition, our Turkish partners at the Marmara Research Center have undertaken repeat, micro-gravity measurements at the MAGNET sites and have provided us estimates of gravity change during the period 2003 - 2005.
Earthquake triggering, Earth's rotation variations, Meton's cycle and torques acting on the Earth.
NASA Astrophysics Data System (ADS)
Ostrihansky, L.
2012-04-01
In contrast to unsuccessful searching (lasting over 150 years) of correlation of earthquakes with biweekly tides the author found correlation of earthquakes with sidereal 13.66 days Earth's rotation variations expressed as the length of a day (LOD) measured daily by the International Earth's Rotation Service. After short mention about earthquakes Denali Fault Alaska 3rd November 2002, M 7.9, triggered on LOD maximum and Great Sumatra earthquake 26th December 2004 triggered on LOD minimum and the full Moon, the main object of this paper are earthquakes of period 2010-VI. 2011: Haiti M 7.0 Jan. 12, 2010 on LOD minimum, Maule Chile M 8.8 Feb. 12, 2010 on LOD maximum, Sumatra and Andaman Sea region 6 earthquakes revealed from 7 on LOD minimum, New Zealand, Christchurch M 7.1 Sep. 9, 2010 on LOD minimum and Christchurch M 6.3 Feb. 21, 2011 on LOD maximum and Japan Near coast of Honshu M 9.1 March 11, 2011 on LOD minimum. I found that LOD minimums coincide with full or new Moon only twice in a year in solstices and also twice in the year with LOD maximums in equinoxes. To prove that determined coincidences of earthquakes and LOD extremes stated above are not accidental events, histograms were constructed of earthquake occurrence and their position on LOD graph deeply in the past, in some cases from the time the IERS started to measure the Earth's rotation variations in 1962. Evaluation of histograms and the Schuster's test has proven that maxima of earthquakes are triggered always in both Earth's rotation deceleration and acceleration. Backward overview of the past earthquakes revealed that the Great Sumatra earthquake Dec. 26, 2004 had its equivalent in the shape of LOD graph, full Moon position, character of aftershocks, 19 years ago in difference only one day of Dec. 27, 1985 M 6.6, proving that not only sidereal 13.66 days variations but also the 19 years Meton's cycle is the period of the earthquakes occurrence.
Nowcasting Earthquakes and Tsunamis
NASA Astrophysics Data System (ADS)
Rundle, J. B.; Turcotte, D. L.
2017-12-01
The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk
NASA Astrophysics Data System (ADS)
Hampel, Andrea; Hetzel, Ralf
2013-04-01
The friction coefficient is a key parameter for the slip evolution of faults, but how temporal changes in friction affect fault slip is still poorly known. By using three-dimensional numerical models with a thrust fault that is alternately locked and released, we show that variations in the friction coefficient affect both coseismic and long-term fault slip (Hampel and Hetzel, 2012). Decreasing the friction coefficient by 5% while keeping the duration of the interseismic phase constant leads to a four-fold increase in coseismic slip, whereas a 5% increase nearly suppresses slip. A gradual decrease or increase of friction over several earthquake cycles (1-5% per earthquake) considerably alters the cumulative fault slip. In nature, the slip deficit (surplus) resulting from variations in the friction coefficient would presumably be compensated by a longer (shorter) interseismic phase, but the magnitude of the changes required for compensation render variations of the friction coefficient of >5% unlikely. Reference Hampel, A., R. Hetzel (2012) Temporal variation in fault friction and its effects on the slip evolution of a thrust fault over several earthquake cycles. Terra Nova, 24, 357-362, doi: 10.1111/j.1365-3121.2012.01073.x.
Implications of the earthquake cycle for inferring fault locking on the Cascadia megathrust
Pollitz, Fred; Evans, Eileen
2017-01-01
GPS velocity fields in the Western US have been interpreted with various physical models of the lithosphere-asthenosphere system: (1) time-independent block models; (2) time-dependent viscoelastic-cycle models, where deformation is driven by viscoelastic relaxation of the lower crust and upper mantle from past faulting events; (3) viscoelastic block models, a time-dependent variation of the block model. All three models are generally driven by a combination of loading on locked faults and (aseismic) fault creep. Here we construct viscoelastic block models and viscoelastic-cycle models for the Western US, focusing on the Pacific Northwest and the earthquake cycle on the Cascadia megathrust. In the viscoelastic block model, the western US is divided into blocks selected from an initial set of 137 microplates using the method of Total Variation Regularization, allowing potential trade-offs between faulting and megathrust coupling to be determined algorithmically from GPS observations. Fault geometry, slip rate, and locking rates (i.e. the locking fraction times the long term slip rate) are estimated simultaneously within the TVR block model. For a range of mantle asthenosphere viscosity (4.4 × 1018 to 3.6 × 1020 Pa s) we find that fault locking on the megathrust is concentrated in the uppermost 20 km in depth, and a locking rate contour line of 30 mm yr−1 extends deepest beneath the Olympic Peninsula, characteristics similar to previous time-independent block model results. These results are corroborated by viscoelastic-cycle modelling. The average locking rate required to fit the GPS velocity field depends on mantle viscosity, being higher the lower the viscosity. Moreover, for viscosity ≲ 1020 Pa s, the amount of inferred locking is higher than that obtained using a time-independent block model. This suggests that time-dependent models for a range of admissible viscosity structures could refine our knowledge of the locking distribution and its epistemic
Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.
2000-01-01
We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that was used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS) Petersen et al., 1996; Frankel et al., 1996]. On average the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall earthquake rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the
Laboratory constraints on models of earthquake recurrence
Beeler, Nicholas M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian D.; Goldsby, David L.
2014-01-01
In this study, rock friction ‘stick-slip’ experiments are used to develop constraints on models of earthquake recurrence. Constant-rate loading of bare rock surfaces in high quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip rate-dependent process that also determines the size of the stress drop [Dieterich, 1979; Ruina, 1983] and as a consequence, stress drop varies weakly but systematically with loading rate [e.g., Gu and Wong, 1991; Karner and Marone, 2000; McLaskey et al., 2012]. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. As follows from the previous studies referred to above, experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a non-linear slip-predictable model. The fault’s rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence co-vary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability and successive stress drops are strongly correlated indicating a ‘memory’ of prior slip history that extends over at least one recurrence cycle.
Modeling fast and slow earthquakes at various scales
IDE, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138
Modeling fast and slow earthquakes at various scales.
Ide, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.
Earthquakes of the Nepal Himalaya: Towards a physical model of the seismic cycle
NASA Astrophysics Data System (ADS)
Ader, Thomas J.
fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides which gently tug at the planets solid layers, but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back-and-forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes near velocity neutral, the response of the slip rate may be amplified at some periods, which values are analytically related to the physical parameters of the problem. Such predictions therefore hold the potential of constraining fault properties on the MHT, but still await observational counterparts to be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expressions of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasted responses of the seismicity to the periodic loads that tides and monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period get larger, up to a critical period
The finite, kinematic rupture properties of great-sized earthquakes since 1990
NASA Astrophysics Data System (ADS)
Hayes, Gavin P.
2017-06-01
Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called ;moment deficit,; calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of ;earthquake super-cycles; observed in some global subduction zones.
Bayesian exploration of recent Chilean earthquakes
NASA Astrophysics Data System (ADS)
Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah
2016-04-01
The South-American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It is also a playground to develop novel modeling techniques combining different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain better understanding of the distribution of co-seismic slip for those two earthquakes, we derive joint kinematic finite fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing thereby allowing us to maximize spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench with significant slip apparently reaching the trench or at least very close to the trench. At the inherent resolution of our models, we also present the relationship of co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.
Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike
2011-01-01
Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Graham, Shannon E.
Using surface deformation measured by GPS stations within Mexico and Central America, I model coseismic slip, Coulomb stress changes, postseismic afterslip, and slow slip events in order to increase our knowledge of the earthquake deformation cycle in seismically hazardous regions. In Chapter 1, I use GPS data to estimate coseismic slip due to the May 28, 2009 Swan Islands fault earthquake off the coast of Honduras and then use the slip distribution to calculate Coulomb stress changes for the earthquake. Coulomb stress change calculations resolve stress transfer to the seismically hazardous Motagua fault and further show an unclamping of normal faults in northern Honduras. In Chapter 2, the focus shifts to southern Mexico, where continuous GPS measurements since the mid-1990s are revolutionizing our understanding of the flatly subducting Cocos plate. I perform a time-dependent inversion of continuous GPS observations of the 2011-2012 slow slip event (SSE) to estimate the location and magnitude of slow slip preceding the March 20, 2012 Ometepec earthquake. Coulomb stress changes as a result of slip during the SSE are consistent with the hypothesis that the SSE triggered the Ometepec earthquake. Chapter 3 describes inversions for slip both during and after the Ometepec earthquake. Time-dependent modeling of the first six months of postseismic deformation reveals that fault afterslip extended ˜250 km inland to depths of ˜50 km along the Cocos plate subduction. The postseismic afterslip and previous SSEs in southern Mexico occur at similar depths down-dip from the seismogenic zone, indicating that transitional areas of the subduction interface underlie much of southern Mexico. Finally, I perform the first time-dependent modeling of SSEs below Mexico and the first to exploit all available continuous GPS stations in southern and central Mexico. The results provide a more complete and consistent catalog of modeled SSE for the Mexico subduction zone (MSZ) than is
Different Phases of Earthquake Cycle Reflected in GPS Measured Crustal Deformations along the Andes
NASA Astrophysics Data System (ADS)
Khazaradze, G.; Klotz, J.
2001-12-01
largest ever recorded earthquake on the earth. To properly interpret given observations, we developed the fully \\textsc{3D} Andean Elastic Dislocation Model (AEDM), which is used to explain the dominant inter-seismic signal. The subtraction of the AEDM predicted deformation rates from the observations leads towards the "filtered" residual velocity field, that can be used to highlight, for example, the post-seismic deformation effects. Also, in the central section of the SAGA network, the residual velocity field indicates the existence of more long-term (i.e. geologic) deformations. In summary, the changing spatial-temporal pattern of GPS measured crustal deformation rates along the central and southern Andes is governed by the relative importance of different phases of earthquake deformation cycle.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ⁄2, and is ~ ~ 2 ⁄ μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe are key to assessing risk more effectively. Through consortium driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. The year 2013 has seen the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related, but independently managed regional projects SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. The public release of OpenQuake is planned for the end of this 2014, and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic
An interdisciplinary approach for earthquake modelling and forecasting
NASA Astrophysics Data System (ADS)
Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.
2016-12-01
Earthquake is one of the most serious disasters, which may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) becomes extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, now we are able to evaluate the performances of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, the earthquake forecast is still far from satisfactory. In most case, the precursory phenomena were studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate the catalog-based forecast and non-catalog-based forecast. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F
2011-10-04
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period 1 January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.
2011-01-01
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period 1 January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355
Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)
NASA Astrophysics Data System (ADS)
Crowell, B. W.; Bock, Y.; Squibb, M. B.
2010-12-01
Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data
NASA Astrophysics Data System (ADS)
Funning, G. J.; Cockett, R.
2012-12-01
InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median
First Results of the Regional Earthquake Likelihood Models Experiment
Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.
2010-01-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment-a truly prospective earthquake prediction effort-is underway within the U. S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary-the forecasts were meant for an application of 5 years-we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one. ?? 2010 The Author(s).
First Results of the Regional Earthquake Likelihood Models Experiment
NASA Astrophysics Data System (ADS)
Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.
2010-08-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
The 1995 November 22, Mw 7.2 Gulf of Elat earthquake cycle revisited
NASA Astrophysics Data System (ADS)
Baer, Gidon; Funning, Gareth J.; Shamir, Gadi; Wright, Tim J.
2008-12-01
The 1995 November 22, Mw = 7.2 Nuweiba earthquake occurred along one of the left-stepping segments of the Dead Sea Transform (DST) in the Gulf of Elat (Aqaba). It was the largest earthquake along the DST in at least 160 yr. The main shock was preceded by earthquake swarms north and south of its NE-striking rupture since the early 1980s, and was followed by about 6 months of intense aftershock activity, concentrated mainly northwest and southeast of the main rupture. In this study we re-analyse ERS-1 and ERS-2 InSAR data for the period spanning the main shock and 5 post-seismic years. Because the entire rupture was under the Gulf water, surface observations related to the earthquake are limited to distances greater than 5 km away from the rupture zone. Coseismic interferograms were produced for the earthquake +1 week, +4 months and +6 months. Non-linear inversions were carried out for fault geometry and linear inversions were made for slip distribution using an ascending-descending 2-frame data set. The moment calculated from our best-fitting model is in agreement with the seismological moment, but trade-offs exist among several fault parameters. The present model upgrades previous InSAR models of the Nuweiba earthquake, and differs from recent teleseismic waveform inversion results mainly in terms of slip magnitude and distribution. The moment released by post-seismic deformation in the period of 6 months to 2 yr after the Nuweiba earthquake is about 15 per cent of the coseismic moment release. Our models suggest that this deformation can be represented by slip along the lower part of the coseismic rupture. Localised deformation along the Gulf shores NW of the main rupture in the first 6 months after the earthquake is correlated with surface displacements along active Gulf-parallel normal faults and possibly with shallow M > 3.9, D < 6 km aftershocks. The geodetic moment calculated by modelling this deformation is more than an order of magnitude larger than
Toward a comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitate comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. ?? 2009 ASCE.
Surface Rupture Effects on Earthquake Moment-Area Scaling Relations
NASA Astrophysics Data System (ADS)
Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro
2017-09-01
Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment ( M 0) and rupture area ( A) currently in use for ground motion prediction in Japan features a transition regime of the form M 0- A 2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M 0- A scaling relations for strike-slip earthquakes.
NASA Astrophysics Data System (ADS)
Mitsui, Y.; Hirahara, K.
2006-12-01
There have been a lot of studies that simulate large earthquakes occurring quasi-periodically at a subduction zone, based on the laboratory-derived rate-and-state friction law [eg. Kato and Hirasawa (1997), Hirose and Hirahara (2002)]. All of them assume that pore fluid pressure in the fault zone is constant. However, in the fault zone, pore fluid pressure changes suddenly, due to coseismic pore dilatation [Marone (1990)] and thermal pressurization [Mase and Smith (1987)]. If pore fluid pressure drops and effective normal stress rises, fault slip is decelerated. Inversely, if pore fluid pressure rises and effective normal stress drops, fault slip is accelerated. The effect of pore fluid may cause slow slip events and low-frequency tremor [Kodaira et al. (2004), Shelly et al. (2006)]. For a simple spring model, how pore dilatation affects slip instability was investigated [Segall and Rice (1995), Sleep (1995)]. When the rate of the slip becomes high, pore dilatation occurs and pore pressure drops, and the rate of the slip is restrained. Then the inflow of pore fluid recovers the pore pressure. We execute 2D earthquake cycle simulations at a subduction zone, taking into account such changes of pore fluid pressure following Segall and Rice (1995), in addition to the numerical scheme in Kato and Hirasawa (1997). We do not adopt hydrostatic pore pressure but excess pore pressure for initial condition, because upflow of dehydrated water seems to exist at a subduction zone. In our model, pore fluid is confined to the fault damage zone and flows along the plate interface. The smaller the flow rate is, the later pore pressure recovers. Since effective normal stress keeps larger, the fault slip is decelerated and stress drop becomes smaller. Therefore the smaller flow rate along the fault zone leads to the shorter earthquake recurrence time. Thus, not only the frictional parameters and the subduction rate but also the fault zone permeability affects the recurrence time of
FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.
Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.
1985-01-01
The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.
NASA Astrophysics Data System (ADS)
Correa Mora, Francisco
We model surface deformation recorded by GPS stations along the Pacific coasts of Mexico and Central America to estimate the magnitude of and variations in frictional locking (coupling) along the subduction interface, toward a better understanding of seismic hazard in these earthquake-prone regions. The first chapter describes my primary analysis technique, namely 3-dimensional finite element modeling to simulate subduction and bounded-variable inversions that optimize the fit to the GPS velocity field. This chapter focuses on and describes interseismic coupling of the Oaxaca segment of the Mexican subduction zone and introduces an analysis of transient slip events that occur in this region. Our results indicate that coupling is strong within the rupture zone of the 1978 Ms=7.8 Oaxaca earthquake, making this region a potential source of a future large earthquake. However, we also find evidence for significant variations in coupling on the subduction interface over distances of only tens of kilometers, decreasing toward the outer edges of the 1978 rupture zone. In the second chapter, we study in more detail some of the slow slip events that have been recorded over a broad area of southern Mexico, with emphasis on their space-time behavior. Our modeling indicates that transient deformation beneath southern Mexico is focused in two distinct slip patches mostly located downdip from seismogenic areas beneath Guerrero and Oaxaca. Contrary to conclusions reached in one previous study, we find no evidence for a spatial or temporal correlation between transient slip that occurs in these two widely separated source regions. Finally, chapter three extends the modeling techniques to new GPS data in Central America, where subduction coupling is weak or zero and the upper plate deformation is much more complex than in Mexico. Cocos-Caribbean plate convergence beneath El Salvador and Nicaragua is accompanied by subduction and trench-parallel motion of the forearc. Our GPS
Study of the Seismic Cycle of large Earthquakes in central Peru: Lima Region
NASA Astrophysics Data System (ADS)
Norabuena, E. O.; Quiroz, W.; Dixon, T. H.
2009-12-01
Since historical times, the Peruvian subduction zone has been source of large and destructive earthquakes. The more damaging one occurred on May 30 1970 offshore Peru’s northern city of Chimbote with a death toll of 70,000 people and several hundred US million dollars in property damage. More recently, three contiguous plate interface segments in southern Peru completed their seismic cycle generating the 1996 Nazca (Mw 7.1), the 2001 Atico-Arequipa (Mw 8.4) and the 2007 Pisco (Mw 7.9) earthquakes. GPS measurements obtained between 1994-2001 by IGP-CIW an University of Miami-RSMAS on the central Andes of Peru and Bolivia were used to estimate their coseismic displacements and late stage of interseismic strain accumulation. However, we focus our interest in central Peru-Lima region, which with its about 9’000,000 inhabitants is located over a locked plate interface that has not broken with magnitude Mw 8 earthquakes since May 1940, September 1966 and October 1974. We use a network of 11 GPS monuments to estimate the interseismic velocity field, infer spatial variations of interplate coupling and its relation with the background seismicity of the region.
NASA Astrophysics Data System (ADS)
Moernaut, J.; Van Daele, M.; Fontijn, K.; Heirman, K.; Kempf, P.; Pino, M.; Valdebenito, G.; Urrutia, R.; Strasser, M.; De Batist, M.
2018-01-01
Historical and paleoseismic records in south-central Chile indicate that giant earthquakes on the subduction megathrust - such as in AD1960 (Mw 9.5) - reoccur on average every ∼300 yr. Based on geodetic calculations of the interseismic moment accumulation since AD1960, it was postulated that the area already has the potential for a Mw 8 earthquake. However, to estimate the probability of such a great earthquake to take place in the short term, one needs to frame this hypothesis within the long-term recurrence pattern of megathrust earthquakes in south-central Chile. Here we present two long lacustrine records, comprising up to 35 earthquake-triggered turbidites over the last 4800 yr. Calibration of turbidite extent with historical earthquake intensity reveals a different macroseismic intensity threshold (≥VII1/2 vs. ≥VI1/2) for the generation of turbidites at the coring sites. The strongest earthquakes (≥VII1/2) have longer recurrence intervals (292 ±93 yrs) than earthquakes with intensity of ≥VI1/2 (139 ± 69yr). Moreover, distribution fitting and the coefficient of variation (CoV) of inter-event times indicate that the stronger earthquakes recur in a more periodic way (CoV: 0.32 vs. 0.5). Regional correlation of our multi-threshold shaking records with coastal paleoseismic data of complementary nature (tsunami, coseismic subsidence) suggests that the intensity ≥VII1/2 events repeatedly ruptured the same part of the megathrust over a distance of at least ∼300 km and can be assigned to Mw ≥ 8.6. We hypothesize that a zone of high plate locking - identified by geodetic studies and large slip in AD 1960 - acts as a dominant regional asperity, on which elastic strain builds up over several centuries and mostly gets released in quasi-periodic great and giant earthquakes. Our paleo-records indicate that Poissonian recurrence models are inadequate to describe large megathrust earthquake recurrence in south-central Chile. Moreover, they show an enhanced
Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty
Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon
2006-01-01
Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro
2017-08-01
Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated distractive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth dependent rigidity. The method was tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw7.7), the 2001 El Salvador earthquake (Mw7.7), the 2004 El Astillero earthquake (Mw7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw7.3), which occurred off El Salvador and Nicaragua in Central America. The tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method for tsunami early warning purpose should work to estimate a fault model which reproduces tsunami heights near the coast of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
NASA Astrophysics Data System (ADS)
Sobolev, Stephan; Muldashev, Iskander
2016-04-01
The key achievement of the geodynamic modelling community greatly contributed by the work of Evgenii Burov and his students is application of "realistic" mineral-physics based non-linear rheological models to simulate deformation processes in crust and mantle. Subduction being a type example of such process is an essentially multi-scale phenomenon with the time-scales spanning from geological to earthquake scale with the seismic cycle in-between. In this study we test the possibility to simulate the entire subduction process from rupture (1 min) to geological time (Mln yr) with the single cross-scale thermomechanical model that employs elasticity, mineral-physics constrained non-linear transient viscous rheology and rate-and-state friction plasticity. First we generate a thermo-mechanical model of subduction zone at geological time-scale including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce in the same model classic rate-and state friction law in subduction channel, leading to stick-slip instability. This model generates spontaneous earthquake sequence. In order to follow in details deformation process during the entire seismic cycle and multiple seismic cycles we use adaptive time-step algorithm changing step from 40 sec during the earthquake to minute-5 year during postseismic and interseismic processes. We observe many interesting deformation patterns and demonstrate that contrary to the conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge already since hour to day after the great (M>9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-4year time range.
Combining multiple earthquake models in real time for earthquake early warning
Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.
2017-01-01
The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
NASA Astrophysics Data System (ADS)
Ide, Satoshi; Maury, Julie
2018-04-01
Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at the seismological frequencies, the seismic moment rates are significantly different in the geodetic observation. This difference is ascribed to the difference in the characteristic times of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explains recent findings of a near-constant source duration for low-frequency earthquake families.
Modelling the elements of country vulnerability to earthquake disasters.
Asef, M R
2008-09-01
Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often causes an earthquake of a certain characteristic to have different impacts on the affected region. This research focuses on the appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for identification and relative quantification of severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of concerned countries, and successfully explained which countries are more vulnerable to major disasters.
Future WGCEP Models and the Need for Earthquake Simulators
NASA Astrophysics Data System (ADS)
Field, E. H.
2008-12-01
The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).
Thurber, C.; Zhang, H.; Waldhauser, F.; Hardebeck, J.; Michael, A.; Eberhart-Phillips, D.
2006-01-01
We present a new three-dimensional (3D) compressional vvavespeed (V p) model for the Parkfield region, taking advantage of the recent seismicity associated with the 2003 San Simeon and 2004 Parkfield earthquake sequences to provide increased model resolution compared to the work of Eberhart-Phillips and Michael (1993) (EPM93). Taking the EPM93 3D model as our starting model, we invert the arrival-time data from about 2100 earthquakes and 250 shots recorded on both permanent network and temporary stations in a region 130 km northeast-southwest by 120 km northwest-southeast. We include catalog picks and cross-correlation and catalog differential times in the inversion, using the double-difference tomography method of Zhang and Thurber (2003). The principal Vp features reported by EPM93 and Michelini and McEvilly (1991) are recovered, but with locally improved resolution along the San Andreas Fault (SAF) and near the active-source profiles. We image the previously identified strong wavespeed contrast (faster on the southwest side) across most of the length of the SAF, and we also improve the image of a high Vp body on the northeast side of the fault reported by EPM93. This narrow body is at about 5- to 12-km depth and extends approximately from the locked section of the SAP to the town of Parkfield. The footwall of the thrust fault responsible for the 1983 Coalinga earthquake is imaged as a northeast-dipping high wavespeed body. In between, relatively low wavespeeds (<5 km/sec) extend to as much as 10-km depth. We use this model to derive absolute locations for about 16,000 earthquakes from 1966 to 2005 and high-precision double-difference locations for 9,000 earthquakes from 1984 to 2005, and also to determine focal mechanisms for 446 earthquakes. These earthquake locations and mechanisms show that the seismogenic fault is a simple planar structure. The aftershock sequence of the 2004 mainshock concentrates into the same structures defined by the pre-2004 seismicity
Earthquake Forecasting in Northeast India using Energy Blocked Model
NASA Astrophysics Data System (ADS)
Mohapatra, A. K.; Mohanty, D. K.
2009-12-01
In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) for a period 1897 to 2007 is analyzed for Northeast (NE) India. It is one of the most seismically active regions of the world. The occurrence of three great earthquakes like 1897 Shillong plateau earthquake (Mw= 8.7), 1934 Bihar Nepal earthquake with (Mw= 8.3) and 1950 Upper Assam earthquake (Mw= 8.7) signify the possibility of great earthquakes in future from this region. The regional seismicity map for the study region is prepared by plotting the earthquake data for the period 1897 to 2007 from the source like USGS,ISC catalogs, GCMT database, Indian Meteorological department (IMD). Based on the geology, tectonic and seismicity the study region is classified into three source zones such as Zone 1: Arakan-Yoma zone (AYZ), Zone 2: Himalayan Zone (HZ) and Zone 3: Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone, developed by the junction of the Indian Plate and the Eurasian Plate. It shows a dense clustering of earthquake events and the 1908 eastern boundary earthquake. The Himalayan tectonic zone depicts the subduction zone, and the Assam syntaxis. This zone suffered by the great earthquakes like the 1950 Assam, 1934 Bihar and the 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone was affected by major faults like the Dauki fault and exhibits its own style of the prominent tectonic features. The seismicity and hazard potential of Shillong Plateau is distinct from the Himalayan thrust. Using energy blocked model by Tsuboi, the forecasting of major earthquakes for each source zone is estimated. As per the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time and the difference between the supply energy and cumulative energy released for a span of time, is a good indicator of energy blocked and can be utilized for the forecasting of major earthquakes
Modeling of earthquake ground motion in the frequency domain
NASA Astrophysics Data System (ADS)
Thrainsson, Hjortur
In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation
NASA Astrophysics Data System (ADS)
Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq
2015-12-01
We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994) that combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) and time-decaying occurrence rate (Utsu et al., 1995) is conventionally employed for assessing the earthquake hazard after a large shock. However, it is found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock, and then gradually increased up. The observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform on assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. The Receiver Operating Characteristics (ROC) curves (Swets, 1988) finally demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
NASA Astrophysics Data System (ADS)
Strader, Anne; Schorlemmer, Danijel; Beutin, Thomas
2017-04-01
The Global Earthquake Activity Rate Model (GEAR1) is a hybrid seismicity model, constructed from a loglinear combination of smoothed seismicity from the Global Centroid Moment Tensor (CMT) earthquake catalog and geodetic strain rates (Global Strain Rate Map, version 2.1). For the 2005-2012 retrospective evaluation period, GEAR1 outperformed both parent strain rate and smoothed seismicity forecasts. Since 1. October 2015, GEAR1 has been prospectively evaluated by the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center. Here, we present initial one-year test results of the GEAR1, GSRM and GSRM2.1, as well as localized evaluation of GEAR1 performance. The models were evaluated on the consistency in number (N-test), spatial (S-test) and magnitude (M-test) distribution of forecasted and observed earthquakes, as well as overall data consistency (CL-, L-tests). Performance at target earthquake locations was compared between models using the classical paired T-test and its non-parametric equivalent, the W-test, to determine if one model could be rejected in favor of another at the 0.05 significance level. For the evaluation period from 1. October 2015 to 1. October 2016, the GEAR1, GSRM and GSRM2.1 forecasts pass all CSEP likelihood tests. Comparative test results show statistically significant improvement of GEAR1 performance over both strain rate-based forecasts, both of which can be rejected in favor of GEAR1. Using point process residual analysis, we investigate the spatial distribution of differences in GEAR1, GSRM and GSRM2 model performance, to identify regions where the GEAR1 model should be adjusted, that could not be inferred from CSEP test results. Furthermore, we investigate whether the optimal combination of smoothed seismicity and strain rates remains stable over space and time.
GEM - The Global Earthquake Model
NASA Astrophysics Data System (ADS)
Smolka, A.
2009-04-01
Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at- large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a
Characteristics of broadband slow earthquakes explained by a Brownian model
NASA Astrophysics Data System (ADS)
Ide, S.; Takeo, A.
2017-12-01
Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered as a broadband phenomena including tectonic tremors, low frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in geodetic range. Although the concept of broadband slow earthquake may not have been widely accepted, most of recent observations are consistent with this concept. Then, we review the characteristics of slow earthquakes and how they are explained by BSE model. In BSE model, the characteristic size of slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For a shorter duration, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in BSE model can be non-Gaussian, as far as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic
NASA Astrophysics Data System (ADS)
Johnson, C. W.; Burgmann, R.; Fu, Y.; Dutilleul, P.
2015-12-01
In California the accumulated winter snow pack in the Sierra Nevada, reservoirs and groundwater water storage in the Central Valley follow an annual periodic cycle and each contribute to the resulting surface deformation, which can be observed using GPS time series. The ongoing drought conditions in the western U.S. amplify the observed uplift signal as the Earth's crust responds to the mass changes associated with the water loss. The near surface hydrological mass loss can result in annual stress changes of ~1kPa at seismogenic depths. Similarly, small static stress perturbations have previously been associated with changes in earthquake activity. Periodicity analysis of earthquake catalog time series suggest that periods of 4-, 6-, 12-, and 14.24-months are statistically significant in regions of California, and provide documentation for the modulation of earthquake populations at periods of natural loading cycles. Knowledge of what governs the timing of earthquakes is essential to understanding the nature of the earthquake cycle. If small static stress changes influence the timing of earthquakes, then one could expect that events will occur more rapidly during periods of greater external load increases. To test this hypothesis we develop a loading model using GPS derived surface water storage for California and calculate the stress change at seismogenic depths for different faulting geometries. We then evaluate the degree of correlation between the stress models and the seismicity taking into consideration the variable amplitude of stress cycles, the orientation of transient load stress with respect to the background stress field, and the geometry of active faults revealed by focal mechanisms.
An empirical model for global earthquake fatality estimation
Jaiswal, Kishor; Wald, David
2010-01-01
We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as total killed divided by total population exposed at specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rates for that level and then summing them at all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or a region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
The failure of earthquake failure models
Gomberg, J.
2001-01-01
In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differs or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.
Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Kubo, H.
2013-12-01
It is a quite important issue for strong ground motion prediction to construct the source model of huge subduction earthquakes. Iwata and Asano (2012, AGU) summarized the scaling relationships of large slip area of heterogeneous slip model and total SMGA sizes on seismic moment for subduction earthquakes and found the systematic change between the ratio of SMGA to the large slip area and the seismic moment. They concluded this tendency would be caused by the difference of period range of source modeling analysis. In this paper, we try to construct the methodology of construction of the source model for strong ground motion prediction for huge subduction earthquakes. Following to the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce the proto-type of the source model for huge subduction earthquakes and validate the source model by strong ground motion modeling.
The finite, kinematic rupture properties of great-sized earthquakes since 1990
Hayes, Gavin
2017-01-01
Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques.I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.
The repetition of large-earthquake ruptures.
Sieh, K
1996-01-01
This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. Images Fig. 3 Fig. 7 Fig. 9 PMID:11607662
Earthquakes: Predicting the unpredictable?
Hough, Susan E.
2005-01-01
The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are likewise fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: A moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" in not only polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.
A Model for Low-Frequency Earthquake Slip
NASA Astrophysics Data System (ADS)
Chestler, S. R.; Creager, K. C.
2017-12-01
Using high-resolution relative low-frequency earthquake (LFE) locations, we calculate the patch areas (Ap) of LFE families. During episodic tremor and slip (ETS) events, we define AT as the area that slips during LFEs and ST as the total amount of summed LFE slip. Using observed and calculated values for AP, AT, and ST, we evaluate two end-member models for LFE slip within an LFE family patch. In the ductile matrix model, LFEs produce 100% of the observed ETS slip (SETS) in distinct subpatches (i.e., AT ≪ AP). In the connected patch model, AT = AP, but ST ≪ SETS. LFEs cluster into 45 LFE families. Spatial gaps (˜10 to 20 km) between LFE family clusters and smaller gaps within LFE family clusters serve as evidence that LFE slip is heterogeneous on multiple spatial scales. We find that LFE slip only accounts for ˜0.2% of the slip within the slow slip zone. There are depth-dependent trends in the characteristic (mean) moment and in the number of LFEs during both ETS events (only) and the entire ETS cycle (Mcets and NTets and Mcall and NTall, respectively). During ETS, Mc decreases with downdip distance but NT does not change. Over the entire ETS cycle, Mc decreases with downdip distance, but NT increases. These observations indicate that deeper LFE slip occurs through a larger number (800-1,200) of small LFEs, while updip LFE slip occurs primarily during ETS events through a smaller number (200-600) of larger LFEs. This could indicate that the plate interface is stronger and has a higher stress threshold updip.
Control strategy to limit duty cycle impact of earthquakes on the LIGO gravitational-wave detectors
NASA Astrophysics Data System (ADS)
Biscans, S.; Warner, J.; Mittleman, R.; Buchanan, C.; Coughlin, M.; Evans, M.; Gabbard, H.; Harms, J.; Lantz, B.; Mukund, N.; Pele, A.; Pezerat, C.; Picart, P.; Radkins, H.; Shaffer, T.
2018-03-01
Advanced gravitational-wave detectors such as the laser interferometer gravitational-wave observatories (LIGO) require an unprecedented level of isolation from the ground. When in operation, they measure motion of less than 10‑19 m. Strong teleseismic events like earthquakes disrupt the proper functioning of the detectors, and result in a loss of data. An earthquake early-warning system, as well as a prediction model, have been developed to understand the impact of earthquakes on LIGO. This paper describes a control strategy to use this early-warning system to reduce the LIGO downtime by ∼30%. It also presents a plan to implement this new earthquake configuration in the LIGO automation system.
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events ( M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
Human casualties in earthquakes: Modelling and mitigation
Spence, R.J.S.; So, E.K.M.
2011-01-01
Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.
Instability model for recurring large and great earthquakes in southern California
Stuart, W.D.
1985-01-01
The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since a.d. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. ?? 1985 Birkha??user Verlag.
Earthquake recurrence and risk assessment in circum-Pacific seismic gaps
Thatcher, W.
1989-01-01
THE development of the concept of seismic gaps, regions of low earthquake activity where large events are expected, has been one of the notable achievements of seismology and plate tectonics. Its application to long-term earthquake hazard assessment continues to be an active field of seismological research. Here I have surveyed well documented case histories of repeated rupture of the same segment of circum-Pacific plate boundary and characterized their general features. I find that variability in fault slip and spatial extent of great earthquakes rupturing the same plate boundary segment is typical rather than exceptional but sequences of major events fill identified seismic gaps with remarkable order. Earthquakes are concentrated late in the seismic cycle and occur with increasing size and magnitude. Furthermore, earthquake rup-ture starts near zones of concentrated moment release, suggesting that high-slip regions control the timing of recurrent events. The absence of major earthquakes early in the seismic cycle indicates a more complex behaviour for lower-slip regions, which may explain the observed cycle-to-cycle diversity of gap-filling sequences. ?? 1989 Nature Publishing Group.
Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke
2016-05-10
We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is so-called the Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.
Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke
2016-01-01
We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is so-called the Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011. PMID:27161897
Modeling the behavior of an earthquake base-isolated building.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coveney, V. A.; Jamil, S.; Johnson, D. E.
1997-11-26
Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.
Understanding and responding to earthquake hazards
NASA Technical Reports Server (NTRS)
Raymond, C. A.; Lundgren, P. R.; Madsen, S. N.; Rundle, J. B.
2002-01-01
Advances in understanding of the earthquake cycle and in assessing earthquake hazards is a topic of great importance. Dynamic earthquake hazard assessments resolved for a range of spatial scales and time scales will allow a more systematic approach to prioritizing the retrofitting of vulnerable structures, relocating populations at risk, protecting lifelines, preparing for disasters, and educating the public.
Interevent times in a new alarm-based earthquake forecasting model
NASA Astrophysics Data System (ADS)
Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed
2013-09-01
This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing forecasting error defined by miss and alarm rate addition. This testing indicates that the MR forecasting technique performs well at long-, intermediate- and short-term. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
Viscoelastic-cycle model of interseismic deformation in the northwestern United States
Pollitz, F.F.; McCrory, Patricia; Wilson, Doug; Svarc, Jerry; Puskas, Christine; Smith, Robert B.
2010-01-01
We apply a viscoelastic cycle model to a compilation of GPS velocity fields in order to address the kinematics of deformation in the northwestern United States. A viscoelastic cycle model accounts for time-dependent deformation following large crustal earthquakes and is an alternative to block models for explaining the interseismic crustal velocity field. Building on the approach taken in Pollitz et al., we construct a deformation model for the entire western United States-based on combined fault slip and distributed deformation-and focus on the implications for the Mendocino triple junction (MTJ), Cascadia megathrust, and western Washington. We find significant partitioning between strike-slip and dip-slip motion near the MTJ as the tectonic environment shifts from northwest-directed shear along the San Andreas fault system to east-west convergence along the Juan de Fuca Plate. By better accounting for the budget of aseismic and seismic slip along the Cascadia subduction interface in conjunction with an assumed rheology, we revise a previous model of slip for the M~ 9 1700 Cascadia earthquake. In western Washington, we infer slip rates on a number of strike-slip and dip-slip faults that accommodate northward convergence of the Oregon Coast block and northwestward convergence of the Juan de Fuca Plate. Lateral variations in first order mechanical properties (e.g. mantle viscosity, vertically averaged rigidity) explain, to a large extent, crustal strain that cannot be rationalized with cyclic deformation on a laterally homogeneous viscoelastic structure. Our analysis also shows that present crustal deformation measurements, particularly with the addition of the Plate Boundary Observatory, can constrain such lateral variations.
Stochastic dynamic modeling of regular and slow earthquakes
NASA Astrophysics Data System (ADS)
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by a (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results on the condition. However, even though we discretize the model space with small grids, heterogeneity at smaller scales than the grid size is not considered in the models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve mode III problem, which corresponds to the rupture propagation along the strike direction. We use BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations in the governing equations, which is usually written in a deterministic manner. As the simplest type of perturbations, we adopt Gaussian deviations in the formulation of the slip-stress kernel, external force, and friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce complicated rupture process of regular earthquakes including unilateral and bilateral ruptures. By perturbing external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interaction at S-wave velocity is analogous to the kinetic theory of gasses: thermal
Modeling the Fluid Withdraw and Injection Induced Earthquakes
NASA Astrophysics Data System (ADS)
Meng, C.
2016-12-01
We present an open source numerical code, Defmod, that allows one to model the induced seismicity in an efficient and standalone manner. The fluid withdraw and injection induced earthquake has been a great concern to the industries including oil/gas, wastewater disposal and CO2 sequestration. Being able to numerically model the induced seismicity is long desired. To do that, one has to consider at lease two processes, a steady process that describes the inducing and aseismic stages before and in between the seismic events, and an abrupt process that describes the dynamic fault rupture accompanied by seismic energy radiations during the events. The steady process can be adequately modeled by a quasi-static model, while the abrupt process has to be modeled by a dynamic model. In most of the published modeling works, only one of these processes is considered. The geomechanicists and reservoir engineers are focused more on the quasi-static modeling, whereas the geophysicists and seismologists are focused more on the dynamic modeling. The finite element code Defmod combines these two models into a hybrid model that uses the failure criterion and frictional laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loading, e.g. due to reservoir fluid withdraw and/or injection, and by dynamic loading, e.g. due to the foregoing earthquakes. We demonstrate a case study for the 2013 Azle earthquake.
Foreshock and aftershocks in simple earthquake models.
Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R
2015-02-27
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
NASA Astrophysics Data System (ADS)
Penserini, Brian D.; Roering, Joshua J.; Streig, Ashley
2017-04-01
In unglaciated steeplands, valley reaches dominated by debris flow scour and incision set landscape form as they often account for > 80% of valley network length and relief. While hillslope and fluvial process models have frequently been combined with digital topography to develop morphologic proxies for erosion rate and drainage divide migration, debris-flow-dominated networks, despite their ubiquity, have not been exploited for this purpose. Here, we applied an empirical function that describes how slope-area data systematically deviate from so-called fluvial power-law behavior at small drainage areas. Using airborne LiDAR data for 83 small ( 1 km2) catchments in the western Oregon Coast Range, we quantified variation in model parameters and observed that the curvature of the power-law scaling deviation varies with catchment-averaged erosion rate estimated from cosmogenic nuclides in stream sediments. Given consistent climate and lithology across our study area and assuming steady erosion, we used this calibrated denudation-morphology relationship to map spatial patterns of long-term uplift for our study catchments. By combining our predicted pattern of long-term uplift rate with paleoseismic and geodetic (tide gauge, GPS, and leveling) data, we estimated the spatial distribution of coseismic subsidence experienced during megathrust earthquakes along the Cascadia Subduction Zone. Our estimates of coseismic subsidence near the coast (0.4 to 0.7 m for earthquake recurrence intervals of 300 to 500 years) agree with field measurements from numerous stratigraphic studies. Our results also demonstrate that coseismic subsidence decreases inland to negligible values > 25 km from the coast, reflecting the diminishing influence of the earthquake deformation cycle on vertical changes of the interior coastal ranges. More generally, our results demonstrate that debris flow valley networks serve as highly localized, yet broadly distributed indicators of erosion (and rock
NASA Astrophysics Data System (ADS)
Taylor, F. W.; Lavier, L. L.; Bevis, M. G.; Thirumalai, K.; Frohlich, C. A.
2012-12-01
Over million-year time scales, what is the relationship between the meter-scale vertical displacements that occur in individual large subduction-zone earthquakes, and the observed topography and geology of island arcs? Because the geographic distribution of vertical displacements associated with the earthquake cycle sometimes mimics topography, it is tempting to assume that vertical deformation simply accrues as the coseismic part of the cycle that is preserved from one event to another. However, our research in the Central New Hebrides and Western Solomon arcs demonstrates that truly permanent tectonic deformation is a step farther removed from the earthquake cycle than we originally assumed. By precisely dating of coral reef terraces we are able to evaluate vertical deformation over time scales of 10,000 to 100,000 years. This analysis indicates that these arcs undergo episodes of hundreds of meters of subsidence and uplift over time scales of tens of thousands of years. Thus what remains in the geologic record is potentially providing invaluable information about more fundamental processes than the elastic earthquake cycle. These longer-term episodes of vertical motion may act in many arcs throughout the world, but evidence of them may be poorly preserved outside of tropical regions where corals along island coastlines provide a record of their occurrence.In our presentation we will describe the tectonic behavior observed in the Central New Hebrides and Western Solomons. We will speculate about some possible mechanisms that explain how the subduction process generates longer-term episodes of subsidence and uplift, and make suggestions about future observations that could better constrain the nature of these processes.
NASA Astrophysics Data System (ADS)
Kubota, T.; Hino, R.; Inazu, D.; Saito, T.; Iinuma, T.; Suzuki, S.; Ito, Y.; Ohta, Y.; Suzuki, K.
2012-12-01
We estimated source models of small amplitude tsunami associated with M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake using near-field records of tsunami recorded by ocean bottom pressure gauges (OBPs). The largest (Mw=7.3) foreshock of the Tohoku-Oki earthquake, occurred on 9 Mar., two days before the mainshock. Tsunami associated with the foreshock was clearly recorded by seven OBPs, as well as coseismic vertical deformation of the seafloor. Assuming a planer fault along the plate boundary as a source, the OBP records were inverted for slip distribution. As a result, the most of the coseismic slip was found to be concentrated in the area of about 40 x 40 km in size and located to the north-west of the epicenter, suggesting downdip rupture propagation. Seismic moment of our tsunami waveform inversion is 1.4 x 10^20 Nm, equivalent to Mw 7.3. On 2011 July 10th, an earthquake of Mw 7.0 occurred near the hypocenter of the mainshock. Its relatively deep focus and strike-slip focal mechanism indicate that this earthquake was an intraslab earthquake. The earthquake was associated with small amplitude tsunami. By using the OBP records, we estimated a model of the initial sea-surface height distribution. Our tsunami inversion showed that a pair of uplift/subsiding eyeballs was required to explain the observed tsunami waveform. The spatial pattern of the seafloor deformation is consistent with the oblique strike-slip solution obtained by the seismic data analyses. The location and strike of the hinge line separating the uplift and subsidence zones correspond well to the linear distribution of the aftershock determined by using local OBS data (Obana et al., 2012).
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters by the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault-rupture to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed time since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models, to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.
NASA Astrophysics Data System (ADS)
Kaneda, Y.; Kawaguchi, K.; Araki, E.; Matsumoto, H.; Nakamura, T.; Nakano, M.; Kamiya, S.; Ariyoshi, K.; Baba, T.; Ohori, M.; Hori, T.; Takahashi, N.; Kaneko, S.; Donet Research; Development Group
2010-12-01
assimilation method using DONET data is very important to improve the recurrence cycle simulation model. 5) Understanding of the interaction between the crust and upper mantle around the Nankai trough subduction zone. We will deploy DONET not only in the Tonankai seismogenic zone but also DONET2 with high voltages in the Nankai seismogenic zone western the Nankai trough: The total system will be deployed to understand the seismic linkage between the Tonankai and Nankai earthquakes: Using DONET and DONET2 data, we will be able to observe the crustal activities and before and after slips at the Tonankai earthquake and Nankai earthquake. And we will improve the recurrence cycle simulation model by the advanced data assimilation method. Actually, we constructed one observatory in DONET and observed some earthquakes and tsunamis. We will introduce details of DONET/DONET2 and some observed data.
Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.
2012-12-01
It is a quite important issue for strong ground motion prediction to construct the source model of huge subduction earthquakes. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of plural strong ground motion generation area (SMGA, Miyake et al., 2003) patches on the source fault. We obtained the SMGA source models for many events using the empirical Green's function method and found the SMGA size has an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be assumed from that empirical relation under giving the seismic moment for anticipated earthquakes. Concerning to the setting of the SMGAs position, the information of the fault segment is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches are obtained and each Nojima, Suma, and Suwayama segment respectively has one SMGA from the SMGA modeling (e.g. Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. Total SMGA area follows the extension of the empirical scaling relationship between the seismic moment and the SMGA area for subduction plate-boundary earthquakes, and it shows the applicability of the empirical scaling relationship for the SMGA. The positions of two SMGAs are in Miyagi-Oki segment and those other two SMGAs are in Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all SMGAs are corresponding to the historical source areas of 1930's. Those SMGAs do not overlap the huge slip area in the shallower part of the source fault which estimated by teleseismic data, long-period strong motion data, and/or geodetic data during the 2011 mainshock. This fact shows the huge slip area does not contribute to strong ground motion generation (10-0.1s). The information of the fault segment in the subduction zone, or
Assessing a 3D smoothed seismicity model of induced earthquakes
NASA Astrophysics Data System (ADS)
Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan
2016-04-01
As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
NASA Astrophysics Data System (ADS)
Bie, L.; Rietbrock, A.; Agurto-Detzel, H.
2017-12-01
The forearc region in subduction zones deforms in response to relative movement on the plate interface throughout the earthquake cycle. Megathrust earthquakes may alter the stress field in the forearc areas from compression to extension, resulting in normal faulting earthquakes. Recent cases include the 2011 Iwaki sequence following the Tohoku-Oki earthquake in Japan, and 2010 Pichilemu sequence after the Maule earthquake in central Chile. Given the closeness of these normal fault events to residential areas, and their shallow depth, they may pose equivalent, if not higher, seismic risk in comparison to earthquakes on the megathrust. Here, we focus on the 2010 Pichilemu sequence following the Mw 8.8 Maule earthquake in central Chile, where the Nazca Plate subducts beneath the South American Plate. Previous studies have clearly delineated the Pichilemu normal fault structure. However, it is not clear whether the Pichilemu events fully released the extensional stress exerted by the Maule mainshock, or the forearc area is still controlled by extensional stress. A 3 months displacement time-series, constructed by radar satellite images, clearly shows continuous aseismic deformation along the Pichilemu fault. Kinematic inversion reveals peak afterslip of 25 cm at shallow depth, equivalent to a Mw 5.4 earthquake. We identified a Mw 5.3 earthquake 2 months after the Pichilemu sequence from both geodetic and seismic observations. Nonlinear inversion from geodetic data suggests that this event ruptured a normal fault conjugate to the Pichilemu fault, at a depth of 4.5 km, consistent with the result obtained from independent moment tensor inversion. We relocated aftershocks in the Pichilemu area using relative arrivals time and a 3D velocity model. The spatial correlation between geodetic deformation and aftershocks reveals three additional areas which may have experienced aseismic slip at depth. Both geodetic displacement and aftershock distribution show a conjugated L
Earthquake and tsunami forecasts: Relation of slow slip events to subsequent earthquake rupture
Dixon, Timothy H.; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor
2014-01-01
The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr–Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327
Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.
Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor
2014-12-02
The 5 September 2012 M(w) 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.
Moon Connection with MEGA and Giant Earthquakes in Subduction Zones during One Solar Cycle
NASA Astrophysics Data System (ADS)
Hagen, M. T.; Azevedo, A. T.
2016-12-01
We investigated in this paper the possible influences of the moon on earthquakes during one Solar cycle. The Earth - Moon gravitational force produces a variation in the perigee force that may trigger seismological events. The oscillation force creates a wave that is generated by the moon rotation around the earth, which takes a month. The wave complete a cycle after 13- 14 months in average and the period is roughly 5400 hours as calculated. The major moon phases which are New and Full Moon is when the perigee force is stronger. The Solar Wind charges the Moon during the New phases. The plasmasphere charges the satellite during the Full Moon. Both create the Spring Tides what affects mostly the subduction zones connected with the Mega and Giant events in Pacific areas. Moon - Earth connections are resilient in locations with convergent tectonic plates. Inserted:
Earthquake Clustering in Noisy Viscoelastic Systems
NASA Astrophysics Data System (ADS)
Dicaprio, C. J.; Simons, M.; Williams, C. A.; Kenner, S. J.
2006-12-01
Geologic studies show evidence for temporal clustering of earthquakes on certain fault systems. Since post- seismic deformation may result in a variable loading rate on a fault throughout the inter-seismic period, it is reasonable to expect that the rheology of the non-seismogenic lower crust and mantle lithosphere may play a role in controlling earthquake recurrence times. Previously, the role of rheology of the lithosphere on the seismic cycle had been studied with a one-dimensional spring-dashpot-slider model (Kenner and Simons [2005]). In this study we use the finite element code PyLith to construct a two-dimensional continuum model a strike-slip fault in an elastic medium overlying one or more linear Maxwell viscoelastic layers loaded in the far field by a constant velocity boundary condition. Taking advantage of the linear properties of the model, we use the finite element solution to one earthquake as a spatio-temporal Green's function. Multiple Green's function solutions, scaled by the size of each earthquake, are then summed to form an earthquake sequence. When the shear stress on the fault reaches a predefined yield stress it is allowed to slip, relieving all accumulated shear stress. Random variation in the fault yield stress from one earthquake to the next results in a temporally clustered earthquake sequence. The amount of clustering depends on a non-dimensional number, W, called the Wallace number. For models with one viscoelastic layer, W is equal to the standard deviation of the earthquake stress drop divided by the viscosity times the tectonic loading rate. This definition of W is modified from the original one used in Kenner and Simons [2005] by using the standard deviation of the stress drop instead of the mean stress drop. We also use a new, more appropriate, metric to measure the amount of temporal clustering of the system. W is the ratio of the viscoelastic relaxation rate of the system to the tectonic loading rate of the system. For values of
Intraplate triggered earthquakes: Observations and interpretation
Hough, S.E.; Seeber, L.; Armbruster, J.G.
2003-01-01
We present evidence that at least two of the three 1811-1812 New Madrid, central United States, mainshocks and the 1886 Charleston, South Carolina, earthquake triggered earthquakes at regional distances. In addition to previously published evidence for triggered earthquakes in the northern Kentucky/southern Ohio region in 1812, we present evidence suggesting that triggered events might have occurred in the Wabash Valley, to the south of the New Madrid Seismic Zone, and near Charleston, South Carolina. We also discuss evidence that earthquakes might have been triggered in northern Kentucky within seconds of the passage of surface waves from the 23 January 1812 New Madrid mainshock. After the 1886 Charleston earthquake, accounts suggest that triggered events occurred near Moodus, Connecticut, and in southern Indiana. Notwithstanding the uncertainty associated with analysis of historical accounts, there is evidence that at least three out of the four known Mw 7 earthquakes in the central and eastern United States seem to have triggered earthquakes at distances beyond the typically assumed aftershock zone of 1-2 mainshock fault lengths. We explore the possibility that remotely triggered earthquakes might be common in low-strain-rate regions. We suggest that in a low-strain-rate environment, permanent, nonelastic deformation might play a more important role in stress accumulation than it does in interplate crust. Using a simple model incorporating elastic and anelastic strain release, we show that, for realistic parameter values, faults in intraplate crust remain close to their failure stress for a longer part of the earthquake cycle than do faults in high-strain-rate regions. Our results further suggest that remotely triggered earthquakes occur preferentially in regions of recent and/or future seismic activity, which suggests that faults are at a critical stress state in only some areas. Remotely triggered earthquakes may thus serve as beacons that identify regions of
Redefining Earthquakes and the Earthquake Machine
ERIC Educational Resources Information Center
Hubenthal, Michael; Braile, Larry; Taber, John
2008-01-01
The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulatingmore » earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.« less
Modelling earthquake ruptures with dynamic off-fault damage
NASA Astrophysics Data System (ADS)
Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban
2017-04-01
Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion with given a priori constraints like fault geometry, constitutive law of the medium and friction law operating on the fault. It is necessary to consider all of the above complexities of a fault systems to conduct realistic earthquake rupture modelling. In addition to the complexity of the fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role on the resultant ground motion and its spectrum compared to a model with simple planer fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State of the art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry such as fault branches and kinks and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric scale, corresponding to the off-fault damage and complex fault geometry respectively. We used the FDEM-based software tool called HOSSedu (Hybrid Optimization Software Suite - Educational Version) for the earthquake rupture modelling, which was developed by Los Alamos National Laboratory. We firstly conducted the cross-validation of this new methodology against other conventional numerical schemes such as the finite difference method (FDM), the spectral element method (SEM) and the boundary integral equation method (BIEM), to evaluate the accuracy with various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for
From Geodetic Imaging of Seismic and Aseismic Fault Slip to Dynamic Modeling of the Seismic Cycle
NASA Astrophysics Data System (ADS)
Avouac, Jean-Philippe
2015-05-01
Understanding the partitioning of seismic and aseismic fault slip is central to seismotectonics as it ultimately determines the seismic potential of faults. Thanks to advances in tectonic geodesy, it is now possible to develop kinematic models of the spatiotemporal evolution of slip over the seismic cycle and to determine the budget of seismic and aseismic slip. Studies of subduction zones and continental faults have shown that aseismic creep is common and sometimes prevalent within the seismogenic depth range. Interseismic coupling is generally observed to be spatially heterogeneous, defining locked patches of stress accumulation, to be released in future earthquakes or aseismic transients, surrounded by creeping areas. Clay-rich tectonites, high temperature, and elevated pore-fluid pressure seem to be key factors promoting aseismic creep. The generally logarithmic time evolution of afterslip is a distinctive feature of creeping faults that suggests a logarithmic dependency of fault friction on slip rate, as observed in laboratory friction experiments. Most faults can be considered to be paved with interlaced patches where the friction law is either rate-strengthening, inhibiting seismic rupture propagation, or rate-weakening, allowing for earthquake nucleation. The rate-weakening patches act as asperities on which stress builds up in the interseismic period; they might rupture collectively in a variety of ways. The pattern of interseismic coupling can help constrain the return period of the maximum- magnitude earthquake based on the requirement that seismic and aseismic slip sum to match long-term slip. Dynamic models of the seismic cycle based on this conceptual model can be tuned to reproduce geodetic and seismological observations. The promise and pitfalls of using such models to assess seismic hazard are discussed.
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the
Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Michael, A. J.
2017-12-01
The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)
NASA Astrophysics Data System (ADS)
Jordan, T. H.
2009-12-01
Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior—an ambitious enterprise that requires high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.
The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?
Kilb, Debi; Gomberg, J.
1999-01-01
We examine the initial subevent (ISE) of the M?? 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.
NASA Astrophysics Data System (ADS)
Ayele, Atalay; Midzi, Vunganai; Ateba, Bekoa; Mulabisana, Thifhelimbilu; Marimira, Kwangwari; Hlatywayo, Dumisani J.; Akpan, Ofonime; Amponsah, Paulina; Georges, Tuluka M.; Durrheim, Ray
2013-04-01
Large magnitude earthquakes have been observed in Sub-Saharan Africa in the recent past, such as the Machaze event of 2006 (Mw, 7.0) in Mozambique and the 2009 Karonga earthquake (Mw 6.2) in Malawi. The December 13, 1910 earthquake (Ms = 7.3) in the Rukwa rift (Tanzania) is the largest of all instrumentally recorded events known to have occurred in East Africa. The overall earthquake hazard in the region is on the lower side compared to other earthquake prone areas in the globe. However, the risk level is high enough for it to receive attention of the African governments and the donor community. The latest earthquake hazard map for the sub-Saharan Africa was done in 1999 and updating is long overdue as several development activities in the construction industry is booming allover sub-Saharan Africa. To this effect, regional seismologists are working together under the GEM (Global Earthquake Model) framework to improve incomplete, inhomogeneous and uncertain catalogues. The working group is also contributing to the UNESCO-IGCP (SIDA) 601 project and assessing all possible sources of data for the catalogue as well as for the seismotectonic characteristics that will help to develop a reasonable hazard model in the region. In the current progress, it is noted that the region is more seismically active than we thought. This demands the coordinated effort of the regional experts to systematically compile all available information for a better output so as to mitigate earthquake risk in the sub-Saharan Africa.
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.
2008-01-01
We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and amplitude and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, under predict shaking intensities by about one-half modified Mercalli intensity (MMI) units (25%-35% in peak velocity), while our broadband simulations, on average, under predict the shaking intensities by one-fourth MMI units (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
Time-dependent geoid anomalies at subduction zones due to the seismic cycle
NASA Astrophysics Data System (ADS)
Cambiotti, G.; Sabadini, R.; Yuen, D. A.
2018-01-01
We model the geoid anomalies excited during a megathrust earthquake cycle at subduction zones, including the interseismic phase and the contribution from the infinite series of previous earthquakes, within the frame of self-gravitating, spherically symmetric, compressible, viscoelastic Earth models. The fault cuts the whole 50 km lithosphere, dips 20°, and the slip amplitude, together with the length of the fault, are chosen in order to simulate an Mw = 9.0 earthquake, while the viscosity of the 170 km thick asthenosphere ranges from 1017 to 1020 Pa s. On the basis of a new analysis from the Correspondence Principle, we show that the geoid anomaly is characterized by a periodic anomaly due to the elastic and viscous contribution from past earthquakes and to the back-slip of the interseismic phase, and by a smaller static contribution from the steady-state response to the previous infinite earthquake cycles. For asthenospheric viscosities from 1017-1018 to 1019-1020 Pa s, the characteristic relaxation times of the Earth model change from shorter to longer timescales compared to the 400 yr earthquake recurrence time, which dampen the geoid anomaly for the higher asthenospheric viscosities, since the slower relaxation cannot contribute its whole strength within the interseismic cycle. The geoid anomaly pattern is characterized by a global, time-dependent positive upwarping of the geoid topography, involving the whole hanging wall and partially the footwall compared to the sharper elastic contribution, attaining, for a moment magnitude Mw = 9.0, amplitudes as high as 6.6 cm for the lowermost asthenospheric viscosities during the viscoelastic response compared to the elastic maximum of 3.8 cm. The geoid anomaly vanishes due to the back-slip of the interseismic phase, leading to its disappearance at the end of the cycle before the next earthquake. Our results are of importance for understanding the post-seismic and interseismic geoid patterns at subduction zones.
Pre-Earthquake Unipolar Electromagnetic Pulses
NASA Astrophysics Data System (ADS)
Scoville, J.; Freund, F.
2013-12-01
Transient ultralow frequency (ULF) electromagnetic (EM) emissions have been reported to occur before earthquakes [1,2]. They suggest powerful transient electric currents flowing deep in the crust [3,4]. Prior to the M=5.4 Alum Rock earthquake of Oct. 21, 2007 in California a QuakeFinder triaxial search-coil magnetometer located about 2 km from the epicenter recorded unusual unipolar pulses with the approximate shape of a half-cycle of a sine wave, reaching amplitudes up to 30 nT. The number of these unipolar pulses increased as the day of the earthquake approached. These pulses clearly originated around the hypocenter. The same pulses have since been recorded prior to several medium to moderate earthquakes in Peru, where they have been used to triangulate the location of the impending earthquakes [5]. To understand the mechanism of the unipolar pulses, we first have to address the question how single current pulses can be generated deep in the Earth's crust. Key to this question appears to be the break-up of peroxy defects in the rocks in the hypocenter as a result of the increase in tectonic stresses prior to an earthquake. We investigate the mechanism of the unipolar pulses by coupling the drift-diffusion model of semiconductor theory to Maxwell's equations, thereby producing a model describing the rock volume that generates the pulses in terms of electromagnetism and semiconductor physics. The system of equations is then solved numerically to explore the electromagnetic radiation associated with drift-diffusion currents of electron-hole pairs. [1] Sharma, A. K., P. A. V., and R. N. Haridas (2011), Investigation of ULF magnetic anomaly before moderate earthquakes, Exploration Geophysics 43, 36-46. [2] Hayakawa, M., Y. Hobara, K. Ohta, and K. Hattori (2011), The ultra-low-frequency magnetic disturbances associated with earthquakes, Earthquake Science, 24, 523-534. [3] Bortnik, J., T. E. Bleier, C. Dunson, and F. Freund (2010), Estimating the seismotelluric current
Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting
NASA Astrophysics Data System (ADS)
Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie
2014-05-01
Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near future earthquake distributions during seismic crises. At present, most of the models in public domain are statistical and use information about past and present seismicity as well as b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions and find significant variation in the relative performance of the models depending upon the input data.
Transient triggering of near and distant earthquakes
Gomberg, J.; Blanpied, M.L.; Beeler, N.M.
1997-01-01
We demonstrate qualitatively that frictional instability theory provides a context for understanding how earthquakes may be triggered by transient loads associated with seismic waves from near and distance earthquakes. We assume that earthquake triggering is a stick-slip process and test two hypotheses about the effect of transients on the timing of instabilities using a simple spring-slider model and a rate- and state-dependent friction constitutive law. A critical triggering threshold is implicit in such a model formulation. Our first hypothesis is that transient loads lead to clock advances; i.e., transients hasten the time of earthquakes that would have happened eventually due to constant background loading alone. Modeling results demonstrate that transient loads do lead to clock advances and that the triggered instabilities may occur after the transient has ceased (i.e., triggering may be delayed). These simple "clock-advance" models predict complex relationships between the triggering delay, the clock advance, and the transient characteristics. The triggering delay and the degree of clock advance both depend nonlinearly on when in the earthquake cycle the transient load is applied. This implies that the stress required to bring about failure does not depend linearly on loading time, even when the fault is loaded at a constant rate. The timing of instability also depends nonlinearly on the transient loading rate, faster rates more rapidly hastening instability. This implies that higher-frequency and/or longer-duration seismic waves should increase the amount of clock advance. These modeling results and simple calculations suggest that near (tens of kilometers) small/moderate earthquakes and remote (thousands of kilometers) earthquakes with magnitudes 2 to 3 units larger may be equally effective at triggering seismicity. Our second hypothesis is that some triggered seismicity represents earthquakes that would not have happened without the transient load (i
NASA Astrophysics Data System (ADS)
Ortega Culaciati, F. H.; Simons, M.; Minson, S. E.; Owen, S. E.; Moore, A. W.; Hetland, E. A.
2011-12-01
We aim to quantify the spatial distribution of after-slip following the Great 11 March 2011 Tohoku-Oki (Mw 9.0) earthquake and its implications for the occurrence of a future Great Earthquake, particularly in the Ibaraki region of Japan. We use a Bayesian approach (CATMIP algorithm), constrained by on-land Geonet GPS time series, to infer models of after-slip to date in the Japan megathrust. Unlike traditional inverse methods, in which a single optimum model is found, the Bayesian approach allows a complete characterization of the model parameter space by searching a-posteriori estimates of the range of plausible models. We use the Kullback-Liebler information divergence as a metric of the information gain on each subsurface slip patch, to quantify the extent to which land-based geodetic observations can constrain the upper parts of the megathrust, where the Great Tohoku-Oki earthquake took place. We aim to understand the relationships of spatial distribution of fault slip behavior in the different stages of the seismic cycle. We compare our post-seismic slip distributions to inter- and co-seismic slip distributions obtained through a Bayesian methodology as well as through traditional (optimization) inverse estimates in the published literature. We discuss implications of these analyses for the occurrence of a large earthquake in the Japan megathrust regions adjacent to the Great Tohoku-Oki earthquake.
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white-noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternate random process, filtered shot-noise process, eliminates these errors.
NASA Astrophysics Data System (ADS)
McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.
2017-09-01
The Gorkha earthquake on April 25th, 2015 was a long anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.
McNamara, Daniel E.; Yeck, William; Barnhart, William D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, Amod; Hough, S.E.; Benz, Harley M.; Earle, Paul
2017-01-01
The Gorkha earthquake on April 25th, 2015 was a long anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard.Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a ~ 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10–15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.
Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination
NASA Astrophysics Data System (ADS)
Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.
2008-12-01
Empirically explosions have been discriminated from natural earthquakes using regional amplitude ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
Earthquakes drive focused denudation along a tectonically active mountain front
NASA Astrophysics Data System (ADS)
Li, Gen; West, A. Joshua; Densmore, Alexander L.; Jin, Zhangdong; Zhang, Fei; Wang, Jin; Clark, Marin; Hilton, Robert G.
2017-08-01
Earthquakes cause widespread landslides that can increase erosional fluxes observed over years to decades. However, the impact of earthquakes on denudation over the longer timescales relevant to orogenic evolution remains elusive. Here we assess erosion associated with earthquake-triggered landslides in the Longmen Shan range at the eastern margin of the Tibetan Plateau. We use the Mw 7.9 2008 Wenchuan and Mw 6.6 2013 Lushan earthquakes to evaluate how seismicity contributes to the erosional budget from short timescales (annual to decadal, as recorded by sediment fluxes) to long timescales (kyr to Myr, from cosmogenic nuclides and low temperature thermochronology). Over this wide range of timescales, the highest rates of denudation in the Longmen Shan coincide spatially with the region of most intense landsliding during the Wenchuan earthquake. Across sixteen gauged river catchments, sediment flux-derived denudation rates following the Wenchuan earthquake are closely correlated with seismic ground motion and the associated volume of Wenchuan-triggered landslides (r2 > 0.6), and to a lesser extent with the frequency of high intensity runoff events (r2 = 0.36). To assess whether earthquake-induced landsliding can contribute importantly to denudation over longer timescales, we model the total volume of landslides triggered by earthquakes of various magnitudes over multiple earthquake cycles. We combine models that predict the volumes of landslides triggered by earthquakes, calibrated against the Wenchuan and Lushan events, with an earthquake magnitude-frequency distribution. The long-term, landslide-sustained "seismic erosion rate" is similar in magnitude to regional long-term denudation rates (∼0.5-1 mm yr-1). The similar magnitude and spatial coincidence suggest that earthquake-triggered landslides are a primary mechanism of long-term denudation in the frontal Longmen Shan. We propose that the location and intensity of seismogenic faulting can contribute to
Recurrent slow slip event likely hastened by the 2011 Tohoku earthquake
Hirose, Hitoshi; Kimura, Hisanori; Enescu, Bogdan; Aoi, Shin
2012-01-01
Slow slip events (SSEs) are another mode of fault deformation than the fast faulting of regular earthquakes. Such transient episodes have been observed at plate boundaries in a number of subduction zones around the globe. The SSEs near the Boso Peninsula, central Japan, are among the most documented SSEs, with the longest repeating history, of almost 30 y, and have a recurrence interval of 5 to 7 y. A remarkable characteristic of the slow slip episodes is the accompanying earthquake swarm activity. Our stable, long-term seismic observations enable us to detect SSEs using the recorded earthquake catalog, by considering an earthquake swarm as a proxy for a slow slip episode. Six recurrent episodes are identified in this way since 1982. The average duration of the SSE interoccurrence interval is 68 mo; however, there are significant fluctuations from this mean. While a regular cycle can be explained using a simple physical model, the mechanisms that are responsible for the observed fluctuations are poorly known. Here we show that the latest SSE in the Boso Peninsula was likely hastened by the stress transfer from the March 11, 2011 great Tohoku earthquake. Moreover, a similar mechanism accounts for the delay of an SSE in 1990 by a nearby earthquake. The low stress buildups and drops during the SSE cycle can explain the strong sensitivity of these SSEs to stress transfer from external sources. PMID:22949688
NASA Astrophysics Data System (ADS)
So, E.
2010-12-01
Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. An increase in our understanding of what contributes to casualties in earthquakes involve coordinated data-gathering efforts amongst disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events, that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models purely based on historic casualty data, or even worse, rates derived from other countries, will be of very limited value. What’s more, as the world’s population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate difference areas of expertise including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities in the cities which most need it. Coupling the theories and findings from
Application of a long-range forecasting model to earthquakes in the Japan mainland testing region
NASA Astrophysics Data System (ADS)
Rhoades, David A.
2011-03-01
The Every Earthquake a Precursor According to Scale (EEPAS) model is a long-range forecasting method which has been previously applied to a number of regions, including Japan. The Collaboratory for the Study of Earthquake Predictability (CSEP) forecasting experiment in Japan provides an opportunity to test the model at lower magnitudes than previously and to compare it with other competing models. The model sums contributions to the rate density from past earthquakes based on predictive scaling relations derived from the precursory scale increase phenomenon. Two features of the earthquake catalogue in the Japan mainland region create difficulties in applying the model, namely magnitude-dependence in the proportion of aftershocks and in the Gutenberg-Richter b-value. To accommodate these features, the model was fitted separately to earthquakes in three different target magnitude classes over the period 2000-2009. There are some substantial unexplained differences in parameters between classes, but the time and magnitude distributions of the individual earthquake contributions are such that the model is suitable for three-month testing at M ≥ 4 and for one-year testing at M ≥ 5. In retrospective analyses, the mean probability gain of the EEPAS model over a spatially smoothed seismicity model increases with magnitude. The same trend is expected in prospective testing. The Proximity to Past Earthquakes (PPE) model has been submitted to the same testing classes as the EEPAS model. Its role is that of a spatially-smoothed reference model, against which the performance of time-varying models can be compared.
Short- and Long-Term Earthquake Forecasts Based on Statistical Models
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner
2017-04-01
The epidemic-type aftershock sequences (ETAS) models have been experimentally used to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and for the 2012 Emilia earthquake sequence. These forecasts represented the two first pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy that was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. The communication to people is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy, before opening the information to the public. With regard to long-term time-dependent earthquake forecast, the application of a newly developed simulation algorithm to Calabria region provided typical features in time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.
Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes
Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.
2014-01-01
Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
Chilean megathrust earthquake recurrence linked to frictional contrast at depth
NASA Astrophysics Data System (ADS)
Moreno, M.; Li, S.; Melnick, D.; Bedford, J. R.; Baez, J. C.; Motagh, M.; Metzger, S.; Vajedian, S.; Sippl, C.; Gutknecht, B. D.; Contreras-Reyes, E.; Deng, Z.; Tassara, A.; Oncken, O.
2018-04-01
Fundamental processes of the seismic cycle in subduction zones, including those controlling the recurrence and size of great earthquakes, are still poorly understood. Here, by studying the 2016 earthquake in southern Chile—the first large event within the rupture zone of the 1960 earthquake (moment magnitude (Mw) = 9.5)—we show that the frictional zonation of the plate interface fault at depth mechanically controls the timing of more frequent, moderate-size deep events (Mw < 8) and less frequent, tsunamigenic great shallow earthquakes (Mw > 8.5). We model the evolution of stress build-up for a seismogenic zone with heterogeneous friction to examine the link between the 2016 and 1960 earthquakes. Our results suggest that the deeper segments of the seismogenic megathrust are weaker and interseismically loaded by a more strongly coupled, shallower asperity. Deeper segments fail earlier ( 60 yr recurrence), producing moderate-size events that precede the failure of the shallower region, which fails in a great earthquake (recurrence >110 yr). We interpret the contrasting frictional strength and lag time between deeper and shallower earthquakes to be controlled by variations in pore fluid pressure. Our integrated analysis strengthens understanding of the mechanics and timing of great megathrust earthquakes, and therefore could aid in the seismic hazard assessment of other subduction zones.
PAGER-CAT: A composite earthquake catalog for calibrating global fatality models
Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.
2009-01-01
We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy to use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of Shake Maps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to sorne users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are
Modelling low-frequency volcanic earthquakes in a viscoelastic medium with topography
NASA Astrophysics Data System (ADS)
Jousset, Philippe; Neuberg, Jürgen; Jolly, Arthur
2004-11-01
Magma properties are fundamental to explain the volcanic eruption style as well as the generation and propagation of seismic waves. This study focusses on magma properties and rheology and their impact on low-frequency volcanic earthquakes. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency earthquakes. Using a 2-D finite-difference scheme, we model the propagation of seismic energy initiated in a fluid-filled conduit embedded in a homogeneous viscoelastic medium with topography. We model intrinsic attenuation by linear viscoelastic theory and we show that volcanic media can be approximated by a standard linear solid (SLS) for seismic frequencies above 2 Hz. Results demonstrate that attenuation modifies both amplitudes and dispersive characteristics of low-frequency earthquakes. Low frequency volcanic earthquakes are dispersive by nature; however, if attenuation is introduced, their dispersion characteristics will be altered. The topography modifies the amplitudes, depending on the position of the seismographs at the surface. This study shows that we need to take into account attenuation and topography to interpret correctly observed low-frequency volcanic earthquakes. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.
NASA Astrophysics Data System (ADS)
van Rijsingen, E.; Lallemand, S.; Peyret, M.; Corbi, F.; Funiciello, F.; Arcay, D.; Heuret, A.
2017-12-01
The role of subducting oceanic features on the seismogenic behavior of subduction zones has been increasingly addressed over the past years, although their exact relationship remains unclear. Do features like seamounts, fracture zones or submarine ridges act as barriers, prohibiting ruptures to propagate, or do they initiate megathrust earthquakes instead? With this question in mind, we aim to better understand the influence of subduction interface roughness on the location of an earthquake's hypocenter, rupture area and seismic asperity. Following the work on compiling a dual-wavelength subduction interface roughness (SubRough) database, we used this roughness proxy for a global comparison with large subduction earthquakes (MW > 7.5), which occurred since 1900 (SubQuake, new catalogue). We made a quantitative comparison between the earthquake data on the landward side of the trench and the roughness proxy on the seaward side, taking into account the most appropriate direction of roughness extrapolation. Main results show that areas with low roughness at long wavelengths (i.e. 80-100 km) are more prone to host large- to mega-earthquakes. In addition to this natural data study, we perform analogue experiments, which allow us to investigate the role subducting oceanic features play over the course of multiple seismic cycles. The experimental setup consists of a gelatin wedge and an underthrusting rigid aluminum plate (i.e. the analogues of the overriding and downgoing plates, respectively). By adding scaled 3D-printed topographic features (e.g. seamounts) on the downgoing plate, we are able to accurately monitor the initiation and propagation of ruptures with respect to the subducting features. Here we show the results of our natural data study, some preliminary results of the analogue models and our first conclusions on how the subduction interface roughness may influence the seismogenic potential of an area.
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10−3. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
Numerical Modeling and Forecasting of Strong Sumatra Earthquakes
NASA Astrophysics Data System (ADS)
Xing, H. L.; Yin, C.
2007-12-01
ESyS-Crustal, a finite element based computational model and software has been developed and applied to simulate the complex nonlinear interacting fault systems with the goal to accurately predict earthquakes and tsunami generation. With the available tectonic setting and GPS data around the Sumatra region, the simulation results using the developed software have clearly indicated that the shallow part of the subduction zone in the Sumatra region between latitude 6S and 2N has been locked for a long time, and remained locked even after the Northern part of the zone underwent a major slip event resulting into the infamous Boxing Day tsunami. Two strong earthquakes that occurred in the distant past in this region (between 6S and 1S) in 1797 (M8.2) and 1833 (M9.0) respectively are indicative of the high potential for very large destructive earthquakes to occur in this region with relatively long periods of quiescence in between. The results have been presented in the 5th ACES International Workshop in 2006 before the recent 2007 Sumatra earthquakes occurred which exactly fell into the predicted zone (see the following web site for ACES2006 and detailed presentation file through workshop agenda). The preliminary simulation results obtained so far have shown that there seem to be a few obvious events around the previously locked zone before it is totally ruptured, but apparently no indication of a giant earthquake similar to the 2004 M9 event in the near future which is believed to happen by several earthquake scientists. Further detailed simulations will be carried out and presented in the meeting.
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, more regularly InSAR-derived surface displacements and seismological waveforms are combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite
Modeling temporal changes of low-frequency earthquake bursts near Parkfield, CA
NASA Astrophysics Data System (ADS)
Wu, C.; Daub, E. G.
2016-12-01
Tectonic tremor and low-frequency earthquakes (LFE) are found in the deeper crust of various tectonic environments in the last decade. LFEs are presumed to be caused by failure of deep fault patches during a slow slip event, and the long-term variation in LFE recurrence could provide crucial insight into the deep fault zone processes that may lead to future large earthquakes. However, the physical mechanisms causing the temporal changes of LFE recurrence are still under debate. In this study, we combine observations of long-term changes in LFE burst activities near Parkfield, CA with a brittle and ductile friction (BDF) model, and use the model to constrain the possible physical mechanisms causing the observed long-term changes in LFE burst activities after the 2004 M6 Parkfield earthquake. The BDF model mimics the slipping of deep fault patches by a spring-drugged block slider with both brittle and ductile friction components. We use the BDF model to test possible mechanisms including static stress imposed by the Parkfield earthquake, changes in pore pressure, tectonic force, afterslip, brittle friction strength, and brittle contact failure distance. The simulation results suggest that changes in brittle friction strength and failure distance are more likely to cause the observed changes in LFE bursts than other mechanisms.
A finite difference method for off-fault plasticity throughout the earthquake cycle
NASA Astrophysics Data System (ADS)
Erickson, Brittany A.; Dunham, Eric M.; Khosravifar, Arash
2017-12-01
We have developed an efficient computational framework for simulating multiple earthquake cycles with off-fault plasticity. The method is developed for the classical antiplane problem of a vertical strike-slip fault governed by rate-and-state friction, with inertial effects captured through the radiation-damping approximation. Both rate-independent plasticity and viscoplasticity are considered, where stresses are constrained by a Drucker-Prager yield condition. The off-fault volume is discretized using finite differences and tectonic loading is imposed by displacing the remote side boundaries at a constant rate. Time-stepping combines an adaptive Runge-Kutta method with an incremental solution process which makes use of an elastoplastic tangent stiffness tensor and the return-mapping algorithm. Solutions are verified by convergence tests and comparison to a finite element solution. We quantify how viscosity, isotropic hardening, and cohesion affect the magnitude and off-fault extent of plastic strain that develops over many ruptures. If hardening is included, plastic strain saturates after the first event and the response during subsequent ruptures is effectively elastic. For viscoplasticity without hardening, however, successive ruptures continue to generate additional plastic strain. In all cases, coseismic slip in the shallow sub-surface is diminished compared to slip accumulated at depth during interseismic loading. The evolution of this slip deficit with each subsequent event, however, is dictated by the plasticity model. Integration of the off-fault plastic strain from the viscoplastic model reveals that a significant amount of tectonic offset is accommodated by inelastic deformation ( ∼ 0.1 m per rupture, or ∼ 10% of the tectonic deformation budget).
Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes
NASA Astrophysics Data System (ADS)
Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi
2018-05-01
Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate-and-state dependent friction law. By varying only a few model parameters, this simple model allows reproducing a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.
NASA Astrophysics Data System (ADS)
Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.
2004-12-01
The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
Italian Case Studies Modelling Complex Earthquake Sources In PSHA
NASA Astrophysics Data System (ADS)
Gee, Robin; Peruzza, Laura; Pagani, Marco
2017-04-01
This study presents two examples of modelling complex seismic sources in Italy, done in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred around Collalto Stoccaggio, a natural gas storage facility in Northern Italy, located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between the gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where the rates of earthquakes are assumed to be time-independent. The source model consists of faults and distributed seismicity to consider earthquakes that cannot be associated to specific structures. All potentially active faults within 50 km of the site are considered, and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motions prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study, where we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of Oct 26th and 30th (M6.1 and M6.6 respectively, no deaths). The aftershock hazard is modelled using a fault source with complex geometry, based on literature data
Evaluation of CAMEL - comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
A new comprehensive areal model of earthquake-induced landslides (CAMEL) has been developed to assist in planning decisions related to disaster risk reduction. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using fuzzy logic systems and geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL has been empirically evaluated with respect to disrupted landslides (Category I) using a case study of the 1989 M = 6.9 Loma Prieta, CA earthquake. In this case, CAMEL performs best in comparison to disrupted slides and falls in soil. For disrupted rock fall and slides, CAMEL's performance was slightly poorer. The model predicted a low occurrence of rock avalanches, when none in fact occurred. A similar comparison with the Loma Prieta case study was also conducted using a simplified Newmark displacement model. The area under the curve method of evaluation was used in order to draw comparisons between both models, revealing improved performance with CAMEL. CAMEL should not however be viewed as a strict alternative to Newmark displacement models. CAMEL can be used to integrate Newmark displacements with other, previously incompatible, types of knowledge. ?? 2008 Elsevier B.V.
The HayWired Earthquake Scenario—Earthquake Hazards
Detweiler, Shane T.; Wein, Anne M.
2017-04-24
The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake, The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of
Atmospheric Signals Associated with Major Earthquakes. A Multi-Sensor Approach. Chapter 9
NASA Technical Reports Server (NTRS)
Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Kafatos, Menas; Taylor, Patrick
2011-01-01
We are studying the possibility of a connection between atmospheric observation recorded by several ground and satellites as earthquakes precursors. Our main goal is to search for the existence and cause of physical phenomenon related to prior earthquake activity and to gain a better understanding of the physics of earthquake and earthquake cycles. The recent catastrophic earthquake in Japan in March 2011 has provided a renewed interest in the important question of the existence of precursory signals preceding strong earthquakes. We will demonstrate our approach based on integration and analysis of several atmospheric and environmental parameters that were found associated with earthquakes. These observations include: thermal infrared radiation, radon! ion activities; air temperature and humidity and a concentration of electrons in the ionosphere. We describe a possible physical link between atmospheric observations with earthquake precursors using the latest Lithosphere-Atmosphere-Ionosphere Coupling model, one of several paradigms used to explain our observations. Initial results for the period of2003-2009 are presented from our systematic hind-cast validation studies. We present our findings of multi-sensor atmospheric precursory signals for two major earthquakes in Japan, M6.7 Niigata-ken Chuetsu-oki of July16, 2007 and the latest M9.0 great Tohoku earthquakes of March 11,2011
PAGER--Rapid assessment of an earthquake?s impact
Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.
2010-01-01
PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models.We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
Coseismic deformation observed with radar interferometry: Great earthquakes and atmospheric noise
NASA Astrophysics Data System (ADS)
Scott, Chelsea Phipps
Spatially dense maps of coseismic deformation derived from Interferometric Synthetic Aperture Radar (InSAR) datasets result in valuable constraints on earthquake processes. The recent increase in the quantity of observations of coseismic deformation facilitates the examination of signals in many tectonic environments associated with earthquakes of varying magnitude. Efforts to place robust constraints on the evolution of the crustal stress field following great earthquakes often rely on knowledge of the earthquake location, the fault geometry, and the distribution of slip along the fault plane. Well-characterized uncertainties and biases strengthen the quality of inferred earthquake source parameters, particularly when the associated ground displacement signals are near the detection limit. Well-preserved geomorphic records of earthquakes offer additional insight into the mechanical behavior of the shallow crust and the kinematics of plate boundary systems. Together, geodetic and geologic observations of crustal deformation offer insight into the processes that drive seismic cycle deformation over a range of timescales. In this thesis, I examine several challenges associated with the inversion of earthquake source parameters from SAR data. Variations in atmospheric humidity, temperature, and pressure at the timing of SAR acquisitions result in spatially correlated phase delays that are challenging to distinguish from signals of real ground deformation. I characterize the impact of atmospheric noise on inferred earthquake source parameters following elevation-dependent atmospheric corrections. I analyze the spatial and temporal variations in the statistics of atmospheric noise from both reanalysis weather models and InSAR data itself. Using statistics that reflect the spatial heterogeneity of atmospheric characteristics, I examine parameter errors for several synthetic cases of fault slip on a basin-bounding normal fault. I show a decrease in uncertainty in fault
NASA Astrophysics Data System (ADS)
Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing
2018-03-01
We treat the earth crust and mantle as large scale discrete matters based on the principles of granular physics and existing experimental observations. Main outcomes are: A granular model of the structure and movement of the earth crust and mantle is established. The formation mechanism of the tectonic forces, which causes the earthquake, and a model of propagation for precursory information are proposed. Properties of the seismic precursory information and its relevance with the earthquake occurrence are illustrated, and principle of ways to detect the effective seismic precursor is elaborated. The mechanism of deep-focus earthquake is also explained by the jamming-unjamming transition of the granular flow. Some earthquake phenomena which were previously difficult to understand are explained, and the predictability of the earthquake is discussed. Due to the discrete nature of the earth crust and mantle, the continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and a new understanding, different from the traditional seismological viewpoint, is obtained.
NASA Astrophysics Data System (ADS)
Gok, R.; Hutchings, L.
2004-05-01
We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by a prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment and corner frequency and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve out a Brune model from small to moderate size earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
Application of a time-magnitude prediction model for earthquakes
NASA Astrophysics Data System (ADS)
An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He
2007-06-01
In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquake in all eleven seismic zones in China can be described by the magnitude-time prediction model using the computations of the parameters of the model. The average model parameter values for China are: b = 0.383, c=0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.
NASA Astrophysics Data System (ADS)
Shaochuan, Lu; Vere-Jones, David
2011-10-01
The paper studies the statistical properties of deep earthquakes around North Island, New Zealand. We first evaluate the catalogue coverage and completeness of deep events according to cusum (cumulative sum) statistics and earlier literature. The epicentral, depth, and magnitude distributions of deep earthquakes are then discussed. It is worth noting that strong grouping effects are observed in the epicentral distribution of these deep earthquakes. Also, although the spatial distribution of deep earthquakes does not change, their occurrence frequencies vary from time to time, active in one period, relatively quiescent in another. The depth distribution of deep earthquakes also hardly changes except for events with focal depth less than 100 km. On the basis of spatial concentration we partition deep earthquakes into several groups—the Taupo-Bay of Plenty group, the Taranaki group, and the Cook Strait group. Second-order moment analysis via the two-point correlation function reveals only very small-scale clustering of deep earthquakes, presumably limited to some hot spots only. We also suggest that some models usually used for shallow earthquakes fit deep earthquakes unsatisfactorily. Instead, we propose a switching Poisson model for the occurrence patterns of deep earthquakes. The goodness-of-fit test suggests that the time-varying activity is well characterized by a switching Poisson model. Furthermore, detailed analysis carried out on each deep group by use of switching Poisson models reveals similar time-varying behavior in occurrence frequencies in each group.
Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zöller, G.
2012-04-01
As of today, seismic risk and hazard estimates mostly use pure empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art fully dynamic description set of all relevant physical processes related to earthquake fault systems is likely not useful since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability and aim at providing a link between basic physical concepts and statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).
NASA Astrophysics Data System (ADS)
Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.
2006-12-01
More realistic models of crustal deformation are possible due to advances in measurements and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC), Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step the methodology of incorporating all these data is described. Results of the time history of the stress and strain transfer between 1992 and 1999 are analyzed as well as the time history of deformation from 1999 to the present.
Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake
NASA Astrophysics Data System (ADS)
Durukal, E.; Sesetyan, K.; Erdik, M.
2009-04-01
The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and about approximately one-half of all policies in highly earthquake prone areas (one-third in Istanbul) the system exhibits signs of adverse selection, inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurring building losses in Istanbul in the occurrence of a large earthquake. The annualized earthquake losses in Istanbul are between 140-300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be at about 2.5 Billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or Marmara region) is necessary. This may mean an increase in the premia and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be introduction of parametric insurance for Istanbul. By such a model the losses will not be indemnified, however will be directly calculated on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing
M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model
Parsons, Thomas E.
2006-01-01
Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand
NASA Astrophysics Data System (ADS)
Francois-Holden, C.; Zhao, J.
2012-12-01
The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike slip faults, and is underlain by the currently locked west-dipping subduction interface between the down going Pacific Plate, and the over-riding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington city to become more resilient, through an encompassing study of the likelihood of large earthquakes, and the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically-derived estimates interface of slip deficit. Then, we entertain a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our comprehensive study also includes simulations from historical large world subduction events translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki Japan earthquake and the M8.8 2010 Chili earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North-Island which has been developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long duration shaking, and peak ground
Combined GPS and InSAR models of postseismic deformation from the Northridge Earthquake
NASA Technical Reports Server (NTRS)
Donnellan, A.; Parker, J. W.; Peltzer, G.
2002-01-01
Models of combined Global Positioning System and Interferometric Synthetic Aperture Radar data collected in the region of the Northridge earthquake indicate that significant afterslip on the main fault occurred following the earthquake.
Spatial modeling for estimation of earthquakes economic loss in West Java
NASA Astrophysics Data System (ADS)
Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma
2017-07-01
Indonesia has a high vulnerability towards earthquakes. The low adaptive capacity could make the earthquake become disaster that should be concerned. That is why risk management should be applied to reduce the impacts, such as estimating the economic loss caused by hazard. The study area of this research is West Java. The main reason of West Java being vulnerable toward earthquake is the existence of active faults. These active faults are Lembang Fault, Cimandiri Fault, Baribis Fault, and also Megathrust subduction zone. This research tries to estimates the value of earthquakes economic loss from some sources in West Java. The economic loss is calculated by using HAZUS method. The components that should be known are hazard (earthquakes), exposure (building), and the vulnerability. Spatial modeling is aimed to build the exposure data and make user get the information easier by showing the distribution map, not only in tabular data. As the result, West Java could have economic loss up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703.00 IDR, which is estimated from six earthquake sources with maximum possibly magnitude. However, the estimation of economic loss value in this research is the worst case earthquakes occurrence which is probably over-estimated.
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity is a good guide to future seismicity, and that future large-magnitude events occur more likely in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the collaboratory for the study of earthquake predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for
NASA Astrophysics Data System (ADS)
Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.
2018-05-01
Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce any transient stress perturbations. This method has been applied to the locked Hikurangi megathrust (New Zealand) surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate the annual probability of a M≥7.8 earthquake over 1 year after the Kaikoura earthquake increases by 1.3-18 times relative to the pre-Kaikoura probability, and the absolute probability is in the range of 0.6-7%. We find that probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.
An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling
Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.
2009-01-01
We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained-to varying degrees-by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically-derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT; (1) simple objective ranking of country vulnerability to earthquakes, and; (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
An Earthquake Rupture Forecast model for central Italy submitted to CSEP project
NASA Astrophysics Data System (ADS)
Pace, B.; Peruzza, L.
2009-04-01
We defined a seismogenic source model for central Italy and computed the relative forecast scenario, in order to submit the results to the CSEP (Collaboratory for the study of Earthquake Predictability, www.cseptesting.org) project. The goal of CSEP project is developing a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three different layers of sources: the first one collects the individual faults liable to generate major earthquakes (M >5.5); the second layer is given by the instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ~<5.0). The third layer utilizes all the instrumental earthquakes and the historical events not correlated to known structures (4.5
Optimized volume models of earthquake-triggered landslides
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-01-01
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs in 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original data-based nonlinear least square, were employed to the 4 models. Results show that original data-based nonlinear least square combining with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake except for two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 1010 m3 in deposit materials and 1 × 1010 m3 in source areas, respectively. The result from the relationship of quake magnitude and entire landslide volume related to individual earthquake is much less than that from this study, which reminds us the necessity to update the power-law relationship. PMID:27404212
Optimized volume models of earthquake-triggered landslides.
Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang
2016-07-12
In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the volume of each deposit of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs in 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power law relationship and the 3 optimized models we proposed, respectively. Two data fitting methods, i.e. log-transformed-based linear and original data-based nonlinear least square, were employed to the 4 models. Results show that original data-based nonlinear least square combining with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect shows the best performance. This model was subsequently applied to the database of landslides triggered by the quake except for two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10(10) m(3) in deposit materials and 1 × 10(10) m(3) in source areas, respectively. The result from the relationship of quake magnitude and entire landslide volume related to individual earthquake is much less than that from this study, which reminds us the necessity to update the power-law relationship.
NASA Astrophysics Data System (ADS)
Hayes, G. P.; Plescia, S. M.; Moore, G.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center has recently published a database of finite fault models for globally distributed M7.5+ earthquakes since 1990. Concurrently, we have also compiled a database of three-dimensional slab geometry models for all global subduction zones, to update and replace Slab1.0. Here, we use these two new and valuable resources to infer characteristics of earthquake rupture and propagation in subduction zones, where the vast majority of large-to-great-sized earthquakes occur. For example, we can test questions that are fairly prevalent in seismological literature. Do large ruptures preferentially occur where subduction zones are flat (e.g., Bletery et al., 2016)? Can `flatness' be mapped to understand and quantify earthquake potential? Do the ends of ruptures correlate with significant changes in slab geometry, and/or bathymetric features entering the subduction zone? Do local subduction zone geometry changes spatially correlate with areas of low slip in rupture models (e.g., Moreno et al., 2012)? Is there a correlation between average seismogenic zone dip, and/or seismogenic zone width, and earthquake size? (e.g., Hayes et al., 2012; Heuret et al., 2011). These issues are fundamental to the understanding of earthquake rupture dynamics and subduction zone seismogenesis, and yet many are poorly understood or are still debated in scientific literature. We attempt to address these questions and similar issues in this presentation, and show how these models can be used to improve our understanding of earthquake hazard in subduction zones.
A combined geodetic and seismic model for the Mw 8.3 2015 Illapel (Chile) earthquake
NASA Astrophysics Data System (ADS)
Simons, M.; Duputel, Z.; Jiang, J.; Liang, C.; Fielding, E. J.; Agram, P. S.; Owen, S. E.; Moore, A. W.; Kanamori, H.; Rivera, L. A.; Riel, B. V.; Ortega, F.
2015-12-01
The 2015 September 16 Mw 8.3 Illapel earthquake occurred on the subduction megathrust offshore of the Chilean coastline between the towns of Valparaiso and Coquimbo. This earthquake is the 3rdevent with Mw>8 to impact coastal Chile in the last 6 years. It occurred north of both the 2010 Mw 8.8 Maule earthquake and the 1985 Mw 8.0 Valparaiso earthquake. While the location of the 2015 earthquake is close to the inferred location of a large earthquake in 1943, comparison of seismograms from the two earthquakes suggests the recent event is not clearly a repeat of the 1943 event. To gain a better understanding of the distribution of coseismic fault slip, we develop a finite fault model that is constrained by a combination of static GPS offsets, Sentinel 1a ascending and descending radar interferograms, tsunami waveform measurements made at selected DART buoys, high rate (1 sample/sec) GPS waveforms and strong motion seismic data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing thereby allowing us to maximize spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the assumed forward models. At the inherent resolution of the model, the posterior ensemble of purely static models (without using high rate GPS and strong motion data) is characterized by a distribution of slip that reaches as much as 10 m in localized regions, with significant slip apparently reaching the trench or at least very close to the trench. Based on our W-phase point-source estimate, the event duration is approximately 1.7 minutes. We also present a joint kinematic model and describe the relationship of the coseismic model to the spatial distribution of aftershocks and post-seismic slip.
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture peridocities, slip distributions, time of last earthquake (for prehistoric ruptures) and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that fature rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess to time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by Working Group on California Earthquake Probabilities.
NASA Astrophysics Data System (ADS)
Zhang, Huai; Zhang, Zhen; Wang, Liangshu; Leroy, Yves; shi, Yaolin
2017-04-01
How to reconcile continent megathrust earthquake characteristics, for instances, mapping the large-great earthquake sequences into geological mountain building process, as well as partitioning the seismic-aseismic slips, is fundamental and unclear. Here, we scope these issues by focusing a typical continental collisional belt, the great Nepal Himalaya. We first prove that refined Nepal Himalaya thrusting sequences, with accurately defining of large earthquake cycle scale, provide new geodynamical hints on long-term earthquake potential in association with, either seismic-aseismic slip partition up to the interpretation of the binary interseismic coupling pattern on the Main Himalayan Thrust (MHT), or the large-great earthquake classification via seismic cycle patterns on MHT. Subsequently, sequential limit analysis is adopted to retrieve the detailed thrusting sequences of Nepal Himalaya mountain wedge. Our model results exhibit apparent thrusting concentration phenomenon with four thrusting clusters, entitled as thrusting 'families', to facilitate the development of sub-structural regions respectively. Within the hinterland thrusting family, the total aseismic shortening and the corresponding spatio-temporal release pattern are revealed by mapping projection. Whereas, in the other three families, mapping projection delivers long-term large (M<8)-great (M>8) earthquake recurrence information, including total lifespans, frequencies and large-great earthquake alternation information by identifying rupture distances along the MHT. In addition, this partition has universality in continental-continental collisional orogenic belt with identified interseismic coupling pattern, while not applicable in continental-oceanic megathrust context.
Earthquake Triggering in the September 2017 Mexican Earthquake Sequence
NASA Astrophysics Data System (ADS)
Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.
2017-12-01
Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the the subduction of the Cocos plate. The traditional definition of aftershocks: likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from Copernicus Sentinel-1A and -1B satellites and JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extended > 150 km NW from the hypocenter, longer than slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with Mexican SSN location. Coulomb static stress
NASA Astrophysics Data System (ADS)
di Giovambattista, R.; Tyupkin, Yu
The cyclic migration of weak earthquakes (M 2.2) which occurred during the yearprior to the October 15, 1996 (M = 4.9) Reggio Emilia earthquake isdiscussed in this paper. The onset of this migration was associated with theoccurrence of the October 10, 1995 (M = 4.8) Lunigiana earthquakeabout 90 km southwest from the epicenter of the Reggio Emiliaearthquake. At least three series of earthquakes migrating from theepicentral area of the Lunigiana earthquake in the northeast direction wereobserved. The migration of earthquakes of the first series terminated at adistance of about 30 km from the epicenter of the Reggio Emiliaearthquake. The earthquake migration of the other two series halted atabout 10 km from the Reggio Emilia epicenter. The average rate ofearthquake migration was about 200-300 km/year, while the time ofrecurrence of the observed cycles varied from 68 to 178 days. Weakearthquakes migrated along the transversal fault zones and sometimesjumped from one fault to another. A correlation between the migratingearthquakes and tidal variations is analysed. We discuss the hypothesis thatthe analyzed area is in a state of stress approaching the limit of thelong-term durability of crustal rocks and that the observed cyclic migrationis a result of a combination of a more or less regular evolution of tectonicand tidal variations.
Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami
NASA Astrophysics Data System (ADS)
Showstack, Randy
2011-04-01
Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and the tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.
NASA Astrophysics Data System (ADS)
Christos, Kourouklas; Eleftheria, Papadimitriou; George, Tsaklidis; Vassilios, Karakostas
2018-06-01
The determination of strong earthquakes' recurrence time above a predefined magnitude, associated with specific fault segments, is an important component of seismic hazard assessment. The occurrence of these earthquakes is neither periodic nor completely random but often clustered in time. This fact in connection with their limited number, due to shortage of the available catalogs, inhibits a deterministic approach for recurrence time calculation, and for this reason, application of stochastic processes is required. In this study, recurrence time determination in the area of North Aegean Trough (NAT) is developed by the application of time-dependent stochastic models, introducing an elastic rebound motivated concept for individual fault segments located in the study area. For this purpose, all the available information on strong earthquakes (historical and instrumental) with M w ≥ 6.5 is compiled and examined for magnitude completeness. Two possible starting dates of the catalog are assumed with the same magnitude threshold, M w ≥ 6.5 and divided into five data sets, according to a new segmentation model for the study area. Three Brownian Passage Time (BPT) models with different levels of aperiodicity are applied and evaluated with the Anderson-Darling test for each segment in both catalog data where possible. The preferable models are then used in order to estimate the occurrence probabilities of M w ≥ 6.5 shocks on each segment of NAT for the next 10, 20, and 30 years since 01/01/2016. Uncertainties in probability calculations are also estimated using a Monte Carlo procedure. It must be mentioned that the provided results should be treated carefully because of their dependence to the initial assumptions. Such assumptions exhibit large variability and alternative means of these may return different final results.
NASA Astrophysics Data System (ADS)
Stavrianaki, K.; Ross, G.; Sammonds, P. R.
2015-12-01
The clustering of earthquakes in time and space is widely accepted, however the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlation between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to then compare the average magnitudes of aftershock sequences to that of their mainshock. The results of earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were done on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed by the AE technique as a proxy for earthquakes. Applying the ETAS model to experimental data allowed us to validate our results and provide for the first time a holistic view on the correlation of earthquake magnitudes. Additionally we search the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes. A positive relation would suggest the existence of magnitude correlations. The aim of this study is to observe any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
Next-Day Earthquake Forecasts for California
NASA Astrophysics Data System (ADS)
Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.
2008-12-01
We implemented a daily forecast of m > 4 earthquakes for California in the format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of Cali-fornia, use more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gersten-berger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.
Development of Final A-Fault Rupture Models for WGCEP/ NSHMP Earthquake Rate Model 2
Field, Edward H.; Weldon, Ray J.; Parsons, Thomas; Wills, Chris J.; Dawson, Timothy E.; Stein, Ross S.; Petersen, Mark D.
2008-01-01
This appendix discusses how we compute the magnitude and rate of earthquake ruptures for the seven Type-A faults (Elsinore, Garlock, San Jacinto, S. San Andreas, N. San Andreas, Hayward-Rodgers Creek, and Calaveras) in the WGCEP/NSHMP Earthquake Rate Model 2 (referred to as ERM 2. hereafter). By definition, Type-A faults are those that have relatively abundant paleoseismic information (e.g., mean recurrence-interval estimates). The first section below discusses segmentation-based models, where ruptures are assumed be confined to one or more identifiable segments. The second section discusses an un-segmented-model option, the third section discusses results and implications, and we end with a discussion of possible future improvements. General background information can be found in the main report.
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts.Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, W R; Pasyanos, M E; Matzel, E
2008-07-08
We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling (e. g. Ford et al 2008). For example, regional waveform modeling showsmore » strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing
Dynamic models of an earthquake and tsunami offshore Ventura, California
Kenny J. Ryan,; Geist, Eric L.; Barall, Michael; David D. Oglesby,
2015-01-01
The Ventura basin in Southern California includes coastal dip-slip faults that can likely produce earthquakes of magnitude 7 or greater and significant local tsunamis. We construct a 3-D dynamic rupture model of an earthquake on the Pitas Point and Lower Red Mountain faults to model low-frequency ground motion and the resulting tsunami, with a goal of elucidating the seismic and tsunami hazard in this area. Our model results in an average stress drop of 6 MPa, an average fault slip of 7.4 m, and a moment magnitude of 7.7, consistent with regional paleoseismic data. Our corresponding tsunami model uses final seafloor displacement from the rupture model as initial conditions to compute local propagation and inundation, resulting in large peak tsunami amplitudes northward and eastward due to site and path effects. Modeled inundation in the Ventura area is significantly greater than that indicated by state of California's current reference inundation line.
Stein, Ross S.
2008-01-01
The Working Group for California Earthquake Probabilities must transform fault lengths and their slip rates into earthquake moment-magnitudes. First, the down-dip coseismic fault dimension, W, must be inferred. We have chosen the Nazareth and Hauksson (2004) method, which uses the depth above which 99% of the background seismicity occurs to assign W. The product of the observed or inferred fault length, L, with the down-dip dimension, W, gives the fault area, A. We must then use a scaling relation to relate A to moment-magnitude, Mw. We assigned equal weight to the Ellsworth B (Working Group on California Earthquake Probabilities, 2003) and Hanks and Bakun (2007) equations. The former uses a single logarithmic relation fitted to the M=6.5 portion of data of Wells and Coppersmith (1994); the latter uses a bilinear relation with a slope change at M=6.65 (A=537 km2) and also was tested against a greatly expanded dataset for large continental transform earthquakes. We also present an alternative power law relation, which fits the newly expanded Hanks and Bakun (2007) data best, and captures the change in slope that Hanks and Bakun attribute to a transition from area- to length-scaling of earthquake slip. We have not opted to use the alternative relation for the current model. The selections and weights were developed by unanimous consensus of the Executive Committee of the Working Group, following an open meeting of scientists, a solicitation of outside opinions from additional scientists, and presentation of our approach to the Scientific Review Panel. The magnitude-area relations and their assigned weights are unchanged from that used in Working Group (2003).
The Global Earthquake Model and Disaster Risk Reduction
NASA Astrophysics Data System (ADS)
Smolka, A. J.
2015-12-01
Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples for how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua-New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake-engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito/Ecuador. In agreement with GEM's collaborative approach, all
Nitsche Extended Finite Element Methods for Earthquake Simulation
NASA Astrophysics Data System (ADS)
Coon, Ethan T.
propagation and material failure. While some theory and application of these methods exist, implementations for the simulation of networks of many cracks have not yet been considered. For my thesis, I implement and extend one such method, the eXtended Finite Element Method (XFEM), for use in static and dynamic models of fault networks. Once this machinery is developed, it is applied to open questions regarding the behavior of networks of faults, including questions of distributed deformation in fault systems and ensembles of magnitude, location, and frequency in repeat ruptures. The theory of XFEM is augmented to allow for solution of problems with alternating regimes of static solves for elastic stress conditions and short, dynamic earthquakes on networks of faults. This is accomplished using Nitsche's approach for implementing boundary conditions. Finally, an optimization problem is developed to determine tractions along the fault, enabling the calculation of frictional constraints and the rupture front. This method is verified via a series of static, quasistatic, and dynamic problems. Armed with this technique, we look at several problems regarding geometry within the earthquake cycle in which geometry is crucial. We first look at quasistatic simulations on a community fault model of Southern California, and model slip distribution across that system. We find the distribution of deformation across faults compares reasonably well with slip rates across the region, as constrained by geologic data. We find geometry can provide constraints for friction, and consider the minimization of shear strain across the zone as a function of friction and plate loading direction, and infer bounds on fault strength in the region. Then we consider the repeated rupture problem, modeling the full earthquake cycle over the course of many events on several fault geometries. In this work, we look at distributions of events, studying the effect of geometry on statistical metrics of event
NASA Astrophysics Data System (ADS)
Hirata, N.; Tsuruoka, H.; Yokoi, S.
2011-12-01
The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecast and testing for periods before and after the 2011 Tohoku-oki earthquake. Because a seismic activity has changed dramatically since the 2011 event, performances of models have been affected very much. In addition, as there is the problem of authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. Also, we will discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.
NASA Astrophysics Data System (ADS)
Hirata, N.; Tsuruoka, H.; Yokoi, S.
2013-12-01
The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecast and testing for periods before and after the 2011 Tohoku-oki earthquake. Because a seismic activity has changed dramatically since the 2011 event, performances of models have been affected very much. In addition, as there is the problem of authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. Also, we will discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.
NASA Astrophysics Data System (ADS)
de Ruiter, Marleen; Ward, Philip; Daniell, James; Aerts, Jeroen
2017-04-01
In a cross-discipline study, an extensive literature review has been conducted to increase the understanding of vulnerability indicators used in both earthquake- and flood vulnerability assessments, and to provide insights into potential improvements of earthquake and flood vulnerability assessments. It identifies and compares indicators used to quantitatively assess earthquake and flood vulnerability, and discusses their respective differences and similarities. Indicators have been categorized into Physical- and Social categories, and further subdivided into (when possible) measurable and comparable indicators. Physical vulnerability indicators have been differentiated to exposed assets such as buildings and infrastructure. Social indicators are grouped in subcategories such as demographics, economics and awareness. Next, two different vulnerability model types have been described that use these indicators: index- and curve-based vulnerability models. A selection of these models (e.g. HAZUS) have been described, and compared on several characteristics such as temporal- and spatial aspects. It appears that earthquake vulnerability methods are traditionally strongly developed towards physical attributes at an object scale and used in vulnerability curve models, whereas flood vulnerability studies focus more on indicators applied to aggregated land-use scales. Flood risk studies could be improved using approaches from earthquake studies, such as incorporating more detailed lifeline and building indicators, and developing object-based vulnerability curve assessments of physical vulnerability, for example by defining building material based flood vulnerability curves. Related to this, is the incorporation of time of the day based building occupation patterns (at 2am most people will be at home while at 2pm most people will be in the office). Earthquake assessments could learn from flood studies when it comes to the refined selection of social vulnerability indicators
NASA Astrophysics Data System (ADS)
Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.
2010-12-01
HAZUS-MH MR4 (HAZards U. S. Multi-Hazard Maintenance Release 4) is a risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local government use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affect ground motion analysis, seismic risk assessment and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma and 2001 Mw 6.8 Nisqually normal fault intraslab events and scenario large-magnitude Seattle reverse fault crustal events are modeled. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction- susceptibility hazards with local groundwater level and slope stability information. Where Shakemap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships developed from source parameters of both regional and global historical earthquakes to estimate strong ground motion. Ground motion and resulting ground failure due to earthquakes are then used to calculate, direct physical damage for general building stock, essential facilities, and lifelines, including transportation systems and utility systems. Earthquake losses are expressed in structural, economic and social terms. Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region
Parsons, Thomas E.; Geist, Eric L.
2009-01-01
The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for earthquake forecasting equally well as a characteristic model. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
NASA Astrophysics Data System (ADS)
Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.
2011-09-01
We present the method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of network telemetry in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the hypocenters of the earthquake, to compute the corrections to the arrival times of P and S waves at the observation station, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimal velocity models from the travel times of seismic waves.
Transportations Systems Modeling and Applications in Earthquake Engineering
2010-07-01
49 Figure 6 PGA map of a M7.7 earthquake on all three New Madrid fault segments (g)............... 50...Memphis, Tennessee. The NMSZ was responsible for the devastating 1811-1812 New Madrid earthquakes , the largest earthquakes ever recorded in the...Figure 6 PGA map of a M7.7 earthquake on all three New Madrid fault segments (g) Table 1 Fragility parameters for MSC steel bridge (Padgett 2007
NASA Astrophysics Data System (ADS)
Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.
2015-12-01
The Earthquake Research Committee(ERC)/HERP, Government of Japan (2013) revised their long-term evaluation of the forthcoming large earthquake along the Nankai Trough; the next earthquake is estimated M8 to 9 class, and the probability (P30) that the next earthquake will occur within the next 30 years (from Jan. 1, 2013) is 60% to 70%. In this study, we assess tsunami hazards (maximum coastal tsunami heights) in the near future, in terms of a probabilistic approach, from the next earthquake along Nankai Trough, on the basis of ERC(2013)'s report. The probabilistic tsunami hazard assessment that we applied is as follows; (1) Characterized earthquake fault models (CEFMs) are constructed on each of the 15 hypothetical source areas (HSA) that ERC(2013) showed. The characterization rule follows Toyama et al.(2015, JpGU). As results, we obtained total of 1441 CEFMs. (2) We calculate tsunamis due to CEFMs by solving nonlinear, finite-amplitude, long-wave equations with advection and bottom friction terms by finite-difference method. Run-up computation on land is included. (3) A time predictable model predicts the recurrent interval of the present seismic cycle is T=88.2 years (ERC,2013). We fix P30 = 67% by applying the renewal process based on BPT distribution with T and alpha=0.24 as its aperiodicity. (4) We divide the probability P30 into P30(i) for i-th subgroup consisting of the earthquakes occurring in each of 15 HSA by following a probability re-distribution concept (ERC,2014). Then each earthquake (CEFM) in i-th subgroup is assigned a probability P30(i)/N where N is the number of CEFMs in each sub-group. Note that such re-distribution concept of the probability is nothing but tentative because the present seismology cannot give deep knowledge enough to do it. Epistemic logic-tree approach may be required in future. (5) We synthesize a number of tsunami hazard curves at every evaluation points on coasts by integrating the information about 30 years occurrence
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 ?? 14 km slip patch centered at 15 km depth with peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40??. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and the locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
Analogue modelling of the rupture process of vulnerable stalagmites in an earthquake simulator
NASA Astrophysics Data System (ADS)
Gribovszki, Katalin; Bokelmann, Götz; Kovács, Károly; Hegymegi, Erika; Esterhazy, Sofi; Mónus, Péter
2017-04-01
Earthquakes hit urban centers in Europe infrequently, but occasionally with disastrous effects. Obtaining an unbiased view of seismic hazard is therefore very important. In principle, the best way to test Probabilistic Seismic Hazard Assessments (PSHA) is to compare them with observations that are entirely independent of the procedure used to produce PSHA models. Arguably, the most valuable information in this context should be information on long-term hazard, namely maximum intensities (or magnitudes) occurring over time intervals that are at least as long as a seismic cycle. Long-term information can in principle be gained from intact and vulnerable stalagmites in natural caves. These formations survived all earthquakes that have occurred, over thousands of years - depending on the age of the stalagmite. Their "survival" requires that the horizontal ground acceleration has never exceeded a certain critical value within that time period. To determine this critical value for the horizontal ground acceleration more precisely we need to understand the failure process of these intact and vulnerable stalagmites. More detailed information of the vulnerable stalagmites' rupture is required, and we have to know how much it depends on the shape and the substance of the investigated stalagmite. Predicting stalagmite failure limits using numerical modelling is faced with a number of approximations, e.g. from generating a manageable digital model. Thus it seemed reasonable to investigate the problem by analogue modelling as well. The advantage of analogue modelling among other things is that nearly real circumstances can be produced by simple and quick laboratory methods. The model sample bodies were made from different types of concrete and were cut out from real broken stalagmites originated from the investigated caves. These bodies were reduced-scaled with similar shape as the original, investigated stalagmites. During the measurements we could change both the shape and
Catalog of Hawaiian earthquakes, 1823-1959
Klein, Fred W.; Wright, Thomas L.
2000-01-01
This catalog of more than 17,000 Hawaiian earthquakes (of magnitude greater than or equal to 5), principally located on the Island of Hawaii, from 1823 through the third quarter of 1959 is designed to expand our ability to evaluate seismic hazard in Hawaii, as well as our knowledge of Hawaiian seismic rhythms as they relate to eruption cycles at Kilauea and Mauna Loa volcanoes and to subcrustal earthquake patterns related to the tectonic evolution of the Hawaiian chain.
NASA Astrophysics Data System (ADS)
Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.
2012-12-01
Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute, the University of Tokyo joined CSEP and started the Japanese testing center called as CSEP-Japan. This testing center provides an open access to researchers contributing earthquake forecast models applied to Japan. Now more than 100 earthquake forecast models were submitted on the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including sea area, Japanese mainland and Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. The total number of experiments was implemented for approximately 300 rounds. These results provide new knowledge concerning statistical forecasting models. We started a study for constructing a 3-dimensional earthquake forecasting model for Kanto district in Japan based on CSEP experiments under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity of the area ranges from shallower part to a depth of 80 km due to subducting Philippine Sea plate and Pacific plate, we need to study effect of depth distribution. We will develop models for forecasting based on the results of 2-D modeling. We defined the 3D - forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0 as in CSEP-Japan. In the first step of the study, we will install RI10K model (Nanjo, 2011) and the HISTETAS models (Ogata, 2011) to know if those models have good performance as in the 3 months 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP
Earthquake cycle simulations with rate-and-state friction and power-law viscoelasticity
NASA Astrophysics Data System (ADS)
Allison, Kali L.; Dunham, Eric M.
2018-05-01
We simulate earthquake cycles with rate-and-state fault friction and off-fault power-law viscoelasticity for the classic 2D antiplane shear problem of a vertical, strike-slip plate boundary fault. We investigate the interaction between fault slip and bulk viscous flow with experimentally-based flow laws for quartz-diorite and olivine for the crust and mantle, respectively. Simulations using three linear geotherms (dT/dz = 20, 25, and 30 K/km) produce different deformation styles at depth, ranging from significant interseismic fault creep to purely bulk viscous flow. However, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. Despite these similarities, variations in the predicted surface deformation might permit discrimination of the deformation mechanism using geodetic observations. Additionally, in the 25 and 30 K/km simulations, the crust drags the mantle; the 20 K/km simulation also predicts this, except within 10 km of the fault where the reverse occurs. However, basal tractions play a minor role in the overall force balance of the lithosphere, at least for the flow laws used in our study. Therefore, the depth-integrated stress on the fault is balanced primarily by shear stress on vertical, fault-parallel planes. Because strain rates are higher directly below the fault than far from it, stresses are also higher. Thus, the upper crust far from the fault bears a substantial part of the tectonic load, resulting in unrealistically high stresses. In the real Earth, this might lead to distributed plastic deformation or formation of subparallel faults. Alternatively, fault pore pressures in excess of hydrostatic and/or weakening mechanisms such as grain size reduction and thermo-mechanical coupling could lower the strength of the ductile fault root in the lower crust and, concomitantly, off-fault upper crustal stresses.
Modified two-layer social force model for emergency earthquake evacuation
NASA Astrophysics Data System (ADS)
Zhang, Hao; Liu, Hong; Qin, Xin; Liu, Baoxi
2018-02-01
Studies of crowd behavior with related research on computer simulation provide an effective basis for architectural design and effective crowd management. Based on low-density group organization patterns, a modified two-layer social force model is proposed in this paper to simulate and reproduce a group gathering process. First, this paper studies evacuation videos from the Luan'xian earthquake in 2012, and extends the study of group organization patterns to a higher density. Furthermore, taking full advantage of the strength in crowd gathering simulations, a new method on grouping and guidance is proposed while using crowd dynamics. Second, a real-life grouping situation in earthquake evacuation is simulated and reproduced. Comparing with the fundamental social force model and existing guided crowd model, the modified model reduces congestion time and truly reflects group behaviors. Furthermore, the experiment result also shows that a stable group pattern and a suitable leader could decrease collision and allow a safer evacuation process.
NASA Astrophysics Data System (ADS)
Tao, W.; Wan, Y.; Wang, K.; Zeng, Y.; Shen, Z.
2009-12-01
We model stress evolution and crustal deformation associated with the seismogenic process of the 2008 Mw7.9 Wenchuan, China earthquake. This earthquake ruptured a section of the Longmen Shan fault, which is a listric fault separating the eastern Tibetan plateau at northwest from the Sichuan basin at southeast, with a predominantly thrust component for the southwest section of the fault. Different driving mechanisms have been proposed for the fault system: either by channel flow in the lower crust, or lateral push from the eastern Tibetan plateau on the entire crust. A 2-D finite element model is devised to simulate the tectonic process and test validities of the models. A layered viscoelastic media is prescribed, and constrained from seismological and other geophysical investigation results, characterized with a weak lower crust in the western Tibetan plateau and a strong lower crust in the Sichuan basin. The interseismic, coseismic, and postseismic deformation processes are modeled, under constraints of GPS observed deformation fields during these time periods. Our preliminary result shows concentration of elastic strain energy accumulated mainly surrounding the lower part of the locking section of the seismogenic fault during the interseismic time period, implying larger stress drop at the lower part than at the upper part of the locking section of the fault, assuming a total release of the elastic stress accumulation during an earthquake. The coseismic stress change is the largest at the near field in the hanging-wall, offering explanation of extensive aftershock activities occurred in the region after the Wenchuan mainshock. A more complete picture of stress evolution and interaction between the upper and lower crust in the process during an earthquake cycle will be presented at the meeting.
Modeling earthquake rate changes in Oklahoma and Arkansas: possible signatures of induced seismicity
Llenos, Andrea L.; Michael, Andrew J.
2013-01-01
The rate of ML≥3 earthquakes in the central and eastern United States increased beginning in 2009, particularly in Oklahoma and central Arkansas, where fluid injection has occurred. We find evidence that suggests these rate increases are man‐made by examining the rate changes in a catalog of ML≥3 earthquakes in Oklahoma, which had a low background seismicity rate before 2009, as well as rate changes in a catalog of ML≥2.2 earthquakes in central Arkansas, which had a history of earthquake swarms prior to the start of injection in 2009. In both cases, stochastic epidemic‐type aftershock sequence models and statistical tests demonstrate that the earthquake rate change is statistically significant, and both the background rate of independent earthquakes and the aftershock productivity must increase in 2009 to explain the observed increase in seismicity. This suggests that a significant change in the underlying triggering process occurred. Both parameters vary, even when comparing natural to potentially induced swarms in Arkansas, which suggests that changes in both the background rate and the aftershock productivity may provide a way to distinguish man‐made from natural earthquake rate changes. In Arkansas we also compare earthquake and injection well locations, finding that earthquakes within 6 km of an active injection well tend to occur closer together than those that occur before, after, or far from active injection. Thus, like a change in productivity, a change in interevent distance distribution may also be an indicator of induced seismicity.
Michael, Andrew J.
2012-01-01
Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
NASA Astrophysics Data System (ADS)
Jones, K. B., II; Saxton, P. T.
2013-12-01
Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list alluding to a transmission related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous, high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta Earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion, due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate to an identifiable precursor. Secondly, there is neither a networked array for finding any epicentral locations, nor have there been any attempts to find even one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February, 2013, detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two
Earthquakes: Recurrence and Interoccurrence Times
NASA Astrophysics Data System (ADS)
Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.
2008-04-01
The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
Yellowstone volcano-tectonic microseismic cycles constrain models of migrating volcanic fluids
NASA Astrophysics Data System (ADS)
Massin, F.; Farrell, J.; Smith, R. B.
2011-12-01
The objective of our research is to evaluate the source properties of extensive earthquake swarms in and around the 0.64Myr Yellowstone caldera, Yellowstone National Park, that is also the locus of widespread hydrothermal activity and ground deformation. We use earthquake waveforms data to investigate seismic wave multiplets that occur within discrete earthquake sequences. Waveform cross-correlation coefficients are computed from data acquired at six high quality stations that are merged from data of identical earthquakes into multiplets. Multiplets provide important indicators on the rupture process of the distinct seismogenic structures. Our multiplet database allowed evaluation of the seismic-source chronology from 1992 to 2010. We assess the evolution of micro-earthquake triggering by evaluating the evolution of earthquake rates and magnitudes. Some striking differences appear between two kinds of seismic swarms: 1) swarms with a high rate of repeating earthquakes of more than 200 events per day, and 2) swarms with a low rate of repeating earthquakes (less than 20 events per day). The 2010 Madison Plateau, western caldera, and the 2008-2009 Yellowstone Lake, eastern caldera, earthquake swarms are two examples representing respectively cascading relaxation of a uniform stress, and an example of highly concentrated stress perturbation induced by a migrating material. The repeating earthquake pattern methodology was then used to characterize the composition of the migrating material by modelling the migration time-space pattern with a experimental thermo-physical simulations of solidification of a fluid filled propagating dike. Comparison of our results with independent GPS deformation data suggests a most-likely model of rhyolitic-granitic magma intrusion along a vertical dike outlined by the pattern of earthquakes. The magma-hydrothermal mix was modeled with a temperature of 800°C-900°C and an average volumetric injection flux between 1.5 and 5 m3/s. Our
Earthquake Forecasting System in Italy
NASA Astrophysics Data System (ADS)
Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.
2017-12-01
In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is the time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system that is developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. Such phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, international infrastructure aimed at evaluating quantitatively earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models were used: the Epidemic-Type Aftershock Sequences (ETAS) and the Short-Term Earthquake Probabilities (STEP). Here, we report the results from OEF's 24hour earthquake forecasting during the main phases of the 2016-2017 sequence occurred in Central Apennines (Italy).
Modelling low-frequency volcanic earthquakes in a viscoelastic medium with topography
NASA Astrophysics Data System (ADS)
Jousset, P.; Neuberg, J.
2003-04-01
Magma properties are fundamental to explain the volcanic eruption style as well as the generation and propagation of seismic waves. This study focusses on rheological magma properties and their impact on low-frequency volcanic earthquakes. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency earthquakes. Using a 2D finite difference scheme, we model the propagation of seismic energy initiated in a fluid-filled conduit embedded in a 2D homogeneous viscoelastic medium with topography. Topography is introduced by using a mapping procedure that stretches the computational rectangular grid into a grid which follows the topography. We model intrinsic attenuation by linear viscoelastic theory and we show that volcanic media can be approximated by a standard linear solid for seismic frequencies (i.e., above 2 Hz). Results demonstrate that attenuation modifies both amplitude and dispersive characteristics of low-frequency earthquakes. Low-frequency events are dispersive by nature; however, if attenuation is introduced, their dispersion characteristics will be altered. The topography modifies the amplitudes, depending on the position of seismographs at the surface. This study shows that we need to take into account attenuation and topography to interpret correctly observed low-frequency volcanic earthquakes. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.
Nonextensive models for earthquakes.
Silva, R; França, G S; Vilar, C S; Alcaniz, J S
2006-02-01
We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phy. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of fragment epsilon proportional to r3. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter , other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.
GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 earthquake and source modelling
NASA Astrophysics Data System (ADS)
Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.
2017-12-01
On 25th November 2016, a Ms6.7 earthquake occurred on Aktao, a county of Xinjiang, China. This earthquake was the largest earthquake occurred in the northeastern margin of the Pamir Plateau in the last 30 years. By GPS observation, we get the coseismic displacement of this earthquake. The maximum displacement site is located in the Muji Basin, 15km from south of the causative fault. The maximum deformation is down to 0.12m, and 0.10m for coseismic displacement, our results indicate that the earthquake has the characteristics of dextral strike-slip and normal-fault rupture. Based on the GPS results, we inverse the rupture distribution of the earthquake. The source model is consisted of two approximate independent zones with a depth of less than 20km, the maximum displacement of one zone is 0.6m, the other is 0.4m. The total seismic moment is Mw6.6.1 which is calculated by the geodetic inversion. The source model of GPS-derived is basically consistent with that of seismic waveform inversion, and is consistent with the surface rupture distribution obtained from field investigation. According to our inversion calculation, the recurrence period of strong earthquakes similar to this earthquake should be 30 60 years, and the seismic risk of the eastern segment of Muji fault is worthy of attention. This research is financially supported by National Natural Science Foundation of China (Grant No.41374030)
Prospective testing of Coulomb short-term earthquake forecasts
NASA Astrophysics Data System (ADS)
Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.
2009-12-01
Earthquake induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Center of Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short term models daily, and allows daily updates of the models. However, lots can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be made by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of
NASA Astrophysics Data System (ADS)
Cocco, M.
2001-12-01
Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporate stress transfer, has the main advantage to include the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unkown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can represent the most suitable tool to test the interaction between large magnitude earthquakes. Despite these important implications and the stimulating perspectives, there exist problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how and if the induced stress perturbations modify the ratio between small versus large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to
Vertical deformation through a complete seismic cycle at Isla Santa María, Chile
Wesson, Robert L.; Melnick, Daniel; Cisternas, Marco; Moreno, Marcos; Ely, Lisa
2014-01-01
Individual great earthquakes are posited to release the elastic strain energy that has accumulated over centuries by the gradual movement of tectonic plates1, 2. However, knowledge of plate deformation during a complete seismic cycle—two successive great earthquakes and the intervening interseismic period—remains incomplete3. A complete seismic cycle began in south-central Chile in 1835 with an earthquake of about magnitude 8.5 (refs 4, 5) and ended in 2010 with a magnitude 8.8 earthquake6. During the first earthquake, an uplift of Isla Santa María by 2.4 to 3 m was documented4, 5. In the second earthquake, the island was uplifted7 by 1.8 m. Here we use nautical surveys made in 1804, after the earthquake in 1835 and in 1886, together with modern echo sounder surveys and GPS measurements made immediately before and after the 2010 earthquake, to quantify vertical deformation through the complete seismic cycle. We find that in the period between the two earthquakes, Isla Santa María subsided by about 1.4 m. We simulate the patterns of vertical deformation with a finite-element model and find that they agree broadly with predictions from elastic rebound theory2. However, comparison with geomorphic and geologic records of millennial coastline emergence8, 9 reveal that 10–20% of the vertical uplift could be permanent.
NASA Astrophysics Data System (ADS)
Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.
2017-04-01
We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1. 6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which is not observed previously for the original sandpile. For this critical range of values for the probability, model statistics show remarkable comparison with long-period empirical data from earthquakes from different seismogenic regions. The proposed model has key advantages, the foremost of which is the fact that it simultaneously captures the energy, space, and time statistics of earthquakes by just introducing a single parameter, while introducing minimal parameters in the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
NASA Astrophysics Data System (ADS)
Lal, Sohan; Joshi, A.; Sandeep; Tomer, Monu; Kumar, Parveen; Kuo, Chun-Hsiang; Lin, Che-Min; Wen, Kuo-Liang; Sharma, M. L.
2018-05-01
On 25th April, 2015 a hazardous earthquake of moment magnitude 7.9 occurred in Nepal. Accelerographs were used to record the Nepal earthquake which is installed in the Kumaon region in the Himalayan state of Uttrakhand. The distance of the recorded stations in the Kumaon region from the epicenter of the earthquake is about 420-515 km. Modified semi-empirical technique of modeling finite faults has been used in this paper to simulate strong earthquake at these stations. Source parameters of the Nepal aftershock have been also calculated using the Brune model in the present study which are used in the modeling of the Nepal main shock. The obtained value of the seismic moment and stress drop is 8.26 × 1025 dyn cm and 10.48 bar, respectively, for the aftershock from the Brune model .The simulated earthquake time series were compared with the observed records of the earthquake. The comparison of full waveform and its response spectra has been made to finalize the rupture parameters and its location. The rupture of the earthquake was propagated in the NE-SW direction from the hypocenter with the rupture velocity 3.0 km/s from a distance of 80 km from Kathmandu in NW direction at a depth of 12 km as per compared results.
Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model
NASA Astrophysics Data System (ADS)
Thomas, Marion Y.; Bhat, Harsha S.
2018-05-01
Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alter the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high-strain rates. We consider 2D in-plane models, with a 1D right lateral fault featuring slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could be potentially seen as a geological signature of this transition. These scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.
Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model
NASA Astrophysics Data System (ADS)
Thomas, M. Y.; Bhat, H. S.
2017-12-01
Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alter the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high-strain rates. We consider 2D in-plane models, with a 1D right lateral fault featuring slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could be potentially seen as a geological signature of this transition. These scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aagaard, B; Brocher, T; Dreger, D
2007-02-09
We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to over predict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sitesmore » throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.« less
NASA Astrophysics Data System (ADS)
Hutchinson, Lauren; Stead, Doug; Rosser, Nick
2017-04-01
Understanding the behaviour of rock slopes in response to earthquake shaking is instrumental in response and relief efforts following large earthquakes as well as to ongoing risk management in earthquake affected areas. Assessment of the effects of seismic shaking on rock slope kinematics requires detailed surveys of the pre- and post-earthquake condition of the slope; however, at present, there is a lack of high resolution monitoring data from pre- and post-earthquake to facilitate characterization of seismically induced slope damage and validate models used to back-analyze rock slope behaviour during and following earthquake shaking. Therefore, there is a need for additional research where pre- and post- earthquake monitoring data is available. This paper presents the results of a direct comparison between terrestrial laser scans (TLS) collected in 2014, the year prior to the 2015 earthquake sequence, with that collected 18 months after the earthquakes and two monsoon cycles. The two datasets were collected using Riegl VZ-1000 and VZ-4000 full waveform laser scanners with high resolution (c. 0.1 m point spacing as a minimum). The scans cover the full landslide affected slope from the toe to the crest. The slope is located in Sindhupalchok District, Central Nepal which experienced some of the highest co-seismic and post-seismic landslide intensities across Nepal due to the proximity to the epicenters (<20 km) of both of the main aftershocks on April 26, 2015 (M 6.7) and May 12, 2015 (M7.3). During the 2015 earthquakes and subsequent 2015 and 2016 monsoons, the slope experienced rockfall and debris flows which are evident in satellite imagery and field photographs. Fracturing of the rock mass associated with the seismic shaking is also evident at scales not accessible through satellite and field observations. The results of change detection between the TLS datasets with an emphasis on quantification of seismically-induced slope damage is presented. Patterns in the
Comprehensive Areal Model of Earthquake-Induced Landslides: Technical Specification and User Guide
Miles, Scott B.; Keefer, David K.
2007-01-01
This report describes the complete design of a comprehensive areal model of earthquakeinduced landslides (CAMEL). This report presents the design process, technical specification of CAMEL. It also provides a guide to using the CAMEL source code and template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale model of earthquake-induced landslide hazard developed using fuzzy logic systems. CAMEL currently estimates areal landslide concentration (number of landslides per square kilometer) of six aggregated types of earthquake-induced landslides - three types each for rock and soil.
Identified EM Earthquake Precursors
NASA Astrophysics Data System (ADS)
Jones, Kenneth, II; Saxton, Patrick
2014-05-01
Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list alluding to a transmission related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous, high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance making it a high quality insulator. Penetrative flow could not be corroborated as well, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude, and the capability of penetrating solid rock. Additionally, fracture spaces, either air or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field occurring with associated phases using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February, 2013. The antennae have mobility and observations were noted for
Scoring annual earthquake predictions in China
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang; Jiang, Changsheng
2012-02-01
The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions that are of no concern, because they are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these earthquake predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of being failure) that the predictors have taken. Using the Poisson model, which is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes as the reference model, we evaluate the CEA predictions based on 1) a partial score for evaluating whether issuing the alarmed regions is based on information that differs from the reference model (knowledge of average seismicity level) and 2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but the overall performance is close to that of the reference model.
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.
2008-01-01
We estimate the ground motions produce by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to over predict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
Seismic quiescence in a frictional earthquake model
NASA Astrophysics Data System (ADS)
Braun, Oleg M.; Peyrard, Michel
2018-04-01
We investigate the origin of seismic quiescence with a generalized version of the Burridge-Knopoff model for earthquakes and show that it can be generated by a multipeaked probability distribution of the thresholds at which contacts break. Such a distribution is not assumed a priori but naturally results from the aging of the contacts. We show that the model can exhibit quiescence as well as enhanced foreshock activity, depending on the value of some parameters. This provides a generic understanding for seismic quiescence, which encompasses earlier specific explanations and could provide a pathway for a classification of faults.
Earthquake triggering by transient and static deformations
Gomberg, J.; Beeler, N.M.; Blanpied, M.L.; Bodin, P.
1998-01-01
Observational evidence for both static and transient near-field and far-field triggered seismicity are explained in terms of a frictional instability model, based on a single degree of freedom spring-slider system and rate- and state-dependent frictional constitutive equations. In this study a triggered earthquake is one whose failure time has been advanced by ??t (clock advance) due to a stress perturbation. Triggering stress perturbations considered include square-wave transients and step functions, analogous to seismic waves and coseismic static stress changes, respectively. Perturbations are superimposed on a constant background stressing rate which represents the tectonic stressing rate. The normal stress is assumed to be constant. Approximate, closed-form solutions of the rate-and-state equations are derived for these triggering and background loads, building on the work of Dieterich [1992, 1994]. These solutions can be used to simulate the effects of static and transient stresses as a function of amplitude, onset time t0, and in the case of square waves, duration. The accuracies of the approximate closed-form solutions are also evaluated with respect to the full numerical solution and t0. The approximate solutions underpredict the full solutions, although the difference decreases as t0, approaches the end of the earthquake cycle. The relationship between ??t and t0 differs for transient and static loads: a static stress step imposed late in the cycle causes less clock advance than an equal step imposed earlier, whereas a later applied transient causes greater clock advance than an equal one imposed earlier. For equal ??t, transient amplitudes must be greater than static loads by factors of several tens to hundreds depending on t0. We show that the rate-and-state model requires that the total slip at failure is a constant, regardless of the loading history. Thus a static load applied early in the cycle, or a transient applied at any time, reduces the stress
Hidden Earthquake Potential in Plate Boundary Transition Zones
NASA Astrophysics Data System (ADS)
Furlong, Kevin P.; Herman, Matthew; Govers, Rob
2017-04-01
Plate boundaries can exhibit spatially abrupt changes in their long-term tectonic deformation (and associated kinematics) at triple junctions and other sites of changes in plate boundary structure. How earthquake behavior responds to these abrupt tectonic changes is unclear. The situation may be additionally obscured by the effects of superimposed deformational signals - juxtaposed short-term (earthquake cycle) kinematics may combine to produce a net deformational signal that does not reflect intuition about the actual strain accumulation in the region. Two examples of this effect are in the vicinity of the Mendocino triple junction (MTJ) along the west coast of North America, and at the southern end of the Hikurangi subduction zone, New Zealand. In the region immediately north of the MTJ, GPS-based observed crustal displacements (relative to North America (NAm)) are intermediate between Pacific and Juan de Fuca (JdF) motions. With distance north, these displacements rotate to become more aligned with JdF - NAm displacements, i.e. to motions expected along a coupled subduction interface. The deviation of GPS motions from the coupled subduction interface signal near the MTJ has been previously interpreted to reflect clock-wise rotation of a coastal, crustal block and/or reduced coupling at the southern Cascadia margin. The geologic record of crustal deformation near the MTJ reflects the combined effects of northward crustal shortening (on geologic time scales) associated with the MTJ Crustal Conveyor (Furlong and Govers, 1999) overprinted onto the subduction earthquake cycle signal. With this interpretation, the Cascadia subduction margin appears to be well-coupled along its entire length, consistent with paleo-seismic records of large earthquake ruptures extending to its southern limit. At the Hikurangi to Alpine Fault transition in New Zealand, plate interactions switch from subduction to oblique translation as a consequence of changes in lithospheric structure of
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
GEM1: First-year modeling and IT activities for the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Anderson, G.; Giardini, D.; Wiemer, S.
2009-04-01
GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, penly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global
Geist, Eric L.; Titov, Vasily V.; Arcas, Diego; Pollitz, Fred F.; Bilek, Susan L.
2007-01-01
Results from different tsunami forecasting and hazard assessment models are compared with observed tsunami wave heights from the 26 December 2004 Indian Ocean tsunami. Forecast models are based on initial earthquake information and are used to estimate tsunami wave heights during propagation. An empirical forecast relationship based only on seismic moment provides a close estimate to the observed mean regional and maximum local tsunami runup heights for the 2004 Indian Ocean tsunami but underestimates mean regional tsunami heights at azimuths in line with the tsunami beaming pattern (e.g., Sri Lanka, Thailand). Standard forecast models developed from subfault discretization of earthquake rupture, in which deep- ocean sea level observations are used to constrain slip, are also tested. Forecast models of this type use tsunami time-series measurements at points in the deep ocean. As a proxy for the 2004 Indian Ocean tsunami, a transect of deep-ocean tsunami amplitudes recorded by satellite altimetry is used to constrain slip along four subfaults of the M >9 Sumatra–Andaman earthquake. This proxy model performs well in comparison to observed tsunami wave heights, travel times, and inundation patterns at Banda Aceh. Hypothetical tsunami hazard assessments models based on end- member estimates for average slip and rupture length (Mw 9.0–9.3) are compared with tsunami observations. Using average slip (low end member) and rupture length (high end member) (Mw 9.14) consistent with many seismic, geodetic, and tsunami inversions adequately estimates tsunami runup in most regions, except the extreme runup in the western Aceh province. The high slip that occurred in the southern part of the rupture zone linked to runup in this location is a larger fluctuation than expected from standard stochastic slip models. In addition, excess moment release (∼9%) deduced from geodetic studies in comparison to seismic moment estimates may generate additional tsunami energy, if the
NASA Astrophysics Data System (ADS)
Dempsey, David; Suckale, Jenny
2016-05-01
Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. Modeling plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. Here we study the collective properties of induced earthquake sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a model that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers earthquakes. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the earthquake hypocenter. Under tectonic loading conditions, our model exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers earthquakes, the modeled directivity bias is small and may be too weak for practical detection. For two hypothetical injection scenarios, we estimate the number of earthquake observations required to detect directivity bias.
What Controls Subduction Earthquake Size and Occurrence?
NASA Astrophysics Data System (ADS)
Ruff, L. J.
2008-12-01
There is a long history of observational studies on the size and recurrence intervals of the large underthrusting earthquakes in subduction zones. In parallel with this documentation of the variability in both recurrence times and earthquake sizes -- both within and amongst subduction zones -- there have been numerous suggestions for what controls size and occurrence. In addition to the intrinsic scientific interest in these issues, there are direct applications to hazards mitigation. In this overview presentation, I review past progress, consider current paradigms, and look toward future studies that offer some resolution of long- standing questions. Given the definition of seismic moment, earthquake size is the product of overall static stress drop, down-dip fault width, and along-strike fault length. The long-standing consensus viewpoint is that for the largest earthquakes in a subduction zone: stress-drop is constant, fault width is the down-dip extent of the seismogenic portion of the plate boundary, but that along-strike fault length can vary from one large earthquake to the next. While there may be semi-permanent segments along a subduction zone, successive large earthquakes can rupture different combinations of segments. Many investigations emphasize the role of asperities within the segments, rather than segment edges. Thus, the question of earthquake size is translated into: "What controls the along-strike segmentation, and what determines which segments will rupture in a particular earthquake cycle?" There is no consensus response to these questions. Over the years, the suggestions for segmentation control include physical features in the subducted plate, physical features in the over-lying plate, and more obscure -- and possibly ever-changing -- properties of the plate interface such as the hydrologic conditions. It seems that the full global answer requires either some unforeseen breakthrough, or the long-term hard work of falsifying all candidate
Demand surge following earthquakes
Olsen, Anna H.
2012-01-01
Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.
Nonextensive models for earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, R.; Franca, G.S.; Vilar, C.S.
2006-02-15
We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phy. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of fragment {epsilon}{proportional_to}r{sup 3}. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofisica.more » Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.« less
Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximillan J.
2017-01-01
We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as
Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps
NASA Astrophysics Data System (ADS)
Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.
2014-12-01
Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real-time and are providing feedback on their experiences of performance and potential uses within their organization. Beta user interactions allow the ShakeAlert team to discern: which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop cover, and hold on; automated processes and procedures, such as opening elevator or fire stations doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, and funding requirements to implement their automated controls. The use of models and mobile apps are beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.
Precursory changes in seismic velocity for the spectrum of earthquake failure modes
Scuderi, M.M.; Marone, C.; Tinti, E.; Di Stefano, G.; Collettini, C.
2016-01-01
Temporal changes in seismic velocity during the earthquake cycle have the potential to illuminate physical processes associated with fault weakening and connections between the range of fault slip behaviors including slow earthquakes, tremor and low frequency earthquakes1. Laboratory and theoretical studies predict changes in seismic velocity prior to earthquake failure2, however tectonic faults fail in a spectrum of modes and little is known about precursors for those modes3. Here we show that precursory changes of wave speed occur in laboratory faults for the complete spectrum of failure modes observed for tectonic faults. We systematically altered the stiffness of the loading system to reproduce the transition from slow to fast stick-slip and monitored ultrasonic wave speed during frictional sliding. We find systematic variations of elastic properties during the seismic cycle for both slow and fast earthquakes indicating similar physical mechanisms during rupture nucleation. Our data show that accelerated fault creep causes reduction of seismic velocity and elastic moduli during the preparatory phase preceding failure, which suggests that real time monitoring of active faults may be a means to detect earthquake precursors. PMID:27597879
Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale
NASA Astrophysics Data System (ADS)
Sobolev, S. V.; Muldashev, I. A.
2015-12-01
Subduction is an essentially multi-scale process with time-scales spanning from geological to earthquake scale with the seismic cycle in-between. Modelling of such process constitutes one of the largest challenges in geodynamic modelling today.Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total time of millions of years. This technique allows to follow in details deformation process during the entire seismic cycle and multiple seismic cycles. We observe various deformation patterns during modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to the conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after the great (M>9) earthquakes. Interestingly, in our model an average slip velocity at the fault closely follows hyperbolic decay law. In natural observations, such deformation is interpreted as an afterslip, while in our model it is caused by the viscoelastic relaxation of mantle wedge with viscosity strongly varying with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of the modeling of deformation of the
NASA Astrophysics Data System (ADS)
Huang, Ying; Bevans, W. J.; Xiao, Hai; Zhou, Zhi; Chen, Genda
2012-04-01
During or after an earthquake event, building system often experiences large strains due to shaking effects as observed during recent earthquakes, causing permanent inelastic deformation. In addition to the inelastic deformation induced by the earthquake effect, the post-earthquake fires associated with short fuse of electrical systems and leakage of gas devices can further strain the already damaged structures during the earthquakes, potentially leading to a progressive collapse of buildings. Under these harsh environments, measurements on the involved building by various sensors could only provide limited structural health information. Finite element model analysis, on the other hand, if validated by predesigned experiments, can provide detail structural behavior information of the entire structures. In this paper, a temperature dependent nonlinear 3-D finite element model (FEM) of a one-story steel frame is set up by ABAQUS based on the cited material property of steel from EN 1993-1.2 and AISC manuals. The FEM is validated by testing the modeled steel frame in simulated post-earthquake environments. Comparisons between the FEM analysis and the experimental results show that the FEM predicts the structural behavior of the steel frame in post-earthquake fire conditions reasonably. With experimental validations, the FEM analysis of critical structures could be continuously predicted for structures in these harsh environments for a better assistant to fire fighters in their rescue efforts and save fire victims.
NASA Astrophysics Data System (ADS)
Glasscoe, Margaret T.; Wang, Jun; Pierce, Marlon E.; Yoder, Mark R.; Parker, Jay W.; Burl, Michael C.; Stough, Timothy M.; Granat, Robert A.; Donnellan, Andrea; Rundle, John B.; Ma, Yu; Bawden, Gerald W.; Yuen, Karen
2015-08-01
Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing new capabilities for decision making utilizing remote sensing data and modeling software to provide decision support for earthquake disaster management and response. E-DECIDER incorporates the earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools allows us to provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). This in turn is delivered through standards-compliant web services for desktop and hand-held devices.
Empirical models for the prediction of ground motion duration for intraplate earthquakes
NASA Astrophysics Data System (ADS)
Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.
2017-07-01
Many empirical relationships for the earthquake ground motion duration were developed for interplate region, whereas only a very limited number of empirical relationships exist for intraplate region. Also, the existing relationships were developed based mostly on the scaled recorded interplate earthquakes to represent intraplate earthquakes. To the author's knowledge, none of the existing relationships for the intraplate regions were developed using only the data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using the data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consists of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. The non-linear mixed-effect (NLMEs) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to be decreased with an increase in the hypocentral distance and increased with an increase in the magnitude of the earthquake. The significant duration was found to be increased with the increase in the magnitude and hypocentral distance of the earthquake. Both significant and bracketed durations were predicted higher in rock sites than in soil sites. The predictive relationships developed herein are compared with the existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for a significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the
Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan
NASA Astrophysics Data System (ADS)
Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.
2017-12-01
An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which hereafter we call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the earthquake can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges, 100 km northeastern away from the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached peak amplitude (1.5-2.0 cm) of leading upward signals followed by continuous oscillations [Fukao et al., 2016]. For modeling its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations, and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adapted with the tsunami simulation code, JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting source location around the caldera, we found the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, synthetic waves show no initial downward signal that was observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine sizes and amplitudes of the main uplift and the subsidence ring. As a result, the model of a main uplift of around 1.0 m with a radius of 4 km surrounded by a ring of small subsidence shows good agreement of synthetic and observed waveforms. The results yield two implications for the deformation process that help us to understanding
Dynamic modeling of normal faults of the 2016 Central Italy earthquake sequence
NASA Astrophysics Data System (ADS)
Aochi, Hideo
2017-04-01
The earthquake sequence of the Central Italy in 2016 are characterized mainly by the Mw6.0 24th August, Mw5.9 26th October and Mw6.4 30th October as well as two Mw5.4 earthquakes (24th August, 26th October) (catalogue INGV). They all show normal faulting mechanisms corresponding to the Apennines's tectonics. They are aligned briefly along NNW-SSE axis, and they may not be on a single continuous fault plane. Therefore, dynamic rupture modeling of sequences should be carried out supposing co-planar normal multiple segments. We apply a Boundary Domain Method (BDM, Goto and Bielak, GJI, 2008) coupling a boundary integral equation method and a domain-based method, namely a finite difference method in this study. The Mw6.0 24th August earthquake is modeled. We use the basic information of hypocenter position, focal mechanism and potential ruptured dimension from the INGV catalogue and Tinti et al., GRL, 2016), and begin with a simple condition (homogeneous boundary condition). From our preliminary simulations, it is shown that a uniformly extended rupture model does not fit the near-field ground motions and localized heterogeneity would be required.
Evaluation of earthquake potential in China
NASA Astrophysics Data System (ADS)
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative " a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model
NASA Astrophysics Data System (ADS)
Marc, O.; Hovius, N.; Meunier, P.; Rault, C.
2017-12-01
In tectonically active areas, earthquakes are an important trigger of landslides with significant impact on hillslopes and river evolutions. However, detailed prediction of landslides locations and properties for a given earthquakes remain difficult.In contrast we propose, landscape scale, analytical prediction of bulk coseismic landsliding, that is total landslide area and volume (Marc et al., 2016a) as well as the regional area within which most landslide must distribute (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and therefore could be implemented in landscape evolution model aiming at engaging with erosion dynamics at the scale of the seismic cycle. To assess the model we have compiled and normalized estimates of total landslide volume, total landslide area and regional area affected by landslides for 40, 17 and 83 earthquakes, respectively. We have found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict within a factor of 2 the landslide areas and associated volumes for about 70% of the cases in our databases. The prediction of regional area affected do not require a calibration for the landscape steepness and gives a prediction within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment.Some of the outliers seems related to exceptional rock mass strength in the epicentral area or shaking duration and other seismic source complexities ignored by the model. Applications include prediction on the mass balance of earthquakes and
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
NASA Astrophysics Data System (ADS)
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
A fragmentation model of earthquake-like behavior in internet access activity
NASA Astrophysics Data System (ADS)
Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.
We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter-law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. Thus, the dataset is similar to the earthquake data except that two power-law regimes are observed. Using the proposed model, we are able to identify underlying dynamics responsible in generating the observed dual power-law distributions. The model is universal enough for its applicability to any physical and human dynamics that is limited by finite resources such as space, energy, time or opportunity.
Simulation of the Burridge-Knopoff model of earthquakes with variable range stress transfer.
Xia, Junchao; Gould, Harvey; Klein, W; Rundle, J B
2005-12-09
Simple models of earthquake faults are important for understanding the mechanisms for their observed behavior, such as Gutenberg-Richter scaling and the relation between large and small events, which is the basis for various forecasting methods. Although cellular automaton models have been studied extensively in the long-range stress transfer limit, this limit has not been studied for the Burridge-Knopoff model, which includes more realistic friction forces and inertia. We find that the latter model with long-range stress transfer exhibits qualitatively different behavior than both the long-range cellular automaton models and the usual Burridge-Knopoff model with nearest-neighbor springs, depending on the nature of the velocity-weakening friction force. These results have important implications for our understanding of earthquakes and other driven dissipative systems.
NASA Astrophysics Data System (ADS)
Xie, J.; Wang, M.; Liu, K.
2017-12-01
The 2008 Wenchuan Ms 8.0 earthquake caused overwhelming destruction to vast mountains areas in Sichuan province. Numerous seismic landslides damaged the forest and vegetation cover, and caused substantial loose sediment piling up in the valleys. The movement and fill-up of loose materials led to riverbeds aggradation, thus made the earthquake-struck area more susceptible to flash floods with increasing frequency and intensity of extreme rainfalls. This study investigated the response of sediment and river channel evolution to different rainfall scenarios after the Wenchuan earthquake. The study area was chosen in a catchment affected by the earthquake in Northeast Sichuan province, China. We employed the landscape evolution model CAESAR-lisflood to explore the material migration rules and then assessed the potential effects under two rainfall scenarios. The model parameters were calibrated using the 2013 extreme rainfall event, and the experimental rainfall scenarios were of different intensity and frequency over a 10-year period. The results indicated that CAESAR-lisflood was well adapted to replicate the sediment migration, particularly the fluvial processes after earthquake. With respect to the effects of rainfall intensity, the erosion severity in upstream gullies and the deposition severity in downstream channels, correspondingly increased with the increasing intensity of extreme rainfalls. The modelling results showed that buildings in the catchment suffered from flash floods increased by more than a quarter from the normal to the enhanced rainfall scenarios in ten years, which indicated a potential threat to the exposures nearby the river channel, in the context of climate change. Simulation on landscape change is of great significance, and contributes to early warning of potential geological risks after earthquake. Attention on the high risk area by local government and the public is highly suggested in our study.
Metamorphic records of multiple seismic cycles during subduction
Hacker, Bradley R.; Seward, Gareth G. E.; Kelley, Chris S.
2018-01-01
Large earthquakes occur in rocks undergoing high-pressure/low-temperature metamorphism during subduction. Rhythmic major-element zoning in garnet is a common product of such metamorphism, and one that must record a fundamental subduction process. We argue that rhythmic major-element zoning in subduction zone garnets from the Franciscan Complex, California, developed in response to growth-dissolution cycles driven by pressure pulses. Using electron probe microanalysis and novel techniques in Raman and synchrotron Fourier transform infrared microspectroscopy, we demonstrate that at least four such pressure pulses, of magnitude 100–350 MPa, occurred over less than 300,000 years. These pressure magnitude and time scale constraints are most consistent with the garnet zoning having resulted from periodic overpressure development-dissipation cycles, related to pore-fluid pressure fluctuations linked to earthquake cycles. This study demonstrates that some metamorphic reactions can track individual earthquake cycles and thereby opens new avenues to the study of seismicity. PMID:29568800
Earthquake precursors: activation or quiescence?
NASA Astrophysics Data System (ADS)
Rundle, John B.; Holliday, James R.; Yoder, Mark; Sachs, Michael K.; Donnellan, Andrea; Turcotte, Donald L.; Tiampo, Kristy F.; Klein, William; Kellogg, Louise H.
2011-10-01
We discuss the long-standing question of whether the probability for large earthquake occurrence (magnitudes m > 6.0) is highest during time periods of smaller event activation, or highest during time periods of smaller event quiescence. The physics of the activation model are based on an idea from the theory of nucleation, that a small magnitude earthquake has a finite probability of growing into a large earthquake. The physics of the quiescence model is based on the idea that the occurrence of smaller earthquakes (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast models illustrating, respectively, activation and quiescence. The activation model assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no choice as to which mechanism is preferred. Both of these models are in fact a means of filtering the seismicity time-series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of earthquakes, we show that of the two models, activation and quiescence, the latter appears to be the better model, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic earthquake model for California seismicity, Virtual California. This model includes not only earthquakes produced from increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast models to the simulated data, we come to the opposite conclusion. Here, the activation forecast model is preferred to the quiescence model, presumably due to the fact that the BASS component of the model is essentially a model for activated seismicity. These
Modelling Psychological Responses to the Great East Japan Earthquake and Nuclear Incident
Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O.
2012-01-01
The Great East Japan (Tōhoku/Kanto) earthquake of March 2011was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11–13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan; Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events. PMID:22666380
NASA Astrophysics Data System (ADS)
Khachikyan, Galina; Inchin, Alexander; Toyshiev, Nursultan
An analysis of data of global seismological catalog NEIC (National Earthquake Information Center of the U.S. Geological Survey) for 1973-2011 (182933 events with magnitude equal to 4.5 and more) has been carried out with taken into account the geometry of the main geomagnetic field as gives the International Geomagnetic Reference Field (IGRF-11) model. It is found that the greatest number of earthquakes occurs in seismic areas penetrated by the geomagnetic force lines L=1.0-1.1, and additionally, the L-shell distribution of earthquake counting rate is peaked at the L equal to 2.0-2.2, which are inhabited by the Anomalous Cosmic Rays (ACRs). It is revealed that occurrence of strong earthquakes (with magnitude 7.0 and more) in these areas is modulated by the 11 year solar cycle. Namely, during 1973-2011, twenty strong earthquakes occurred in regions where the L=2.0-2.2 are loaned into the earth’s crust and, surprisingly, all of these earthquakes occurred only at the declining phase of the 11 year solar cycles while were absent at the ascending phase. Solar modulation of earthquake occurrence may be explained at present in the frame of a modern idea that earthquake is triggered by the electric currents flowing into the global electric circuit (GEC), where the charged geomagnetic force lines play the role of conductors (field align currents). The operation of GEC depends on intensity of cosmic rays which provide ionization and conductivity of the air in the middle atmosphere. Since the ACRs are especially sensitive to solar modulation, and since they populate the L of 2.0, it may be expected that earthquake occurrence in the areas penetrated by L of 2.0 would be especially sensitive to solar modulation. Our results prove this expectation, but much work is required to study this problem in more details.
NASA Astrophysics Data System (ADS)
Norbeck, J. H.; Rubinstein, J. L.
2018-04-01
The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. We develop a reservoir model to calculate the hydrologic conditions associated with the activity of 902 saltwater disposal wells injecting into the Arbuckle aquifer. Estimates of basement fault stressing conditions inform a rate-and-state friction earthquake nucleation model to forecast the seismic response to injection. Our model replicates many salient features of the induced earthquake sequence, including the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. We present evidence for variable time lags between changes in injection and seismicity rates, consistent with the prediction from rate-and-state theory that seismicity rate transients occur over timescales inversely proportional to stressing rate. Given the efficacy of the hydromechanical model, as confirmed through a likelihood statistical test, the results of this study support broader integration of earthquake physics within seismic hazard analysis.
Self-Organized Criticality in an Anisotropic Earthquake Model
NASA Astrophysics Data System (ADS)
Li, Bin-Quan; Wang, Sheng-Jun
2018-03-01
We have made an extensive numerical study of a modified model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations were considered in this paper. One situation is that the energy of the unstable site is redistributed to its nearest neighbors randomly not averagely and keeps itself to zero. The other situation is that the energy of the unstable site is redistributed to its nearest neighbors randomly and keeps some energy for itself instead of reset to zero. Different boundary conditions were considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality can be excited only in the conservative case or the approximate conservative case in the above situations. Some evidence indicated that the critical exponent of both above situations and the original OFC model tend to the same result in the conservative case. The only difference is that the avalanche size in the original model is bigger. This result may be closer to the real world, after all, every crust plate size is different. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA
NASA Astrophysics Data System (ADS)
Lorito, S.
2013-05-01
The occurrence of several mega-thrust tsunamigenic earthquakes in the last decade, including but not limited to the 2004 Sumatra-Andaman, the 2010 Maule, and 2011 Tohoku earthquakes, has been a dramatic reminder of the limitations in our capability of assessing earthquake and tsunami hazard and risk. However, the increasingly high-quality geophysical observational networks allowed the retrieval of most accurate than ever models of the rupture process of mega-thrust earthquakes, thus paving the way for future improved hazard assessments. Probabilistic Tsunami Hazard Analysis (PTHA) methodology, in particular, is less mature than its seismic counterpart, PSHA. Worldwide recent research efforts of the tsunami science community allowed to start filling this gap, and to define some best practices that are being progressively employed in PTHA for different regions and coasts at threat. In the first part of my talk, I will briefly review some rupture models of recent mega-thrust earthquakes, and highlight some of their surprising features that likely result in bigger error bars associated to PTHA results. More specifically, recent events of unexpected size at a given location, and with unexpected rupture process features, posed first-order open questions which prevent the definition of an heterogeneous rupture probability along a subduction zone, despite of several recent promising results on the subduction zone seismic cycle. In the second part of the talk, I will dig a bit more into a specific ongoing effort for improving PTHA methods, in particular as regards epistemic and aleatory uncertainties determination, and the computational PTHA feasibility when considering the full assumed source variability. Only logic trees are usually explicated in PTHA studies, accounting for different possible assumptions on the source zone properties and behavior. The selection of the earthquakes to be actually modelled is then in general made on a qualitative basis or remains implicit
NASA Astrophysics Data System (ADS)
Ohyanagi, S.; Dileonardo, C.
2013-12-01
As a natural phenomenon earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends were tested against earthquake data. Earthquakes above Mw 4.0 located on shore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spread of candlestick prior to the occurrence of events greater than Mw 6.0. A second pattern shows convergence in the Bollinger Band, which implies a positive or negative change in the trend of earthquakes. Both patterns match general models for the buildup and release of strain through the earthquake cycle, and agree with both the characteristics of the candlestick chart and Bollinger Band analysis. These results show there is a high correlation between patterns in earthquake occurrence and trend analysis by these two statistical methods. The results of this study agree with the appropriateness of the application of these financial analysis methods to the analysis of earthquake occurrence.
Jaiswal, Kishor; Wald, D.J.
2013-01-01
This chapter summarizes the state-of-the-art for rapid earthquake impact estimation. It details the needs and challenges associated with quick estimation of earthquake losses following global earthquakes, and provides a brief literature review of various approaches that have been used in the past. With this background, the chapter introduces the operational earthquake loss estimation system developed by the U.S. Geological Survey (USGS) known as PAGER (for Prompt Assessment of Global Earthquakes for Response). It also details some of the ongoing developments of PAGER’s loss estimation models to better supplement the operational empirical models, and to produce value-added web content for a variety of PAGER users.
Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake
Hayes, Gavin P.; Herman, Matthew W.; Barnhart, William D.; Furlong, Kevin P.; Riquelme, Sebástian; Benz, Harley M.; Bergman, Eric; Barrientos, Sergio; Earle, Paul S.; Samsonov, Sergey
2014-01-01
The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile which had not ruptured in a megathrust earthquake since a M ~8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March–April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.
Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake.
Hayes, Gavin P; Herman, Matthew W; Barnhart, William D; Furlong, Kevin P; Riquelme, Sebástian; Benz, Harley M; Bergman, Eric; Barrientos, Sergio; Earle, Paul S; Samsonov, Sergey
2014-08-21
The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which had not ruptured in a megathrust earthquake since a M ∼8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March-April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.
Raccanello, Daniela; Burro, Roberto; Hall, Rob
2017-01-01
We explored whether and how the exposure to a natural disaster such as the 2012 Emilia Romagna earthquake affected the development of children's emotional competence in terms of understanding, regulating, and expressing emotions, after two years, when compared with a control group not exposed to the earthquake. We also examined the role of class level and gender. The sample included two groups of children (n = 127) attending primary school: The experimental group (n = 65) experienced the 2012 Emilia Romagna earthquake, while the control group (n = 62) did not. The data collection took place two years after the earthquake, when children were seven or ten-year-olds. Beyond assessing the children's understanding of emotions and regulating abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of earthquakes and associated emotions, and a structured task on the intensity of some target emotions. We applied Generalized Linear Mixed Models. Exposure to the earthquake did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of earthquakes, emotional language, and emotions associated with earthquakes were, respectively, more complex, frequent, and intense for children who had experienced the earthquake, and at increasing ages. Our data extend the generalizability of theoretical models on children's psychological functioning following disasters, such as the dose-response model and the organizational-developmental model for child resilience, and provide further knowledge on children's emotional resources related to natural disasters, as a basis for planning educational prevention programs.
Far-Field and Middle-Field Vertical Velocities Associated with Megathrust Earthquakes
NASA Astrophysics Data System (ADS)
Fleitout, L.; Trubienko, O.; Klein, E.; Vigny, C.; Garaud, J.; Shestakov, N.; Satirapod, C.; Simons, W. J.
2013-12-01
The recent megathrust earthquakes (Sumatra, Chili and Japan) have induced far-field postseismic subsidence with velocities from a few mm/yr to more than 1cm/yr at distances from 500 to 1500km from the earthquake epicentre, for several years following the earthquake. This subsidence is observed in Argentina, China, Korea, far-East Russia and in Malaysia and Thailand as reported by Satirapod et al. ( ASR, 2013). In the middle-field a very pronounced uplift is localized on the flank of the volcanic arc facing the trench. This is observed both over Honshu, in Chile and on the South-West coast of Sumatra. In Japan, the deformations prior to Tohoku earthquake are well measured by the GSI GPS network: While the East coast was slightly subsiding, the West coast was raising. A 3D finite element code (Zebulon-Zset) is used to understand the deformations through the seismic cycle in the areas surrounding the last three large subduction earthquakes. The meshes designed for each region feature a broad spherical shell portion with a viscoelastic asthenosphere. They are refined close to the subduction zones. Using these finite element models, we find that the pattern of the predicted far-field vertical postseismic displacements depends upon the thicknesses of the elastic plate and of the low viscosity asthenosphere. A low viscosity asthenosphere at shallow depth, just below the lithosphere is required to explain the subsidence at distances from 500 to 1500km. A thick (for example 600km) asthenosphere with a uniform viscosity predicts subsidence too far away from the trench. Slip on the subduction interface is unable tot induce the observed far-field subsidence. However, a combination of relaxation in a low viscosity wedge and slip or relaxation on the bottom part of the subduction interface is necessary to explain the observed postseismic uplift in the middle-field (volcanic arc area). The creep laws of the various zones used to explain the postseismic data can be injected in
Testing hypotheses of earthquake occurrence
NASA Astrophysics Data System (ADS)
Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.
2003-12-01
We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would be also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of
Connecting slow earthquakes to huge earthquakes.
Obara, Kazushige; Kato, Aitaro
2016-07-15
Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.
Do weak global stresses synchronize earthquakes?
NASA Astrophysics Data System (ADS)
Bendick, R.; Bilham, R.
2017-08-01
Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time predictable, but still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.
Regional Earthquake Likelihood Models: A realm on shaky grounds?
NASA Astrophysics Data System (ADS)
Kossobokov, V.
2005-12-01
Seismology is juvenile and its appropriate statistical tools to-date may have a "medievil flavor" for those who hurry up to apply a fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic" earthquake forecasts/predictions must be defined with a scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and, therefore, quantify objectively a forecast/prediction method performance. Likelihood scoring is one of the delicate tools of Statistics, which could be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for a misuse of likelihood as well as other statistical methods on practice. The flaw could be avoided by an accurate verification of generic probability models on the empirical data. It is not an easy task in the frames of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated just in one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in automatic calculations. As a result, since the date of publication in Nature the United States Geological Survey website delivers to the public, emergency
Burro, Roberto; Hall, Rob
2017-01-01
A major earthquake has a potentially highly traumatic impact on children’s psychological functioning. However, while many studies on children describe negative consequences in terms of mental health and psychiatric disorders, little is known regarding how the developmental processes of emotions can be affected following exposure to disasters. Objectives We explored whether and how the exposure to a natural disaster such as the 2012 Emilia Romagna earthquake affected the development of children’s emotional competence in terms of understanding, regulating, and expressing emotions, after two years, when compared with a control group not exposed to the earthquake. We also examined the role of class level and gender. Method The sample included two groups of children (n = 127) attending primary school: The experimental group (n = 65) experienced the 2012 Emilia Romagna earthquake, while the control group (n = 62) did not. The data collection took place two years after the earthquake, when children were seven or ten-year-olds. Beyond assessing the children’s understanding of emotions and regulating abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of earthquakes and associated emotions, and a structured task on the intensity of some target emotions. Results We applied Generalized Linear Mixed Models. Exposure to the earthquake did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of earthquakes, emotional language, and emotions associated with earthquakes were, respectively, more complex, frequent, and intense for children who had experienced the earthquake, and at increasing ages. Conclusions Our data extend the generalizability of theoretical models on children’s psychological functioning following disasters, such as the dose-response model and the organizational-developmental model for child resilience, and
NASA Astrophysics Data System (ADS)
Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.
2017-12-01
During the last five seismic cycles on Gofar transform fault on the East Pacific Rise, the largest earthquakes (6.0 ≤ Mw ≤ 6.2) have repeatedly ruptured the same fault segment (rupture asperity), while intervening fault segments host swarms of microearthquakes. Previous studies on Gofar have shown that these segments of low (≤10%) seismic coupling contain diffuse zones of seismicity and P-wave velocity reduction compared with the rupture asperity; suggesting heterogeneous fault properties control earthquake behavior. We investigate the role systematic differences in material properties have on earthquake rupture along Gofar using waveforms from ocean bottom seismometers that recorded the end of the 2008 Mw 6.0 seismic cycle.We determine stress drop for 117 earthquakes (2.4 ≤ Mw ≤ 4.2) that occurred in and between rupture asperities from corner frequency derived using an empirical Green's function spectral ratio method and seismic moment obtained by fitting the omega-square source model to the low frequency amplitude of earthquake spectra. We find stress drops from 0.03 to 2.7 MPa with significant spatial variation, including 2 times higher average stress drop in the rupture asperity compared to fault segments with low seismic coupling. We interpret an inverse correlation between stress drop and P-wave velocity reduction as the effect of damage on earthquake rupture. Earthquakes with higher stress drops occur in more intact crust of the rupture asperity, while earthquakes with lower stress drops occur in regions of low seismic coupling and reflect lower strength, highly fractured fault zone material. We also observe a temporal control on stress drop consistent with log-time healing following the Mw 6.0 mainshock, suggesting a decrease in stress drop as a result of fault zone damage caused by the large earthquake.
The Role of Deep Creep in the Timing of Large Earthquakes
NASA Astrophysics Data System (ADS)
Sammis, C. G.; Smith, S. W.
2012-12-01
The observed temporal clustering of the world's largest earthquakes has been largely discounted for two reasons: a) it is consistent with Poisson clustering, and b) no physical mechanism leading to such clustering has been proposed. This lack of a mechanism arises primarily because the static stress transfer mechanism, commonly used to explain aftershocks and the clustering of large events on localized fault networks, does not work at global distances. However, there is recent observational evidence that the surface waves from large earthquakes trigger non-volcanic tremor at the base of distant fault zones at global distances. Based on these observations, we develop a simple non-linear coupled oscillator model that shows how the triggering of such tremor can lead to the synchronization of large earthquakes on a global scale. A basic assumption of the model is that induced tremor is a proxy for deep creep that advances the seismic cycle of the fault. We support this hypothesis by demonstrating that the 2010 Maule Chile and the 2011 Fukushima Japan earthquakes, which have been shown to induce tremor on the Parkfield segment of the San Andreas Fault, also produce changes in off-fault seismicity that are spatially and temporally consistent with episodes of deep creep on the fault. The observed spatial pattern can be simulated using an Okada dislocation model for deep creep (below 20 km) on the fault plane in which the slip rate decreases from North to South consistent with surface creep measurements and deepens south of the "Parkfield asperity" as indicated by recent tremor locations. The model predicts the off-fault events should have reverse mechanism consistent with observed topography.
The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?
Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G
1986-07-25
A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.
Extending earthquakes' reach through cascading.
Marsan, David; Lengliné, Olivier
2008-02-22
Earthquakes, whatever their size, can trigger other earthquakes. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-lasting difficulty is to determine which earthquakes are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori model nor parameterization. Large regional earthquakes are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small earthquakes collectively have a greater effect on triggering. Hence, cascade triggering is a key component in earthquake interactions.
Lack of Dependence of Dynamic Triggering on the Timing within the Seismic Cycle
NASA Astrophysics Data System (ADS)
Cattania, C.; McGuire, J. J.; Collins, J. A.
2009-12-01
Numerical models predict that dynamic triggering of earthquakes is more likely when faults are close to failure (e.g. late in their earthquake cycle), and laboratory experiments have supported this hypothesis. We attempted to test this idea by analysing data on three adjacent transform faults of the East Pacific Rise which have a relatively well defined, quasiperiodic seismic cycle with a median repeat time of 5 years. Moreover, the Gofar, Discovery and Quebrada transform faults share several seismicity properties with continental geothermal areas, including high geothermal gradients, high seismicity rates, and frequent earthquake swarms, that suggest they may be prone to dynamic triggering. We analyze an earthquake catalog of over 100,000 events recorded in 2008 by a network of 38 Ocean Bottom Seismometers. We extract Mw>6.3 mainshocks from the Global CMT catalog, and perform the β test for an array of time intervals covering from 5 hours before to 10 hours after the low-frequency Rayleigh wave arrival. To verify the presence of common seismicity patterns, β plots are also stacked for multiple earthquakes. We observe triggering after the May 12th Wenchuan earthquake. On the Quebrada transform a burst of seismicity starts during the wavetrain; in Gofar there is no response during the wave, but an increase in seismicity (β=5.08) starts about 2 h later; no triggering is visible on the Discovery fault. A Mw=6.0 earthquake ruptured the Gofar transform on September 18th, and triggered seismicity on Discovery: ~60 earthquakes (β=15.3), starting 1h after the wave arrival. We have no data from Quebrada for this period. Other instances of triggering are dubious. Stacked β plots suggest delayed triggering (Δt>1h) in Gofar and Discovery, but the statistical significance of these results is unclear. From a comparison of different fault segments, triggering does not appear to be more common at late stages in the seismic cycle. Instead, the events triggered by the largest
Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions
NASA Astrophysics Data System (ADS)
Kim, A.; Dreger, D.; Larsen, S.
2008-12-01
We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects the finite fault inverse solutions. Various studies (e.g. Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. Also the fault zone at Parkfield is wide as evidenced by mapped surface faults and where surface slip and creep occurred in the 1966 and the 2004 Parkfield earthquakes. For high resolution images of the rupture process"Ait is necessary to include the accurate 3D velocity structure for the finite source inversion. Liu and Aurchuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for 1989 Loma Prieta earthquake using the same source paramerization and data but different Green's functions and found that the models were quite different. This indicates that the choice of the velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using by the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to the slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green"fs function. In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
NASA Astrophysics Data System (ADS)
Castaldo, Raffaele; Tizzani, Pietro
2016-04-01
Many numerical models have been developed to simulate the deformation and stress changes associated to the faulting process. This aspect is an important topic in fracture mechanism. In the proposed study, we investigate the impact of the deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated to different earthquake phenomena. We exploit the impact of the structural-geological data in Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical mechanical scenario to evaluate the kinematics associated to the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquake (Italy) and the Mw 8.3 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenario: the normal, the reverse and thrust faulting processes, respectively. In order to simulate the kinematic of the analyzed natural phenomena, we assume, under the plane stress approximation (is defined to be a state of stress in which the normal stress, sz, and the shear stress sxz and syz, directed perpendicular to x-y plane are assumed to be zero), the linear elastic behavior of the involved media. The performed finite element procedure consist of through two stages: (i) compacting under the weight of the rock successions (gravity loading), the deformation model reaches a stable equilibrium; (ii) the co-seismic stage simulates, through a distributed slip along the active fault, the released stresses. To constrain the models solution, we exploit the DInSAR deformation velocity maps retrieved by satellite data acquired by old and new generation sensors, as ENVISAT, RADARSAT-2 and SENTINEL 1A, encompassing the studied earthquakes. More specifically, we first generate 2D several forward mechanical models, then, we compare these with the recorded ground deformation fields, in order to select the best boundaries setting and parameters. Finally
Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model
NASA Astrophysics Data System (ADS)
Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza
2017-08-01
Producing accurate seismic hazard map and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of Zagros Orogenic Belt. In addition to the interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data, where the number of data is limited. In this paper, earthquake pattern in the Zagros has been assessed for the intervals of 10 and 50 years using fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on the conventional maps. Regarding our achievements, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
The Mw 7.7 Bhuj earthquake: Global lessons for earthquake hazard in intra-plate regions
Schweig, E.; Gomberg, J.; Petersen, M.; Ellis, M.; Bodin, P.; Mayrose, L.; Rastogi, B.K.
2003-01-01
The Mw 7.7 Bhuj earthquake occurred in the Kachchh District of the State of Gujarat, India on 26 January 2001, and was one of the most damaging intraplate earthquakes ever recorded. This earthquake is in many ways similar to the three great New Madrid earthquakes that occurred in the central United States in 1811-1812, An Indo-US team is studying the similarities and differences of these sequences in order to learn lessons for earthquake hazard in intraplate regions. Herein we present some preliminary conclusions from that study. Both the Kutch and New Madrid regions have rift type geotectonic setting. In both regions the strain rates are of the order of 10-9/yr and attenuation of seismic waves as inferred from observations of intensity and liquefaction are low. These strain rates predict recurrence intervals for Bhuj or New Madrid sized earthquakes of several thousand years or more. In contrast, intervals estimated from paleoseismic studies and from other independent data are significantly shorter, probably hundreds of years. All these observations together may suggest that earthquakes relax high ambient stresses that are locally concentrated by rheologic heterogeneities, rather than loading by plate-tectonic forces. The latter model generally underlies basic assumptions made in earthquake hazard assessment, that the long-term average rate of energy released by earthquakes is determined by the tectonic loading rate, which thus implies an inherent average periodicity of earthquake occurrence. Interpreting the observations in terms of the former model therefore may require re-examining the basic assumptions of hazard assessment.
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquakerelated computations, such as ground motion simulations and static stress change calculations.
Stein, R.S.; King, G.C.P.; Rundle, J.B.
1988-01-01
A strong test of our understanding of the earthquake cycle is the ability to reproduce extant faultbounded geological structures, such as basins and ranges, which are built by repeated cycles of deformation. Three examples are considered for which the structure and fault geometry are well known: the White Wolf reverse fault in California, site of the 1952 Kern County M=7.3 earthquake, the Lost River normal fault in Idaho, site of the 1983 Borah Peak M=7.0 earthquake, and the Cricket Mountain normal fault in Utah, site of Quaternary slip events. Basin stratigraphy and seismic reflection records are used to profile the structure, and coseismic deformation measured by leveling surveys is used to estimate the fault geometry. To reproduce these structures, we add the deformation associated with the earthquake cycle (the coseismic slip and postseismic relaxation) to the flexure caused by the observed sediment load, treating the crust as a thin elastic plate overlying a fluid substrate. -from Authors
NASA Astrophysics Data System (ADS)
Wong, N. Z.; Feng, L.; Hill, E.
2017-12-01
The Sumatran plate boundary has experienced five Mw > 8 great earthquakes, a handful of Mw 7-8 earthquakes and numerous small to moderate events since the 2004 Mw 9.2 Sumatra-Andaman earthquake. The geodetic studies of these moderate earthquakes have mostly been passed over in favour of larger events. We therefore in this study present a catalog of coseismic uniform-slip models of one Mw 7.2 earthquake and 17 Mw 5.9-6.9 events that have mostly gone geodetically unstudied. These events occurred close to various continuous stations within the Sumatran GPS Array (SuGAr), allowing the network to record their surface deformation. However, due to their relatively small magnitudes, most of these moderate earthquakes were recorded by only 1-4 GPS stations. With the limited observations per event, we first constrain most of the model parameters (e.g. location, slip, patch size, strike, dip, rake) using various external sources (e.g., the ANSS catalog, gCMT, Slab1.0, and empirical relationships). We then use grid-search forward models to explore a range of some of these parameters (geographic position for all events and additionally depth for some events). Our results indicate the gCMT centroid locations in the Sumatran subduction zone might be biased towards the west for smaller events, while ANSS epicentres might be biased towards the east. The more accurate locations of these events are potentially useful in understanding the nature of various structures along the megathrust, particularly the persistent rupture barriers.
NASA Astrophysics Data System (ADS)
Pondard, Nicolas; Armijo, Rolando; King, Geoffrey C. P.; Meyer, Bertrand; Ucarkus, Gulsen
2010-05-01
Seismotectonic methods allowing quantitative measures of the frequency and severity of earthquakes have greatly advanced over the last 30 years, aided by high-resolution imagery, digital topography and modern techniques for dating. During the same period, deterministic models based on the physics of earthquakes (Coulomb stress interactions) have been extensively developed to explain the distribution of earthquakes in space and time. Seismotectonic data and Coulomb Stress models provide valuable information on seismic hazard and could assist the public policy, disaster risk management and financial risk transfer communities to make more informed decisions around their strategic planning and risk management activities. The Sea of Marmara and Istanbul regions (North Anatolian Fault, NAF) are among the most appropriate on Earth to analyse seismic hazard, because reliable data covers almost completely two seismic cycles (the past ~500 years). Earthquake ruptures associated with historical events have been found in the direct vicinity of the city, on the Marmara sea floor. The MARMARASCARPS cruise using an unmanned submersible (ROV) provides direct observations to study the morphology and geology of those ruptures, their distribution and geometry. These observations are crucial to quantify the magnitude of past earthquakes along the submarine fault system (e.g. 1894, 1912, 1999, M > 7). In particular, the identification of a break continuous over 60 km with a right-lateral slip of 5 m, corresponding probably to the offshore extension of the Ganos earthquake rupture (1912, Ms 7.4), modifies substantially our understanding of the current state of loading along the NAF next to Istanbul. Coulomb stress analysis is used to characterise loading evolution in well-identified fault segments, including secular loading from below and lateral loading imposed by the occurrence of previous earthquakes. The 20th century earthquake sequence in the region of Istanbul is modelled using
NASA Astrophysics Data System (ADS)
Norbeck, J. H.; Rubinstein, J. L.
2017-12-01
The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.
NASA Astrophysics Data System (ADS)
Bichisao, Marta; Stallone, Angela
2017-04-01
Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.
Spatial Evaluation and Verification of Earthquake Simulators
NASA Astrophysics Data System (ADS)
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we approach the challenges in spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element and a smoothing method based on the power laws of the epidemic-type aftershock (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a decaying rate with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment
NASA Technical Reports Server (NTRS)
Rebbapragada, Umaa; Oommen, Thomas
2011-01-01
On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
Issues on the Japanese Earthquake Hazard Evaluation
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Fukushima, Y.; Sagiya, T.
2013-12-01
The 2011 Great East Japan Earthquake forced the policy of counter-measurements to earthquake disasters, including earthquake hazard evaluations, to be changed in Japan. Before the March 11, Japanese earthquake hazard evaluation was based on the history of earthquakes that repeatedly occurs and the characteristic earthquake model. The source region of an earthquake was identified and its occurrence history was revealed. Then the conditional probability was estimated using the renewal model. However, the Japanese authorities changed the policy after the megathrust earthquake in 2011 such that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. According to this policy, three important reports were issued during these two years. First, the Central Disaster Management Council issued a new estimate of damages by a hypothetical Mw9 earthquake along the Nankai trough during 2011 and 2012. The model predicts a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most area of Southwest Japan as the maximum. Next, the Earthquake Research Council revised the long-term earthquake hazard evaluation of earthquakes along the Nankai trough in May 2013, which discarded the characteristic earthquake model and put much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, with the current knowledge, it is hard to predict the occurrence of large earthquakes along the Nankai trough using the present techniques, based on the diversity of earthquake phenomena. These reports created sensations throughout the country and local governments are struggling to prepare counter-measurements. These reports commented on large uncertainty in their evaluation near their ends, but are these messages transmitted properly to the public? Earthquake scientists, including authors, are involved in
Long-term Postseismic Deformation Following the 1964 Alaska Earthquake
NASA Astrophysics Data System (ADS)
Freymueller, J. T.; Cohen, S. C.; Hreinsdöttir, S.; Suito, H.
2003-12-01
Geodetic data provide a rich data set describing the postseismic deformation that followed the 1964 Alaska earthquake (Mw 9.2). This is particularly true for vertical deformation, since tide gauges and leveling surveys provide extensive spatial coverage. Leveling was carried out over all of the major roads of Alaska in 1964-65, and over the last several years we have resurveyed an extensive data set using GPS. Along Turnagain Arm of Cook Inlet, south of Anchorage, a trench-normal profile was surveyed repeatedly over the first decade after the earthquake, and many of these sites have been surveyed with GPS. After using a geoid model to correct for the difference between geometric and orthometric heights, the leveling+GPS surveys reveal up to 1.25 meters of uplift since 1964. The largest uplifts are concentrated in the northern part of the Kenai Peninsula, SW of Turnagain Arm. In some places, steep gradients in the cumulative uplift measurements point to a very shallow source for the deformation. The average 1964-late 1990s uplift rates were substantially higher than the present-day uplift rates, which rarely exceed 10 mm/yr. Both leveling and tide gauge data document a decay in uplift rate over time as the postseismic signal decreases. However, even today the postseismic deformation represents a substantial portion of the total observe deformation signal, illustrating that very long-lived postseismic deformation is an important element of the subduction zone earthquake cycle for the very largest earthquakes. This is in contrast to much smaller events, such as M~8 earthquakes, for which postseismic deformation in many cases decays within a few years. This suggests that the very largest earthquakes may excite different processes than smaller events.
The 1906 earthquake and a century of progress in understanding earthquakes and their hazards
Zoback, M.L.
2006-01-01
The 18 April 1906 San Francisco earthquake killed nearly 3000 people and left 225,000 residents homeless. Three days after the earthquake, an eight-person Earthquake Investigation Commission composed of 25 geologists, seismologists, geodesists, biologists and engineers, as well as some 300 others started work under the supervision of Andrew Lawson to collect and document physical phenomena related to the quake . On 31 May 1906, the commission published a preliminary 17-page report titled "The Report of the State Earthquake Investigation Commission". The report included the bulk of the geological and morphological descriptions of the faulting, detailed reports on shaking intensity, as well as an impressive atlas of 40 oversized maps and folios. Nearly 100 years after its publication, the Commission Report remains a model for post-earthquake investigations. Because the diverse data sets were so complete and carefully documented, researchers continue to apply modern analysis techniques to learn from the 1906 earthquake. While the earthquake marked a seminal event in the history of California, it served as impetus for the birth of modern earthquake science in the United States.
Bakun, W.H.
2005-01-01
Japan Meteorological Agency (JMA) intensity assignments IJMA are used to derive intensity attenuation models suitable for estimating the location and an intensity magnitude Mjma for historical earthquakes in Japan. The intensity for shallow crustal earthquakes on Honshu is equal to -1.89 + 1.42MJMA - 0.00887?? h - 1.66log??h, where MJMA is the JMA magnitude, ??h = (??2 + h2)1/2, and ?? and h are epicentral distance and focal depth (km), respectively. Four earthquakes located near the Japan Trench were used to develop a subducting plate intensity attenuation model where intensity is equal to -8.33 + 2.19MJMA -0.00550??h - 1.14 log ?? h. The IJMA assignments for the MJMA7.9 great 1923 Kanto earthquake on the Philippine Sea-Eurasian plate interface are consistent with the subducting plate model; Using the subducting plate model and 226 IJMA IV-VI assignments, the location of the intensity center is 25 km north of the epicenter, Mjma is 7.7, and MJMA is 7.3-8.0 at the 1?? confidence level. Intensity assignments and reported aftershock activity for the enigmatic 11 November 1855 Ansei Edo earthquake are consistent with an MJMA 7.2 Philippine Sea-Eurasian interplate source or Philippine Sea intraslab source at about 30 km depth. If the 1855 earthquake was a Philippine Sea-Eurasian interplate event, the intensity center was adjacent to and downdip of the rupture area of the great 1923 Kanto earthquake, suggesting that the 1855 and 1923 events ruptured adjoining sections of the Philippine Sea-Eurasian plate interface.
Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach
Jaiswal, Kishor; Wald, David J.; Hearne, Mike
2009-01-01
We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov () and McGarr () linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. (). In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
Lacustrine Paleoseismology Reveals Earthquake Segmentation of the Alpine Fault, New Zealand
NASA Astrophysics Data System (ADS)
Howarth, J. D.; Fitzsimons, S.; Norris, R.; Langridge, R. M.
2013-12-01
Transform plate boundary faults accommodate high rates of strain and are capable of producing large (Mw>7.0) to great (Mw>8.0) earthquakes that pose significant seismic hazard. The Alpine Fault in New Zealand is one of the longest, straightest and fastest slipping plate boundary transform faults on Earth and produces earthquakes at quasi-periodic intervals. Theoretically, the fault's linearity, isolation from other faults and quasi-periodicity should promote the generation of earthquakes that have similar magnitudes over multiple seismic cycles. We test the hypothesis that the Alpine Fault produces quasi-regular earthquakes that contiguously rupture the southern and central fault segments, using a novel lacustrine paleoseismic proxy to reconstruct spatial and temporal patterns of fault rupture over the last 2000 years. In three lakes located close to the Alpine Fault the last nine earthquakes are recorded as megaturbidites formed by co-seismic subaqueous slope failures, which occur when shaking exceeds Modified Mercalli (MM) VII. When the fault ruptures adjacent to a lake the co-seismic megaturbidites are overlain by stacks of turbidites produced by enhanced fluvial sediment fluxes from earthquake-induced landslides. The turbidite stacks record shaking intensities of MM>IX in the lake catchments and can be used to map the spatial location of fault rupture. The lake records can be dated precisely, facilitating meaningful along strike correlations, and the continuous records allow earthquakes closely spaced in time on adjacent fault segments to be distinguished. The results show that while multi-segment ruptures of the Alpine Fault occurred during most seismic cycles, sequential earthquakes on adjacent segments and single segment ruptures have also occurred. The complexity of the fault rupture pattern suggests that the subtle variations in fault geometry, sense of motion and slip rate that have been used to distinguish the central and southern segments of the Alpine
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use pure empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, such models cannot allow for physical statements of the described seismicity. In contrary such empirical stochastic models, physics based earthquake fault systems models allow for a physical reasoning and interpretation of the produced seismicity and system dynamics. Recently different fault system earthquake simulators based on frictional stick-slip behavior have been used to study effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction with allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of physical parameters of the two approaches.
NASA Astrophysics Data System (ADS)
Kagawa, T.; Petukhin, A.; Koketsu, K.; Miyake, H.; Murotani, S.; Tsurugi, M.
2010-12-01
Three dimensional velocity structure model of southwest Japan is provided to simulate long-period ground motions due to the hypothetical subduction earthquakes. The model is constructed from numerous physical explorations conducted in land and offshore areas and observational study of natural earthquakes. Any available information is involved to explain crustal structure and sedimentary structure. Figure 1 shows an example of cross section with P wave velocities. The model has been revised through numbers of simulations of small to middle earthquakes as to have good agreement with observed arrival times, amplitudes, and also waveforms including surface waves. Figure 2 shows a comparison between Observed (dash line) and simulated (solid line) waveforms. Low velocity layers have added on seismological basement to reproduce observed records. The thickness of the layer has been adjusted through iterative analysis. The final result is found to have good agreement with the results from other physical explorations; e.g. gravity anomaly. We are planning to make long-period (about 2 to 10 sec or longer) simulations of ground motion due to the hypothetical Nankai Earthquake with the 3-D velocity structure model. As the first step, we will simulate the observed ground motions of the latest event occurred in 1946 to check the source model and newly developed velocity structure model. This project is partly supported by Integrated Research Project for Long-Period Ground Motion Hazard Maps by Ministry of Education, Culture, Sports, Science and Technology (MEXT). The ground motion data used in this study were provided by National Research Institute for Earth Science and Disaster Prevention Disaster (NIED). Figure 1 An example of cross section with P wave velocities Figure 2 Observed (dash line) and simulated (solid line) waveforms due to a small earthquake
NASA Astrophysics Data System (ADS)
D'Onza, F.; Viti, M.; Mantovani, E.; Albarello, D.
2003-04-01
EARTHQUAKE TRIGGERING IN THE PERI-ADRIATIC REGIONS INDUCED BY STRESS DIFFUSION: INSIGHTS FROM NUMERICAL MODELLING F. D’Onza (1), M. Viti (1), E. Mantovani (1) and D. Albarello (1) (1) Dept. of Earth Sciences, University of Siena - Italy (donza@unisi.it/Fax:+39-0577-233820) Significant evidence suggests that major earthquakes in the peri-Adriatic Balkan zones may influence the seismicity pattern in the Italian area. In particular, a seismic correlation has been recognized between major earthquakes in the southern Dinaric belt and those in southern Italy. It is widely recognized that such kind of regularities may be an effect of postseismic relaxation triggered by strong earthquakes. In this note, we describe an attempt to quantitatively investigate, by numerical modelling, the reliability of the above interpretation. In particular, we have explored the possibility to explain the last example of the presumed correlation (triggering event: April, 1979 Montenegro earthquake, MS=6.7; induced event: November, 1980 Irpinia event, MS=6.9) as an effect of postseismic relaxation through the Adriatic plate. The triggering event is modelled by imposing a sudden dislocation in the Montenegro seismic fault, taking into account the fault parameters (length and average slip) recognized from seismological observations. The perturbation induced by the seismic source in the neighbouring lithosphere is obtained by the Elsasser diffusion equation for an elastic lithosphere coupled with a viscous asthenosphere. The results obtained by numerical experiments indicate that the strain regime induced by the Montenegro event in southern Italy is compatible with the tensional strain field observed in this last zone, that the amplitude of the induced strain is significantly higher than that induced by Earth tides and that this amplitude is comparable with the strain perturbation recognized as responsible for earthquake triggering. The time delay between the triggering and the induced
Jibson, Randall W.; Jibson, Matthew W.
2003-01-01
Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
Pre-earthquake magnetic pulses
NASA Astrophysics Data System (ADS)
Scoville, J.; Heraud, J.; Freund, F.
2015-08-01
A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.
Bakun, W.H.; Scotti, O.
2006-01-01
Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with ?? most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (??2??) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. ?? 2006 The Authors Journal compilation ?? 2006 RAS.
NASA Astrophysics Data System (ADS)
Croissant, Thomas; Lague, Dimitri; Davy, Philippe; Steer, Philippe
2016-04-01
In active mountain ranges, large earthquakes (Mw > 5-6) trigger numerous landslides that impact river dynamics. These landslides bring local and sudden sediment piles that will be eroded and transported along the river network causing downstream changes in river geometry, transport capacity and erosion efficiency. The progressive removal of landslide materials has implications for downstream hazards management and also for understanding landscape dynamics at the timescale of the seismic cycle. The export time of landslide-derived sediments after large-magnitude earthquakes has been studied from suspended load measurements but a full understanding of the total process, including the coupling between sediment transfer and channel geometry change, still remains an issue. Note that the transport of small sediment pulses has been studied in the context of river restoration, but the magnitude of sediment pulses generated by landslides may make the problem different. Here, we study the export of large volumes (>106 m3) of sediments with the 2D hydro-morphodynamic model, Eros. This model uses a new hydrodynamic module that resolves a reduced form of the Saint-Venant equations with a particle method. It is coupled with a sediment transport and lateral and vertical erosion model. Eros accounts for the complex retroactions between sediment transport and fluvial geometry, with a stochastic description of the floods experienced by the river. Moreover, it is able to reproduce several features deemed necessary to study the evacuation of large sediment pulses, such as river regime modification (single-thread to multi-thread), river avulsion and aggradation, floods and bank erosion. Using a synthetic and simple topography we first present how granulometry, landslide volume and geometry, channel slope and flood frequency influence 1) the dominance of pulse advection vs. diffusion during its evacuation, 2) the pulse export time and 3) the remaining volume of sediment in the catchment
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogenenous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obey self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principle chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of
The Model Life-cycle: Training Module
Model Life-Cycle includes identification of problems & the subsequent development, evaluation, & application of the model. Objectives: define ‘model life-cycle’, explore stages of model life-cycle, & strategies for development, evaluation, & applications.
Purposes and methods of scoring earthquake forecasts
NASA Astrophysics Data System (ADS)
Zhuang, J.
2010-12-01
There are two kinds of purposes in the studies on earthquake prediction or forecasts: one is to give a systematic estimation of earthquake risks in some particular region and period in order to give advice to governments and enterprises for the use of reducing disasters, the other one is to search for reliable precursors that can be used to improve earthquake prediction or forecasts. For the first case, a complete score is necessary, while for the latter case, a partial score, which can be used to evaluate whether the forecasts or predictions have some advantages than a well know model, is necessary. This study reviews different scoring methods for evaluating the performance of earthquake prediction and forecasts. Especially, the gambling scoring method, which is developed recently, shows its capacity in finding good points in an earthquake prediction algorithm or model that are not in a reference model, even if its overall performance is no better than the reference model.
Recovering the slip history of a scenario earthquake in the Mexican subduction zone
NASA Astrophysics Data System (ADS)
Hjorleifsdottir, V.; Perez-Campos, X.; Iglesias, A.; Cruz-Atienza, V.; Ji, C.; Legrand, D.; Husker, A. L.; Kostoglodov, V.; Valdes Gonzalez, C.
2011-12-01
The Guerrero segment of the Mexican subduction zone has not experienced a large earthquake for almost 100 years (Singh et al., 1981). Due to its proximity to Mexico City, which was devastated by an earthquake in the more distant Michoacan segment in 1985, it has been studied extensively in recent years. Silent slip events have been observed by a local GPS network (Kostoglodov et al. 2003) and seismic observations from a dense linear array of broadband seismometers (MASE) have provided detailed images of the crustal structure of this part of the subduction zone (see for example Pérez-Campos et al., 2008, Iglesias et al., 2010). Interestingly the part of the fault zone that is locked during the inter-seismic period is thought to reach up to or inland from the coast line. In the event of a large megathrust earthquake, this geometry could allow recordings from above the fault interface. These types of recordings can be critical to resolve the history of slip as a function of time on the fault plane during the earthquake. A well constrained model of slip-time history, together with other observations as mentioned above, could provide very valuable insights into earthquake physics and the earthquake cycle. In order to prepare the scientific response for such an event we generate a scenario earthquake in the Guerrero segment of the subduction zone. We calculate synthetic strong motion records, seismograms for global stations and static offsets on the Earth's surface. To simulate the real data available we add real noise, recorded during times of no earthquake, to the synthetic data. We use a simulated annealing inversion algorithm (Ji et al., 1999) to invert the different datasets and combinations thereof for the time-history of slip on the fault plane. We present the recovery of the slip model using the different datasets, as well as idealized datasets, investigating the expected and best possible levels of recovery.
So, Emily; Spence, Robin
2013-01-01
Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake on 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high death tolls, casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths with a further 11,000 people with serious or moderate injuries and 100,000 people have been left homeless in this mountainous region of China. In such events relief efforts can be significantly benefitted by the availability of rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect of the very different types of buildings (by climatic zone, urban or rural location, culture, income level etc), on casualties. The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
NASA Astrophysics Data System (ADS)
Grevemeyer, I.; Arroyo, I. G.
2015-12-01
Earthquake source locations are generally routinely constrained using a global 1-D Earth model. However, the source location might be associated with large uncertainties. This is definitively the case for earthquakes occurring at active continental margins were thin oceanic crust subducts below thick continental crust and hence large lateral changes in crustal thickness occur as a function of distance to the deep-sea trench. Here, we conducted a case study of the 2002 Mw 6.4 Osa thrust earthquake in Costa Rica that was followed by an aftershock sequence. Initial relocations indicated that the main shock occurred fairly trenchward of most large earthquakes along the Middle America Trench off central Costa Rica. The earthquake sequence occurred while a temporary network of ocean-bottom-hydrophones and land stations 80 km to the northwest were deployed. By adding readings from permanent Costa Rican stations, we obtain uncommon P wave coverage of a large subduction zone earthquake. We relocated this catalog using a nonlinear probabilistic approach using a 1-D and two 3-D P-wave velocity models. The 3-D model was either derived from 3-D tomography based on onshore stations and a priori model based on seismic refraction data. All epicentres occurred close to the trench axis, but depth estimates vary by several tens of kilometres. Based on the epicentres and constraints from seismic reflection data the main shock occurred 25 km from the trench and probably along the plate interface at 5-10 km depth. The source location that agreed best with the geology was based on the 3-D velocity model derived from a priori data. Aftershocks propagated downdip to the area of a 1999 Mw 6.9 sequence and partially overlapped it. The results indicate that underthrusting of the young and buoyant Cocos Ridge has created conditions for interpolate seismogenesis shallower and closer to the trench axis than elsewhere along the central Costa Rica margin.
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves less model parameters, than the conventional characterized source model, which itself is a simplified expression of actual earthquake source. In the proposed model, the spacio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and it is assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. Finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent is included in the model, which is inversely proportional to the length of the subevent. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered as an
How to build and teach with QuakeCaster: an earthquake demonstration and exploration tool
Linton, Kelsey; Stein, Ross S.
2015-01-01
QuakeCaster is an interactive, hands-on teaching model that simulates earthquakes and their interactions along a plate-boundary fault. QuakeCaster contains the minimum number of physical processes needed to demonstrate most observable earthquake features. A winch to steadily reel in a line simulates the steady plate tectonic motions far from the plate boundaries. A granite slider in frictional contact with a nonskid rock-like surface simulates a fault at a plate boundary. A rubber band connecting the line to the slider simulates the elastic character of the Earth’s crust. By stacking and unstacking sliders and cranking in the winch, one can see the results of changing the shear stress and the clamping stress on a fault. By placing sliders in series with rubber bands between them, one can simulate the interaction of earthquakes along a fault, such as cascading or toggling shocks. By inserting a load scale into the line, one can measure the stress acting on the fault throughout the earthquake cycle. As observed for real earthquakes, QuakeCaster events are not periodic, time-predictable, or slip-predictable. QuakeCaster produces rare but unreliable “foreshocks.” When fault gouge builds up, the friction goes to zero and fault creep is seen without large quakes. QuakeCaster events produce very small amounts of fault gouge that strongly alter its behavior, resulting in smaller, more frequent shocks as the gouge accumulates. QuakeCaster is designed so that students or audience members can operate it and record its output. With a stopwatch and ruler one can measure and plot the timing, slip distance, and force results of simulated earthquakes. People of all ages can use the QuakeCaster model to explore hypotheses about earthquake occurrence. QuakeCaster takes several days and about $500.00 in materials to build.
NASA Astrophysics Data System (ADS)
Iwata, T.
2014-12-01
In the analysis of seismic activity, assessment of earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; the consideration of such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude where the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also have a daily variation [e.g., Iwata, 2013, GJI], and the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as analyzed in Iwata [2013] where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
Implications of fault constitutive properties for earthquake prediction
Dieterich, J.H.; Kilgore, B.
1996-01-01
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time- dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D, apply to faults in nature. However, scaling of D(c) is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
Implications of fault constitutive properties for earthquake prediction.
Dieterich, J H; Kilgore, B
1996-01-01
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. Images Fig. 3 PMID:11607666
Implications of fault constitutive properties for earthquake prediction.
Dieterich, J H; Kilgore, B
1996-04-30
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
NASA Astrophysics Data System (ADS)
McCormack, K. A.; Hesse, M. A.; Stadler, G.
2015-12-01
Remote sensing and geodetic measurements are providing a new wealth of spatially distributed, time-series data that have the ability to improve our understanding of co-seismic rupture and post-seismic processes in subduction zones. We formulate a Bayesian inverse problem to infer the slip distribution on the plate interface using an elastic finite element model and GPS surface deformation measurements. We present an application to the co-seismic displacement during the 2012 earthquake on the Nicoya Peninsula in Costa Rica, which is uniquely positioned close to the Middle America Trench and directly over the seismogenic zone of the plate interface. The results of our inversion are then used as an initial condition in a coupled poroelastic forward model to investigate the role of poroelastic effects on post-seismic deformation and stress transfer. From this study we identify a horseshoe-shaped rupture area with a maximum slip of approximately 2.5 meters surrounding a locked patch that is likely to release stress in the future. We model the co-seismic pore pressure change as well as the pressure evolution and resulting deformation in the months after the earthquake. The results of the forward model indicate that earthquake-induced pore pressure changes dissipate quickly near the surface, resulting in relaxation of the surface in the seven to ten days following the earthquake. Near the subducting slab interface, pore pressure changes are an order of magnitude larger and may persist for many months after the earthquake.
... Search Term(s): Main Content Home Be Informed Earthquakes Earthquakes An earthquake is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...
Strong Ground Motion Analysis and Afterslip Modeling of Earthquakes near Mendocino Triple Junction
NASA Astrophysics Data System (ADS)
Gong, J.; McGuire, J. J.
2017-12-01
The Mendocino Triple Junction (MTJ) is one of the most seismically active regions in North America in response to the ongoing motions between North America, Pacific and Gorda plates. Earthquakes near the MTJ come from multiple types of faults due to the interaction boundaries between the three plates and the strong internal deformation within them. Understanding the stress levels that drive the earthquake rupture on the various types of faults and estimating the locking state of the subduction interface are especially important for earthquake hazard assessment. However due to lack of direct offshore seismic and geodetic records, only a few earthquakes' rupture processes have been well studied and the locking state of the subducted slab is not well constrained. In this study we first use the second moment inversion method to study the rupture process of the January 28, 2015 Mw 5.7 strike slip earthquake on Mendocino transform fault using strong ground motion records from Cascadia Initiative community experiment as well as onshore seismic networks. We estimate the rupture dimension to be of 6 km by 3 km and a stress drop of 7 MPa on the transform fault. Next we investigate the frictional locking state on the subduction interface through afterslip simulation based on coseismic rupture models of this 2015 earthquake and a Mw 6.5 intraplate eathquake inside Gorda plate whose slip distribution is inverted using onshore geodetic network in previous study. Different depths for velocity strengthening frictional properties to start at the downdip of the locked zone are used to simulate afterslip scenarios and predict the corresponding surface deformation (GPS) movements onshore. Our simulations indicate that locking depth on the slab surface is at least 14 km, which confirms that the next M8 earthquake rupture will likely reach the coastline and strong shaking should be expected near the coast.
Export Time of Earthquake-Derived Landslides in Active Mountain Ranges
NASA Astrophysics Data System (ADS)
Croissant, T.; Lague, D.; Steer, P.; Davy, P.
2016-12-01
In active mountain ranges, large earthquakes (Mw > 5-6) trigger numerous landslides that impact river dynamics. These landslides bring local and sudden sediment deposits which are eroded and transported along the river network, causing downstream changes in river geometry, transport capacity and erosion efficiency. The progressive removal of landslide materials has implications for downstream hazards management and for landscape dynamics at the timescale of the seismic cycle. Although the export time of suspended sediments from landslides triggered by large-magnitude earthquakes has been extensively studied, the processes and time scales associated to bedload transport remains poorly studied. Here, we study the sediment export of large landslides with the 2D morphodynamic model, Eros. This model combines: (i) an hydrodynamic model, (ii) a sediment transport and deposition model and (iii) a lateral erosion model. Eros is particularly well suited for this issue as it accounts for the complex retro-actions between sediment transport and fluvial geometry for rivers submitted to external forcings such as abrupt sediment supply increase. Using a simplified synthetic topography we systematically study the influence of pulse volume (Vs) and channel transport capacity (QT) on the export time of landslides. The range of simulated river behavior includes landslide vertical incision, its subsequent removal by lateral erosion and the river morphology modifications induced by downstream sediment propagation. The morphodynamic adaptation of the river increases its transport capacity along the channel and tends to accelerate the landslide evacuation. Our results highlight two regimes: (i) the export time is linearly related to Vs/QT when the sediment pulse introduced in the river does not affect significantly the river hydrodynamic (low Vs/QT) and (ii) the export time is a non-linear function of Vs/QT when the pulse undergoes significant morphodynamic modifications during its
Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco
2018-03-28
Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.
Field Investigations and a Tsunami Modeling for the 1766 Marmara Sea Earthquake, Turkey
NASA Astrophysics Data System (ADS)
Aykurt Vardar, H.; Altinok, Y.; Alpar, B.; Unlu, S.; Yalciner, A. C.
2016-12-01
Turkey is located on one of the world's most hazardous earthquake zones. The northern branch of the North Anatolian fault beneath the Sea of Marmara, where the population is most concentrated, is the most active fault branch at least since late Pliocene. The Sea of Marmara region has been affected by many large tsunamigenic earthquakes; the most destructive ones are 549, 553, 557, 740, 989, 1332, 1343, 1509, 1766, 1894, 1912 and 1999 events. In order to understand and determine the tsunami potential and their possible effects along the coasts of this inland sea, detailed documentary, geophysical and numerical modelling studies are needed on the past earthquakes and their associated tsunamis whose effects are presently unknown.On the northern coast of the Sea of Marmara region, the Kucukcekmece Lagoon has a high potential to trap and preserve tsunami deposits. Within the scope of this study, lithological content, composition and sources of organic matters in the lagoon's bottom sediments were studied along a 4.63 m-long piston core recovered from the SE margin of the lagoon. The sedimentary composition and possible sources of the organic matters along the core were analysed and their results were correlated with the historical events on the basis of dating results. Finally, a tsunami scenario was tested for May 22nd 1766 Marmara Sea Earthquake by using a widely used tsunami simulation model called NAMIDANCE. The results show that the candidate tsunami deposits at the depths of 180-200 cm below the lagoons bottom were related with the 1766 (May) earthquake. This work was supported by the Scientific Research Projects Coordination Unit of Istanbul University (Project 6384) and by the EU project TRANSFER for coring.
Security Implications of Induced Earthquakes
NASA Astrophysics Data System (ADS)
Jha, B.; Rao, A.
2016-12-01
The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing and magnitude of an earthquake by activating a fault via injection and production of fluids into the subsurface. Here, we investigate the relationship between the magnitude and trigger time of an induced earthquake to the well-to-fault distance. The relationship between magnitude and distance is important to determine the farthest striking distance from which one could intentionally activate a fault to cause certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. Evolution of the maximum Coulomb stress on the fault provides insights into the effect of the distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to measure feasibility of weaponzing earthquakes and developing effective defense mechanisms against such attacks.
NASA Astrophysics Data System (ADS)
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in the regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. Source mechanism and triggering stress of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs, and predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes - Mw4.5 in 03/21/2015 and Mw4.1 in 08/18/2015 - have been recorded by both the Incorporated Research Institutions for Seismology (IRIS) and KNSN, and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations close to the oil/gas fields in Kuwait (see the uploaded image). The earthquakes are generally small (Mw < 5) and are shallow with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the location and source mechanism of these local earthquakes, with the uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated based on the source mechanisms results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, these events, occurring in the reservoirs, are very shallow with focal depths less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high
SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records
Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.
2013-01-01
This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
NASA Astrophysics Data System (ADS)
Copley, Alex; Grützner, Christoph; Howell, Andy; Jackson, James; Penney, Camilla; Wimpenny, Sam
2018-03-01
High-resolution elevation models, palaeoseismic trenching, and Quaternary dating demonstrate that the Kenchreai Fault in the eastern Gulf of Corinth (Greece) has ruptured in the Holocene. Along with the adjacent Pisia and Heraion Faults (which ruptured in 1981), our results indicate the presence of closely-spaced and parallel normal faults that are simultaneously active, but at different rates. Such a configuration allows us to address one of the major questions in understanding the earthquake cycle, specifically what controls the distribution of interseismic strain accumulation? Our results imply that the interseismic loading and subsequent earthquakes on these faults are governed by weak shear zones in the underlying ductile crust. In addition, the identification of significant earthquake slip on a fault that does not dominate the late Quaternary geomorphology or vertical coastal motions in the region provides an important lesson in earthquake hazard assessment.
Two critical tests for the Critical Point earthquake
NASA Astrophysics Data System (ADS)
Tzanis, A.; Vallianatos, F.
2003-04-01
It has been credibly argued that the earthquake generation process is a critical phenomenon culminating with a large event that corresponds to some critical point. In this view, a great earthquake represents the end of a cycle on its associated fault network and the beginning of a new one. The dynamic organization of the fault network evolves as the cycle progresses and a great earthquake becomes more probable, thereby rendering possible the prediction of the cycle’s end by monitoring the approach of the fault network toward a critical state. This process may be described by a power-law time-to-failure scaling of the cumulative seismic release rate. Observational evidence has confirmed the power-law scaling in many cases and has empirically determined that the critical exponent in the power law is typically of the order n=0.3. There are also two theoretical predictions for the value of the critical exponent. Ben-Zion and Lyakhovsky (Pure appl. geophys., 159, 2385-2412, 2002) give n=1/3. Rundle et al. (Pure appl. geophys., 157, 2165-2182, 2000) show that the power-law activation associated with a spinodal instability is essentially identical to the power-law acceleration of Benioff strain observed prior to earthquakes; in this case n=0.25. More recently, the CP model has gained support from the development of more dependable models of regional seismicity with realistic fault geometry that show accelerating seismicity before large events. Essentially, these models involve stress transfer to the fault network during the cycle such, that the region of accelerating seismicity will scale with the size of the culminating event, as for instance in Bowman and King (Geophys. Res. Let., 38, 4039-4042, 2001). It is thus possible to understand the observed characteristics of distributed accelerating seismicity in terms of a simple process of increasing tectonic stress in a region already subjected to stress inhomogeneities at all scale lengths. Then, the region of
NASA Astrophysics Data System (ADS)
Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.
2017-12-01
The kinematic earthquake rupture models are useful to derive statistics and scaling properties of the large and great earthquakes. However, the kinematic rupture models for the same earthquake are often different from one another. Such sensitivity of the modeling prevents us to understand the statistics and scaling properties of the earthquakes. Yagi and Fukahata (2011) introduces the uncertainty of Green's function into the tele-seismic waveform inversion, and shows that the stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the unsolved problems in the inversion rises from the modeling error originated from an uncertainty of a fault-model setting. Green's function near the nodal plane of focal mechanism is known to be sensitive to the slight change of the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate should be distorted by the modeling error originated from the uncertainty of the fault model. We propose a new method accounting for the complexity in the fault geometry by additionally solving the focal mechanism on each space knot. Since a solution of finite source inversion gets unstable with an increasing of flexibility of the model, we try to estimate a stable spatiotemporal distribution of focal mechanism in the framework of Yagi and Fukahata (2011). We applied the proposed method to the 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault-curvature along the tectonic fabric. On the other hand, the broad pattern of rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane. These results show that the modeling error caused by simplifying the
Multi-segment earthquakes and tsunami potential of the Aleutian megathrust
Shennan, I.; Bruhn, R.; Plafker, G.
2009-01-01
Large to great earthquakes and related tsunamis generated on the Aleutian megathrust produce major hazards for both the area of rupture and heavily populated coastlines around much of the Pacific Ocean. Here we use paleoseismic records preserved in coastal sediments to investigate whether segment boundaries control the largest ruptures or whether in some seismic cycles segments combine to produce earthquakes greater than any observed since instrumented records began. Virtually the entire megathrust has ruptured since AD1900, with four different segments generating earthquakes >M8.0. The largest was the M9.2 great Alaska earthquake of March 1964 that ruptured ???800 km of the eastern segment of the megathrust. The tsunami generated caused fatalities in Alaska and along the coast as far south as California. East of the 1964 zone of deformation, the Yakutat microplate experienced two >M8.0 earthquakes, separated by a week, in September 1899. For the first time, we present evidence that earthquakes ???900 and ???1500 years ago simultaneously ruptured adjacent segments of the Aleutian megathrust and the Yakutat microplate, with a combined area ???15% greater than 1964, giving an earthquake of greater magnitude and increased tsunamigenic potential. ?? 2008 Elsevier Ltd. All rights reserved.
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
Earth's rotation variations and earthquakes 2010-2011
NASA Astrophysics Data System (ADS)
Ostřihanský, L.
2012-01-01
19 years earlier in difference only one day to 27 December 1985 earthquake, proving that not only sidereal 13.66 days variations but also that the 19 years Metons cycle is the period of the earthquakes occurrence. Histograms show the regular change of earthquake positions on branches of LOD graph and also the shape of histogram and number of earthquakes on LOD branches from the mid-ocean ridge can show which side of the ridge moves quicker.
NASA Astrophysics Data System (ADS)
Daniell, James; Schaefer, Andreas; Wenzel, Friedemann; Khazai, Bijan; Girard, Trevor; Kunz-Plapp, Tina; Kunz, Michael; Muehr, Bernhard
2016-04-01
Over the days following the 2015 Nepal earthquake, rapid loss estimates of deaths and the economic loss and reconstruction cost were undertaken by our research group in conjunction with the World Bank. This modelling relied on historic losses from other Nepal earthquakes as well as detailed socioeconomic data and earthquake loss information via CATDAT. The modelled results were very close to the final death toll and reconstruction cost for the 2015 earthquake of around 9000 deaths and a direct building loss of ca. 3 billion (a). A description of the process undertaken to produce these loss estimates is described and the potential for use in analysing reconstruction costs from future Nepal earthquakes in rapid time post-event. The reconstruction cost and death toll model is then used as the base model for the examination of the effect of spending money on earthquake retrofitting of buildings versus complete reconstruction of buildings. This is undertaken future events using empirical statistics from past events along with further analytical modelling. The effects of investment vs. the time of a future event is also explored. Preliminary low-cost options (b) along the line of other country studies for retrofitting (ca. 100) are examined versus the option of different building typologies in Nepal as well as investment in various sectors of construction. The effect of public vs. private capital expenditure post-earthquake is also explored as part of this analysis, as well as spending on other components outside of earthquakes. a) http://www.scientificamerican.com/article/experts-calculate-new-loss-predictions-for-nepal-quake/ b) http://www.aees.org.au/wp-content/uploads/2015/06/23-Daniell.pdf
Earthquakes, fluid pressures and rapid subduction zone metamorphism
NASA Astrophysics Data System (ADS)
Viete, D. R.
2013-12-01
High-pressure/low-temperature (HP/LT) metamorphism is commonly incomplete, meaning that large tracts of rock can remain metastable at blueschist- and eclogite-facies conditions for timescales up to millions of years [1]. When HP/LT metamorphism does take place, it can occur over extremely short durations (<<1 Myr) [1-2]. HP/LT metamorphism must be associated with processes that allow large volumes of rock to remain unaffected over long periods of time, but then suddenly undergo localized metamorphism. Existing models for HP/LT metamorphism have focussed on the role of fluids in providing heat for metamorphism [2] or catalyzing metamorphic reactions [1]. Earthquakes in subduction zone settings can occur to depths of 100s of km. Metamorphic dehydration and the associated development of elevated pore pressures in HP/LT metamorphic rocks has been identified as a cause of earthquake activity at such great depths [3-4]. The process of fracturing/faulting significantly increases rock permeability, causing channelized fluid flow and dissipation of pore pressures [3-4]. Thus, deep subduction zone earthquakes are thought to reflect an evolution in fluid pressure, involving: (1) an initial increase in pore pressure by heating-related dehydration of subduction zone rocks, and (2) rapid relief of pore pressures by faulting and channelized flow. Models for earthquakes at depth in subduction zones have focussed on the in situ effects of dehydration and then sudden escape of fluids from the rock mass following fracturing [3-4]. On the other hand, existing models for rapid and incomplete metamorphism in subduction zones have focussed only on the effects of heating and/or hydration with the arrival of external fluids [1-2]. Significant changes in pressure over very short timescales should result in rapid mineral growth and/or disequilibrium texture development in response to overstepping of mineral reaction boundaries. The repeated process of dehydration-pore pressure development-earthquake
Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region
NASA Astrophysics Data System (ADS)
Chaudhary, Chhavi; Sharma, Mukat Lal
2017-12-01
Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment as large events contribute to significant fraction of the total deformation and these long return period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue these larger events follow different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of its extreme values or the tail of the distributions in addition to the main distributions. The generalised Pareto distribution family is widely used for modelling the events which are crossing a specified threshold value. The Pareto, Truncated Pareto, and Tapered Pareto are the special cases of the generalised Pareto family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study, the Himalayas whose orogeny lies in generation of large earthquakes and which is one of the most active zones of the world, has been considered. The whole Himalayan region has been divided into five seismic source zones according to seismotectonic and clustering of events. Estimated probabilities of occurrence of earthquakes have also been compared with the modified Gutenberg-Richter distribution and the characteristics recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution better describes seismicity for the seismic source zones in comparison to other distributions considered in the present study.
Ren, Junjie; Zhang, Shimin
2013-01-01
Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
Zhang, Shimin
2013-01-01
Recurrence interval of large earthquake on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquake in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably undertakes an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 1017 N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524
ERIC Educational Resources Information Center
Walter, Edward J.
1977-01-01
Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)
Temporal stress changes caused by earthquakes: A review
Hardebeck, Jeanne L.; Okada, Tomomi
2018-01-01
Earthquakes can change the stress field in the Earth’s lithosphere as they relieve and redistribute stress. Earthquake-induced stress changes have been observed as temporal rotations of the principal stress axes following major earthquakes in a variety of tectonic settings. The stress changes due to the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake were particularly well documented. Earthquake stress rotations can inform our understanding of earthquake physics, most notably addressing the long-standing problem of whether the Earth’s crust at plate boundaries is “strong” or “weak.” Many of the observed stress rotations, including that due to the Tohoku-Oki earthquake, indicate near-complete stress drop in the mainshock. This implies low background differential stress, on the order of earthquake stress drop, supporting the weak crust model. Earthquake stress rotations can also be used to address other important geophysical questions, such as the level of crustal stress heterogeneity and the mechanisms of postseismic stress reloading. The quantitative interpretation of stress rotations is evolving from those based on simple analytical methods to those based on more sophisticated numerical modeling that can capture the spatial-temporal complexity of the earthquake stress changes.
Temporal Stress Changes Caused by Earthquakes: A Review
NASA Astrophysics Data System (ADS)
Hardebeck, Jeanne L.; Okada, Tomomi
2018-02-01
Earthquakes can change the stress field in the Earth's lithosphere as they relieve and redistribute stress. Earthquake-induced stress changes have been observed as temporal rotations of the principal stress axes following major earthquakes in a variety of tectonic settings. The stress changes due to the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake were particularly well documented. Earthquake stress rotations can inform our understanding of earthquake physics, most notably addressing the long-standing problem of whether the Earth's crust at plate boundaries is "strong" or "weak." Many of the observed stress rotations, including that due to the Tohoku-Oki earthquake, indicate near-complete stress drop in the mainshock. This implies low background differential stress, on the order of earthquake stress drop, supporting the weak crust model. Earthquake stress rotations can also be used to address other important geophysical questions, such as the level of crustal stress heterogeneity and the mechanisms of postseismic stress reloading. The quantitative interpretation of stress rotations is evolving from those based on simple analytical methods to those based on more sophisticated numerical modeling that can capture the spatial-temporal complexity of the earthquake stress changes.
Source model for the Mw 6.7, 23 October 2002, Nenana Mountain Earthquake (Alaska) from InSAR
Wright, Tim J.; Lu, Z.; Wicks, Charles
2003-01-01
The 23 October 2002 Nenana Mountain Earthquake (Mw ∼ 6.7) occurred on the Denali Fault (Alaska), to the west of the Mw ∼ 7.9 Denali Earthquake that ruptured the same fault 11 days later. We used 6 interferograms, constructed using radar images from the Canadian Radarsat-1 and European ERS-2 satellites, to determine the coseismic surface deformation and a source model. Data were acquired on ascending and descending satellite passes, with incidence angles between 23 and 45 degrees, and time intervals of 72 days or less. Modeling the event as dislocations in an elastic half space suggests that there was nearly 0.9 m of right-lateral strike-slip motion at depth, on a near-vertical fault, and that the maximum slip in the top 4 km of crust was less than 0.2 m. The Nenana Mountain Earthquake increased the Coulomb stress at the future hypocenter of the 3 November 2002, Denali Earthquake by 30–60 kPa.
Source model for the Mw 6.7, 23 October 2002, Nenana Mountain Earthquake (Alaska) from InSAR
Wright, Tim J.; Lu, Zhong; Wicks, Chuck
2003-01-01
The 23 October 2002 Nenana Mountain Earthquake (Mw ∼ 6.7) occurred on the Denali Fault (Alaska), to the west of the Mw ∼ 7.9 Denali Earthquake that ruptured the same fault 11 days later. We used 6 interferograms, constructed using radar images from the Canadian Radarsat-1 and European ERS-2 satellites, to determine the coseismic surface deformation and a source model. Data were acquired on ascending and descending satellite passes, with incidence angles between 23 and 45 degrees, and time intervals of 72 days or less. Modeling the event as dislocations in an elastic half space suggests that there was nearly 0.9 m of right-lateral strike-slip motion at depth, on a near-vertical fault, and that the maximum slip in the top 4 km of crust was less than 0.2 m. The Nenana Mountain Earthquake increased the Coulomb stress at the future hypocenter of the 3 November 2002, Denali Earthquake by 30–60 kPa.
NASA Astrophysics Data System (ADS)
Miura, S.; Ohta, Y.; Ohzono, M.; Kita, S.; Iinuma, T.; Demachi, T.; Tachibana, K.; Nakayama, T.; Hirahara, S.; Suzuki, S.; Sato, T.; Uchida, N.; Hasegawa, A.; Umino, N.
2011-12-01
We propose a source fault model of the large intraslab earthquake with M7.1 deduced from a dense GPS network. The coseismic displacements obtained by GPS data analysis clearly show the spatial pattern specific to intraslab earthquakes not only in the horizontal components but also the vertical ones. A rectangular fault with uniform slip was estimated by a non-linear inversion approach. The results indicate that the simple rectangular fault model can explain the overall features of the observations. The amount of moment released is equivalent to Mw 7.17. The hypocenter depth of the main shock estimated by the Japan Meteorological Agency is slightly deeper than the neutral plane between down-dip compression (DC) and down-dip extension (DE) stress zones of the double-planed seismic zone. This suggests that the depth of the neutral plane was deepened by the huge slip of the 2011 M9.0 Tohoku earthquake, and the rupture of the thrust M7.1 earthquake was initiated at that depth, although more investigations are required to confirm this idea. The estimated fault plane has an angle of ~60 degrees from the surface of subducting Pacific plate. It is consistent with the hypothesis that intraslab earthquakes are thought to be reactivation of the preexisting hydrated weak zones made in bending process of oceanic plates around outer-rise regions.
Web-Based Real Time Earthquake Forecasting and Personal Risk Management
NASA Astrophysics Data System (ADS)
Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.
2012-12-01
Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ('seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1o. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and
NASA Astrophysics Data System (ADS)
Stein, R. S.
2012-12-01
The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the Century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti, quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by
Physics of Earthquake Rupture Propagation
NASA Astrophysics Data System (ADS)
Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh
2018-05-01
A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.
"ABC's Earthquake" (Experiments and models in seismology)
NASA Astrophysics Data System (ADS)
Almeida, Ana
2017-04-01
Ana Almeida, Portugal Almeida, Ana Escola Básica e Secundária Dr. Vieira de Carvalho Moreira da Maia, Portugal The purpose of this presentation, in poster format, is to disclose an activity which was planned and made by me, in a school on the north of Portugal, using a kit of materials simple and easy to use - the sismo-box. The activity "ABC's Earthquake" was developed under the discipline of Natural Sciences, with students from 7th grade, geosciences teachers and other areas. The possibility of work with the sismo-box was seen as an exciting and promising opportunity to promote science, seismology more specifically, to do science, when using the existing models in the box and with them implement the scientific method, to work and consolidate content and skills in the area of Natural Sciences, to have a time of sharing these materials with classmates, and also with other teachers from the different areas. Throughout the development of the activity, either with students or teachers, it was possible to see the admiration by the models presented in the earthquake-box, as well as, the interest and the enthusiasm in wanting to move and understand what the results after the proposed procedure in the script. With this activity, we managed to promote: - educational success in this subject; a "school culture" with active participation, with quality, rules, discipline and citizenship values; fully integration of students with special educational needs; strengthen the performance of the school as a cultural, informational and formation institution; provide activities to date and innovative; foment knowledge "to be, being and doing" and contribute to a moment of joy and discovery.Learn by doing!
Slip model and Synthetic Broad-band Strong Motions for the 2015 Mw 8.3 Illapel (Chile) Earthquake.
NASA Astrophysics Data System (ADS)
Aguirre, P.; Fortuno, C.; de la Llera, J. C.
2017-12-01
The MW 8.3 earthquake that occurred on September 16th 2015 west of Illapel, Chile, ruptured a 200 km section of the plate boundary between 29º S and 33º S. SAR data acquired by the Sentinel 1A satellite was used to obtain the interferogram of the earthquake, and from it, the component of the displacement field of the surface in the line of sight of the satellite. Based on this interferogram, the corresponding coseismic slip distribution for the earthquake was determined based on different plausible finite fault geometries. The model that best fits the data gathered is one whose rupture surface is consistent with the Slab 1.0 model, with a constant strike angle of 4º and variable dip angle ranging from 2.7º near the trench to 24.3º down dip. Using this geometry the maximum slip obtained is 7.52 m and the corresponding seismic moment is 3.78·1021 equivalent to a moment magnitude Mw 8.3. Calculation of the Coulomb failure stress change induced by this slip distribution evidences a strong correlation between regions where stress is increased as consequence of the earthquake, and the occurrence of the most relevant aftershocks, providing a consistency check for the inversion procedure applied and its results.The finite fault model for the Illapel earthquake is used to test a hybrid methodology for generation of synthetic ground motions that combines a deterministic calculation of the low frequency content, with stochastic modelling of the high frequency signal. Strong ground motions are estimated at the location of seismic stations recording the Illapel earthquake. Such simulations include the effect of local soil conditions, which are modelled empirically based on H/V ratios obtained from a large database of historical seismic records. Comparison of observed and synthetic records based on the 5%-damped response spectra yield satisfactory results for locations where the site response function is more robustly estimated.
ERIC Educational Resources Information Center
Pakiser, Louis C.
One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…
Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.
Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A
2015-01-27
Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
NASA Astrophysics Data System (ADS)
Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said
2010-05-01
The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated ~ 2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively) that provide detailed observations on the heights and extension of past tsunamis and damage in coastal zones. We performed the modelling of wave propagation using NAMI-DANCE code and tested different fault sources from synthetic tide gauges. We observe that the characteristics of seismic sources control the size and directivity of tsunami wave propagation on both northern and southern coasts of the western Mediterranean.
Uniform California earthquake rupture forecast, version 2 (UCERF 2)
Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.
2009-01-01
The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ??? M ???7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ???6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ???7.5 and to 4.5% for M ???8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ???8.0 time-dependent probability is 10%. The M ???6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
Areas prone to slow slip events impede earthquake rupture propagation and promote afterslip.
Rolandone, Frederique; Nocquet, Jean-Mathieu; Mothes, Patricia A; Jarrin, Paul; Vallée, Martin; Cubas, Nadaya; Hernandez, Stephen; Plain, Morgan; Vaca, Sandro; Font, Yvonne
2018-01-01
At subduction zones, transient aseismic slip occurs either as afterslip following a large earthquake or as episodic slow slip events during the interseismic period. Afterslip and slow slip events are usually considered as distinct processes occurring on separate fault areas governed by different frictional properties. Continuous GPS (Global Positioning System) measurements following the 2016 M w (moment magnitude) 7.8 Ecuador earthquake reveal that large and rapid afterslip developed at discrete areas of the megathrust that had previously hosted slow slip events. Regardless of whether they were locked or not before the earthquake, these areas appear to persistently release stress by aseismic slip throughout the earthquake cycle and outline the seismic rupture, an observation potentially leading to a better anticipation of future large earthquakes.
Areas prone to slow slip events impede earthquake rupture propagation and promote afterslip
Rolandone, Frederique; Nocquet, Jean-Mathieu; Mothes, Patricia A.; Jarrin, Paul; Vallée, Martin; Cubas, Nadaya; Hernandez, Stephen; Plain, Morgan; Vaca, Sandro; Font, Yvonne
2018-01-01
At subduction zones, transient aseismic slip occurs either as afterslip following a large earthquake or as episodic slow slip events during the interseismic period. Afterslip and slow slip events are usually considered as distinct processes occurring on separate fault areas governed by different frictional properties. Continuous GPS (Global Positioning System) measurements following the 2016 Mw (moment magnitude) 7.8 Ecuador earthquake reveal that large and rapid afterslip developed at discrete areas of the megathrust that had previously hosted slow slip events. Regardless of whether they were locked or not before the earthquake, these areas appear to persistently release stress by aseismic slip throughout the earthquake cycle and outline the seismic rupture, an observation potentially leading to a better anticipation of future large earthquakes. PMID:29404404
NASA Astrophysics Data System (ADS)
Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.
2012-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments were completed for 92 rounds for 1-day, 6 rounds for 3-month, and 3 rounds for 1-year classes. For 1-day testing class all models passed all the CSEP's evaluation tests at more than 90% rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed a good performance for magnitude forecasting. On the other hand, observation is hardly consistent in space distribution with most models when many earthquakes occurred at a spot. Now we prepare the 3-D forecasting experiment with a depth range of 0 to 100 km in Kanto region. The testing center is improving an evaluation system for 1-day class experiment to finish forecasting and testing results within one day. The special issue of 1st part titled Earthquake Forecast
NASA Astrophysics Data System (ADS)
Brader, Martin; Shennan, Ian; Barlow, Natasha; Davies, Frank; Longley, Chris; Tunstall, Neil
2017-04-01
Recent paleoseismological studies question whether segment boundaries identified for 20th and 21st century great, >M 8, earthquakes persist through multiple earthquake cycles, or whether smaller segments with different boundaries rupture and cause significant hazards. The smaller segments may include some that are currently slipping rather than locked. The 1964 Alaska M 9.2 earthquake was the largest of five earthquakes of >M 7.9 between 1938 and 1965 along the Aleutian chain and coast of southcentral Alaska that helped define models of rupture segments along the Alaska-Aleutian megathrust. The 1964 M 9.2 earthquake ruptured ˜950 km of the megathrust, involving two main asperities focussed on Kodiak Island and Prince William Sound and crossed the Kenai segment, which is currently creeping. Paleoseismic studies of coastal sediments currently provide a long record of previous large earthquakes for the Prince William Sound segment, with widespread evidence of seven great earthquakes in the last 4000 years and more restricted evidence for three earlier ones. Shorter and more fragmentary records from the Kenai Peninsula, Yakataga and Kodiak Archipelago raise the hypothesis of different patterns of surface deformation during past great earthquakes. We present new evidence from coastal wetlands on Shuyak Island, towards the hypothesised north-eastern boundary of the Kodiak segment, to illustrate different detection limits of paleoseismic indicators and how these influence the identification of segment boundaries in late Holocene earthquakes. We compare predictions of co-seismic uplift and subsidence derived from geophysical models of earthquakes with different rupture modes. The spatial patterns of agreement and misfit between model predictions and quantitative reconstructions of co-seismic submergence and emergence suggest that no earthquake within the last 4000 years had the same rupture pattern as the 1964 M 9.2 earthquake.
Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination
2008-09-01
explosions from earthquakes, using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites ...Battone et al., 2002). For example, in Figure 1 we compare an earthquake and an explosion at each of four major test sites (rows), bandpass filtered...explosions as the frequency increases. Note also there are interesting differences between the test sites , indicating that emplacement conditions (depth
A prospective earthquake forecast experiment in the western Pacific
NASA Astrophysics Data System (ADS)
Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan
2012-09-01
Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model
Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,
2013-01-01
In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of
Integrated urban water cycle management: the UrbanCycle model.
Hardy, M J; Kuczera, G; Coombes, P J
2005-01-01
Integrated urban water cycle management presents a new framework in which solutions to the provision of urban water services can be sought. It enables new and innovative solutions currently constrained by the existing urban water paradigm to be implemented. This paper introduces the UrbanCycle model. The model is being developed in response to the growing and changing needs of the water management sector and in light of the need for tools to evaluate integrated watercycle management approaches. The key concepts underpinning the UrbanCycle model are the adoption of continuous simulation, hierarchical network modelling, and the careful management of computational complexity. The paper reports on the integration of modelling capabilities across the allotment, and subdivision scales, enabling the interactions between these scales to be explored. A case study illustrates the impacts of various mitigation measures possible under an integrated water management framework. The temporal distribution of runoff into ephemeral streams from a residential allotment in Western Sydney is evaluated and linked to the geomorphic and ecological regimes in receiving waters.
On to what extent stresses resulting from the earth's surface trigger earthquakes
NASA Astrophysics Data System (ADS)
Klose, C. D.
2009-12-01
The debate on static versus dynamic earthquake triggering mainly concentrates on endogenous crustal forces, including fault-fault interactions or seismic wave transients of remote earthquakes. Incomprehensibly, earthquake triggering due to surface processes, however, still receives little scientific attention. This presentation continues a discussion on the hypothesis of how “tiny” stresses stemming from the earth's surface can trigger major earthquakes, such as for example, China's M7.9 Wenchuan earthquake of May 2008. This seismic event is thought to be triggered by up to 1.1 billion metric tons of water (~130m) that accumulated in the Minjiang River Valley at the eastern margin of the Longmen Shan. Specifically, the water level rose by ~80m (static), with additional seasonal water level changes of ~50m (dynamic). Two and a half years prior to mainshock, static and dynamic Coulomb failure stresses were induced on the nearby Beichuan thrust fault system at <17km depth. Triggering stresses were equivalent to levels of daily tides and perturbed a fault area measuring 416+/-96km^2. The mainshock ruptured after 2.5 years when only the static stressing regime was predominant and the transient stressing (seasonal water level) was infinitesimal small. The short triggering delay of about 2 years suggests that the Beichuan fault might have been near the end of its seismic cycle, which may also confirm what previous geological findings have indicated. This presentation shows on to what extend the static and 1-year periodic triggering stress perturbations a) accounted for equivalent tectonic loading, given a 4-10kyr earthquake cycle and b) altered the background seismicity beneath the valley, i.e., daily event rate and earthquake size distribution.
Designing an Earthquake-Resistant Building
ERIC Educational Resources Information Center
English, Lyn D.; King, Donna T.
2016-01-01
How do cross-bracing, geometry, and base isolation help buildings withstand earthquakes? These important structural design features involve fundamental geometry that elementary school students can readily model and understand. The problem activity, Designing an Earthquake-Resistant Building, was undertaken by several classes of sixth- grade…
Mega-earthquakes rupture flat megathrusts.
Bletery, Quentin; Thomas, Amanda M; Rempel, Alan W; Karlstrom, Leif; Sladen, Anthony; De Barros, Louis
2016-11-25
The 2004 Sumatra-Andaman and 2011 Tohoku-Oki earthquakes highlighted gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution: A fast convergence rate and young buoyant lithosphere are not required to produce mega-earthquakes. We calculated the curvature along the major subduction zones of the world, showing that mega-earthquakes preferentially rupture flat (low-curvature) interfaces. A simplified analytic model demonstrates that heterogeneity in shear strength increases with curvature. Shear strength on flat megathrusts is more homogeneous, and hence more likely to be exceeded simultaneously over large areas, than on highly curved faults. Copyright © 2016, American Association for the Advancement of Science.
On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2013-04-01
The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power-law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since other-than-Weibull interevent times distributions are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test in order to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. keywords: hypothesis testing, modified Weibull, hazard rate, finite size References [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 9210) art. no. 108501. [2] Saichev, A., Sornette, D. 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution Phys. Rev. Lett., 102 (16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] I. Eliazar and J. Klafter, 2006
Examination of Solar Cycle Statistical Model and New Prediction of Solar Cycle 23
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.
2000-01-01
Sunspot numbers in the current solar cycle 23 were estimated by using a statistical model with the accumulating cycle sunspot data based on the odd-even behavior of historical sunspot cycles from 1 to 22. Since cycle 23 has progressed and the accurate solar minimum occurrence has been defined, the statistical model is validated by comparing the previous prediction with the new measured sunspot number; the improved sunspot projection in short range of future time is made accordingly. The current cycle is expected to have a moderate level of activity. Errors of this model are shown to be self-correcting as cycle observations become available.
Relation of landslides triggered by the Kiholo Bay earthquake to modeled ground motion
Harp, Edwin L.; Hartzell, Stephen H.; Jibson, Randall W.; Ramirez-Guzman, L.; Schmitt, Robert G.
2014-01-01
The 2006 Kiholo Bay, Hawaii, earthquake triggered high concentrations of rock falls and slides in the steep canyons of the Kohala Mountains along the north coast of Hawaii. Within these mountains and canyons a complex distribution of landslides was triggered by the earthquake shaking. In parts of the area, landslides were preferentially located on east‐facing slopes, whereas in other parts of the canyons no systematic pattern prevailed with respect to slope aspect or vertical position on the slopes. The geology within the canyons is homogeneous, so we hypothesize that the variable landslide distribution is the result of localized variation in ground shaking; therefore, we used a state‐of‐the‐art, high‐resolution ground‐motion simulation model to see if it could reproduce the landslide‐distribution patterns. We used a 3D finite‐element analysis to model earthquake shaking using a 10 m digital elevation model and slip on a finite‐fault model constructed from teleseismic records of the mainshock. Ground velocity time histories were calculated up to a frequency of 5 Hz. Dynamic shear strain also was calculated and compared with the landslide distribution. Results were mixed for the velocity simulations, with some areas showing correlation of landslide locations with peak modeled ground motions but many other areas showing no such correlation. Results were much improved for the comparison with dynamic shear strain. This suggests that (1) rock falls and slides are possibly triggered by higher frequency ground motions (velocities) than those in our simulations, (2) the ground‐motion velocity model needs more refinement, or (3) dynamic shear strain may be a more fundamental measurement of the decoupling process of slope materials during seismic shaking.
Towards coupled earthquake dynamic rupture and tsunami simulations: The 2011 Tohoku earthquake.
NASA Astrophysics Data System (ADS)
Galvez, Percy; van Dinther, Ylona
2016-04-01
The 2011 Mw9 Tohoku earthquake has been recorded with a vast GPS and seismic network given an unprecedented chance to seismologists to unveil complex rupture processes in a mega-thrust event. The seismic stations surrounding the Miyagi regions (MYGH013) show two clear distinct waveforms separated by 40 seconds suggesting two rupture fronts, possibly due to slip reactivation caused by frictional melting and thermal fluid pressurization effects. We created a 3D dynamic rupture model to reproduce this rupture reactivation pattern using SPECFEM3D (Galvez et al, 2014) based on a slip-weakening friction with sudden two sequential stress drops (Galvez et al, 2015) . Our model starts like a M7-8 earthquake breaking dimly the trench, then after 40 seconds a second rupture emerges close to the trench producing additional slip capable to fully break the trench and transforming the earthquake into a megathrust event. The seismograms agree roughly with seismic records along the coast of Japan. The resulting sea floor displacements are in agreement with 1Hz GPS displacements (GEONET). The simulated sea floor displacement reaches 8-10 meters of uplift close to the trench, which may be the cause of such a devastating tsunami followed by the Tohoku earthquake. To investigate the impact of such a huge uplift, we ran tsunami simulations with the slip reactivation model and plug the sea floor displacements into GeoClaw (Finite element code for tsunami simulations, George and LeVeque, 2006). Our recent results compare well with the water height at the tsunami DART buoys 21401, 21413, 21418 and 21419 and show the potential using fully dynamic rupture results for tsunami studies for earthquake-tsunami scenarios.
Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.
Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M
2017-01-01
The purpose of this study is to explore the grief experiences of mothers after they lost their children in the 2008 China earthquake. Informed by the dual process model, this study conducted in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered from intensive grief. They primarily coped with loss-oriented stressors. As time passed, these mothers began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process, which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight in extending the existing empirical evidence of the dual process model.
Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.
Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M
2017-01-01
The purpose of this study is to explore the grief experiences of mothers after they lost their children in the 2008 China earthquake. Informed by the Dual Process Model, this study conducted in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered from intensive grief. They primarily coped with loss-oriented stressors. As time passed, these mothers began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process, which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight in extending the existing empirical evidence of the Dual Process Model.
A prospective earthquake forecast experiment for Japan
NASA Astrophysics Data System (ADS)
Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi
2013-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. On 1 November in 2009, we started the 1st earthquake forecast testing experiment for the Japan area. We use the unified JMA catalogue compiled by the Japan Meteorological Agency as authorized catalogue. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan, and are evaluated with the CSEP official suite of tests about forecast performance. In this presentation, we show the results of the experiment of the 3-month testing class for 5 rounds. HIST-ETAS7pa, MARFS and RI10K models corresponding to the All Japan, Mainland and Kanto regions showed the best score based on the total log-likelihood. It is also clarified that time dependency of model parameters is no effective factor to pass the CSEP consistency tests for the 3-month testing class in all regions. Especially, spatial distribution in the All Japan region was too difficult to pass consistency test due to multiple events at a bin. Number of target events for a round in the Mainland region tended to be smaller than model's expectation during all rounds, which resulted in rejections of consistency test because of overestimation. In the Kanto region, pass ratios of consistency tests in each model showed more than 80%, which was associated with good balanced forecasting of event
NASA Astrophysics Data System (ADS)
Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria
2014-05-01
. The usable and realistic ground motion maps for urban areas are generated: - either from the assumption of a "reference earthquake" - or directly, showing values of macroseimic intensity generated by a damaging, real earthquake. In the study, applying deterministic approach, earthquake scenario in macroseismic intensity ("model" earthquake scenario) for the city of Sofia is generated. The deterministic "model" intensity scenario based on assumption of a "reference earthquake" is compared with a scenario based on observed macroseimic effects caused by the damaging 2012 earthquake (MW5.6). The difference between observed (Io) and predicted (Ip) intensities values is analyzed.
Turkish Compulsory Earthquake Insurance (TCIP)
NASA Astrophysics Data System (ADS)
Erdik, M.; Durukal, E.; Sesetyan, K.
2009-04-01
Through a World Bank project a government-sponsored Turkish Catastrophic Insurance Pool (TCIP) is created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 Million homeowners TCIP is the largest insurance program in the country with about 0.5 Billion USD in its own reserves and about 2.3 Billion USD in total claims paying capacity. The total payment for earthquake damage since 2000 (mostly small, 226 earthquakes) amounts to about 13 Million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in the south-east Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zones type of structure, is about US90 for a 100 square meter reinforced concrete building in the most hazardous zone with 2% deductible. The earthquake engineering related shortcomings of the TCIP is exemplified by fact that the average rate of 0.13% (for reinforced concrete buildings) with only 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view the risk underwriting (Typification of housing units to be insured, earthquake intensity zonation and the sum insured) of the TCIP needs to be overhauled. Especially for large cities, models can be developed where its expected earthquake performance (and consequently the insurance premium) can be can be assessed on the basis of the location of the unit (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, in the future the TCIP can contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.
NASA Astrophysics Data System (ADS)
Viens, L.; Miyake, H.; Koketsu, K.
2016-12-01
Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, which are operated the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with the existing physics-based simulations to improve seismic hazard assessment.
Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian
2017-01-01
We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long‐lived aftershock sequences, including the Mw">MwMw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long‐lived aftershock sequence can account for these observations.
Post-earthquake dilatancy recovery
NASA Technical Reports Server (NTRS)
Scholz, C. H.
1974-01-01
Geodetic measurements of the 1964 Niigata, Japan earthquake and of three other examples are briefly examined. They show exponentially decaying subsidence for a year after the quakes. The observations confirm the dilatancy-fluid diffusion model of earthquake precursors and clarify the extent and properties of the dilatant zone. An analysis using one-dimensional consolidation theory is included which agrees well with this interpretation.
Magnitude Dependent Seismic Quiescence of 2008 Wenchuan Earthquake
NASA Astrophysics Data System (ADS)
Suyehiro, K.; Sacks, S. I.; Takanami, T.; Smith, D. E.; Rydelek, P. A.
2014-12-01
The change in seismicity leading to the Wenchuan Earthquake in 2008 (Mw 7.9) has been studied by various authors based on statistics and/or pattern recognitions (Huang, 2008; Yan et al., 2009; Chen and Wang, 2010; Yi et al., 2011). We show, in particular, that the magnitude-dependent seismic quiescence is observed for the Wenchuan earthquake and that it adds to other similar observations. Such studies on seismic quiescence prior to major earthquakes include 1982 Urakawa-Oki earthquake (M 7.1) (Taylor et al., 1992), 1994 Hokkaido-Toho-Oki earthquake (Mw=8.2) (Takanami et al., 1996), 2011 Tohoku earthquake (Mw=9.0) (Katsumata, 2011). Smith and Sacks (2013) proposed a magnitude-dependent quiescence based on a physical earthquake model (Rydelek and Sacks, 1995) and demonstrated the quiescence can be reproduced by the introduction of "asperities" (dilantacy hardened zones). Actual observations indicate the change occurs in a broader area than the eventual earthquake fault zone. In order to accept the explanation, we need to verify the model as the model predicts somewhat controversial features of earthquakes such as the magnitude dependent stress drop at lower magnitude range or the dynamically appearing asperities and repeating slips in some parts of the rupture zone. We show supportive observations. We will also need to verify the dilatancy diffusion to be taking place. So far, we only seem to have indirect evidences, which need to be more quantitatively substantiated.
NASA Astrophysics Data System (ADS)
Wang, K.; Fialko, Y. A.
2016-12-01
The 2015 Mw 7.8 Gorkha (Nepal) earthquake occurred along the central Himalayan arc, a convergent boundary between India and Eurasian plates. We use space geodetic data to investigate co- and post-seismic deformation due to the Gorkha earthquake. Because the epicentral area of the earthquake is characterized by strong variations in surface relief and material properties, we developed finite element models that explicitly account for topography and 3-D elastic structure. Compared with slip models obtained using homogenous elastic half-space models, the model including elastic heterogeneity and topography exhibits greater (up to 10%) slip amplitude. GPS observations spanning more than 1 year following the earthquake show overall southward movement and uplift after the Gorkha earthquake, qualitatively similar to the coseismic deformation pattern. Kinematic inversions of GPS data, and forward modeling of stress-driven creep indicate that the observed post-seismic transient is consistent with afterslip on a down-dip extention of the seismic rupture. The Main Himalayan Thrust (MHT) has negligible creep updip of the 2015 rupture, reiterating a future seismic hazard. A poro-elastic rebound may contribute to the observed uplift southward motion, but the predicted surface displacements are small (on the order of 1 cm or less). We also tested a wide range of visco-elastic relaxation models, including 1-D and 3-D variations in the viscosity structure. All tested visco-elastic models predict the opposite signs of horizontal and vertical displacements compared to those observed. Available surface deformation data allow one to rule out a model of a low viscosity channel beneath Tibetan Plateau invoked to explain variations in surface relief at the plateau margins.
Phase response curves for models of earthquake fault dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franović, Igor, E-mail: franovic@ipb.ac.rs; Kostić, Srdjan; Perc, Matjaž
We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how themore » profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.« less
McCrory, Patricia A.; Blair, J. Luke; Oppenheimer, David H.; Walter, Stephen R.
2004-01-01
We present an updated model of the Juan de Fuca slab beneath southern British Columbia, Washington, Oregon, and northern California, and use this model to separate earthquakes occurring above and below the slab surface. The model is based on depth contours previously published by Fluck and others (1997). Our model attempts to rectify a number of shortcomings in the original model and update it with new work. The most significant improvements include (1) a gridded slab surface in geo-referenced (ArcGIS) format, (2) continuation of the slab surface to its full northern and southern edges, (3) extension of the slab surface from 50-km depth down to 110-km beneath the Cascade arc volcanoes, and (4) revision of the slab shape based on new seismic-reflection and seismic-refraction studies. We have used this surface to sort earthquakes and present some general observations and interpretations of seismicity patterns revealed by our analysis. For example, deep earthquakes within the Juan de Fuca Plate beneath western Washington define a linear trend that may mark a tear within the subducting plate Also earthquakes associated with the northern stands of the San Andreas Fault abruptly terminate at the inferred southern boundary of the Juan de Fuca slab. In addition, we provide files of earthquakes above and below the slab surface and a 3-D animation or fly-through showing a shaded-relief map with plate boundaries, the slab surface, and hypocenters for use as a visualization tool.
NASA Astrophysics Data System (ADS)
Kurtz, R.; Klinger, Y.; Ferry, M.; Ritz, J.-F.
2018-06-01
The 1957, MW 8.1, Gobi-Altai earthquake, Southern Mongolia, produced a 360-km-long surface rupture along the Eastern Bogd fault. Cumulative offsets of geomorphic features suggest that the Eastern Bogd fault might produce characteristic slip over the last seismic cycles. Using orthophotographs derived from a dataset of historical aerial photographs acquired in 1958, we measured horizontal offsets along two thirds ( 170 km) of the 1957 left-lateral strike-slip surface rupture. We propose a new empirical methodology to extract the average slip for each past earthquake that could be recognized along successive fault segments, to determine the slip distribution associated with successive past earthquakes. Our results suggest that the horizontal slip distribution of the 1957 Gobi-Altai earthquake is fairly flat, with an average offset of 3.5 m ± 1.3 m. A combination of our lateral measurements with vertical displacements derived from the literature, allows us to re-assess the magnitude of the Gobi-Altai earthquake to be between MW 7.8 and MW 8.2, depending on the depth of the rupture, and related value of the shear modulus. When comparing this magnitude to magnitudes derived from seismic data, it suggests that the rupture may have extended deeper than the 15 km to 20 km usually considered for the seismogenic crust. We observe that some fault segments are more likely than others to record seismic deformation through several seismic cycles, depending on the local rupture complexity and geomorphology. Additionally, our results allow us to model the horizontal slip function for the 1957 Gobi-Altai earthquake and for three previous paleoseismic events along 70% of the studied area. Along about 50% of the fault sections where we could recognize three past earthquakes, our results suggest that the slip per event was similar for each earthquake.
Gallen, Sean F.; Clark, Marin K.; Godt, Jonathan W.; Roback, Kevin; Niemi, Nathan A
2017-01-01
The 25 April 2015 Mw 7.8 Gorkha earthquake produced strong ground motions across an approximately 250 km by 100 km swath in central Nepal. To assist disaster response activities, we modified an existing earthquake-triggered landslide model based on a Newmark sliding block analysis to estimate the extent and intensity of landsliding and landslide dam hazard. Landslide hazard maps were produced using Shuttle Radar Topography Mission (SRTM) digital topography, peak ground acceleration (PGA) information from the U.S. Geological Survey (USGS) ShakeMap program, and assumptions about the regional rock strength based on end-member values from previous studies. The instrumental record of seismicity in Nepal is poor, so PGA estimates were based on empirical Ground Motion Prediction Equations (GMPEs) constrained by teleseismic data and felt reports. We demonstrate a non-linear dependence of modeled landsliding on aggregate rock strength, where the number of landslides decreases exponentially with increasing rock strength. Model estimates are less sensitive to PGA at steep slopes (> 60°) compared to moderate slopes (30–60°). We compare forward model results to an inventory of landslides triggered by the Gorkha earthquake. We show that moderate rock strength inputs over estimate landsliding in regions beyond the main slip patch, which may in part be related to poorly constrained PGA estimates for this event at far distances from the source area. Directly above the main slip patch, however, the moderate strength model accurately estimates the total number of landslides within the resolution of the model (landslides ≥ 0.0162 km2; observed n = 2214, modeled n = 2987), but the pattern of landsliding differs from observations. This discrepancy is likely due to the unaccounted for effects of variable material strength and local topographic amplification of strong ground motion, as well as other simplifying assumptions about source characteristics and their
NASA Astrophysics Data System (ADS)
Szeliga, Walter; Bilham, Roger; Schelling, Daniel; Kakar, Din Mohamed; Lodi, Sarosh
2009-10-01
Surface deformation associated with the 27 August 1931 earthquake near Mach in Baluchistan is quantified from spirit-leveling data and from detailed structural sections of the region interpreted from seismic reflection data constrained by numerous well logs. Mean slip on the west dipping Dezghat/Bannh fault system amounted to 1.2 m on a 42 km × 72 km thrust plane with slip locally attaining 3.2 m up dip of an inferred locking line at ˜9 km depth. Slip also occurred at depths below the interseismic locking line. In contrast, negligible slip occurred in the 4 km near the interseismic locking line. The absence of slip here in the 4 years following the earthquake suggests that elastic energy there must either dissipate slowly in the interseismic cycle, or that a slip deficit remains, pending its release in a large future earthquake. Elastic models of the earthquake cycle in this fold and thrust belt suggest that slip on the frontal thrust fault is reduced by a factor of 2 to 8 compared to that anticipated from convergence of the hinterland, a partitioning process that is presumably responsible for thickening of the fold and thrust belt at the expense of slip on the frontal thrust. Near the latitude of Quetta, GPS measurements indicate that convergence is ˜5 mm/yr. Hence the minimum renewal time between earthquakes with 1.2-m mean displacement should be as little as 240 years. However, when the partitioning of fold belt convergence to frontal thrust slip is taken into account the minimum renewal time may exceed 2000 years.
NASA Astrophysics Data System (ADS)
Yamada, T.; Nakahigashi, K.; Shinohara, M.; Mochizuki, K.; Shiobara, H.
2014-12-01
Huge earthquakes cause vastly stress field change around the rupture zones, and many aftershocks and other related geophysical phenomenon such as geodetic movements have been observed. It is important to figure out the time-spacious distribution during the relaxation process for understanding the giant earthquake cycle. In this study, we pick up the southern rupture area of the 2011 Tohoku earthquake (M9.0). The seismicity rate keeps still high compared with that before the 2011 earthquake. Many studies using ocean bottom seismometers (OBSs) have been doing since soon after the 2011 Tohoku earthquake in order to obtain aftershock activity precisely. Here we show one of the studies at off the coast of Fukushima which is located on the southern part of the rupture area caused by the 2011 Tohoku earthquake. We deployed 4 broadband type OBSs (BBOBSs) and 12 short-period type OBSs (SOBS) in August 2012. Other 4 BBOBSs attached with absolute pressure gauges and 20 SOBSs were added in November 2012. We recovered 36 OBSs including 8 BBOBSs in November 2013. We selected 1,000 events in the vicinity of the OBS network based on a hypocenter catalog published by the Japan Meteorological Agency, and extracted the data after time corrections caused by each internal clock. Each P and S wave arrival times, P wave polarity and maximum amplitude were picked manually on a computer display. We assumed one dimensional velocity structure based on the result from an active source experiment across our network, and applied time corrections every station for removing ambiguity of the assumed structure. Then we adopted a maximum-likelihood estimation technique and calculated the hypocenters. The results show that intensive activity near the Japan Trench can be seen, while there was a quiet seismic zone between the trench zone and landward high activity zone.
NASA Astrophysics Data System (ADS)
Asano, K.
2017-12-01
An MJMA 6.5 earthquake occurred offshore the Kii peninsula, southwest Japan on April 1, 2016. This event was interpreted as a thrust-event on the plate-boundary along the Nankai trough where (Wallace et al., 2016). This event is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) after that event. The significant point of this event regarding to seismic observation is that this event occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquake of this focal type is very rare in this region in the last half century, it is a good opportunity to investigate the source characteristics relating to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai trough. Knowledge obtained from the study of this earthquake would contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai trough. In this study, the source model of the 2016 offshore the Kii peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by strong motion generation area (SMGA) (Miyake et al., 2003), which is defined as a rectangular area with high-stress drop or high slip-velocity. SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time history in broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected for the empirical Green's functions. The source parameters of SMGA are optimized by the waveform modeling in the frequency range 0.4-10 Hz. The best estimate of SMGA size is 19.4 km2, and SMGA of this event does not follow the source scaling
Modified-Fibonacci-Dual-Lucas method for earthquake prediction
NASA Astrophysics Data System (ADS)
Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.
2015-06-01
The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge the other being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles, and the discovery of periodicity. Part of this category is the reported FDL method. The basis of the reported FDL method is the creation of FDL future dates based on the onset date of significant earthquakes. The assumption being that each occurred earthquake discontinuity can be thought of as a generating source of FDL time series The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (<6.5R) can be predicted with very good accuracy window (+-1 day). In this contribution we present an improvement modification to the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquakes dates but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, Moon conjunct or opposite North or South Modes. In order to test improvement of the method we used all +8R earthquakes recorded since 1900, (86 earthquakes from USGS data). We have developed the FDL numbers for each of those seeds, and examined the earthquake hit rates (for a window of 3, i.e. +-1 day of target date) and for <6.5R. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with
The 2016 central Italy earthquake sequence: surface effects, fault model and triggering scenarios
NASA Astrophysics Data System (ADS)
Chatzipetros, Alexandros; Pavlides, Spyros; Papathanassiou, George; Sboras, Sotiris; Valkaniotis, Sotiris; Georgiadis, George
2017-04-01
The results of fieldwork performed during the 2016 earthquake sequence around the karstic basins of Norcia and La Piana di Castelluccio, at an altitude of 1400 m, on the Monte Vettore (altitude 2476 m) and Vettoretto, as well as the three mapped seismogenic faults, striking NNW-SSW, are presented in this paper. Surface co-seismic ruptures were observed in the Vettore and Vettoretto segment of the fault for several kilometres ( 7 km) in the August earthquakes at high altitudes, and were re-activated and expanded northwards during the October earthquakes. Coseismic ruptures and the neotectonic Mt. Vettore fault zone were modelled in detail using images acquired from specifically planned UAV (drone) flights. Ruptures, typically with displacement of up to 20 cm, were observed after the August event both in the scree and weathered mantle (elluvium), as well as the bedrock, consisting mainly of fragmented carbonate rocks with small tectonic surfaces. These fractures expanded and new ones formed during the October events, typically of displacements of up to 50 cm, although locally higher displacements of up to almost 2 m were observed. Hundreds of rock falls and landslides were mapped through satellite imagery, using pre- and post- earthquake Sentinel 2A images. Several of them were also verified in the field. Based on field mapping results and seismological information, the causative faults were modelled. The model consists of five seismogenic sources, each one associated with a strong event in the sequence. The visualisation of the seismogenic sources follows INGV's DISS standards for the Individual Seismogenic Sources (ISS) layer, while strike, dip and rake of the seismic sources are obtained from selected focal mechanisms. Based on this model, the ground deformation pattern was inferred, using Okada's dislocation solution formulae, which shows that the maximum calculated vertical displacement is 0.53 m. This is in good agreement with the statistical analysis of the
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
It is a critical element to predict the intensity and variability of strong ground motions in seismic hazard assessment. The characteristics and variability of earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios could be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against the recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) at the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting the nonparametric co-regionalization algorithm, introduced and applied in geostatistics initially. We also investigated the effect of earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, particularly focusing on the forward directivity region. Finally we will discuss whether the pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs and the efficiency of 1-point and 2-point statistics to address the variability of ground motions.
NASA Astrophysics Data System (ADS)
Michel, Sylvain; Avouac, Jean-Philippe; Lapusta, Nadia; Jiang, Junle
2017-08-01
Megathrust earthquakes tend to be confined to fault areas locked in the interseismic period and often rupture them only partially. For example, during the 2015 M7.8 Gorkha earthquake, Nepal, a slip pulse propagating along strike unzipped the bottom edge of the locked portion of the Main Himalayan Thrust (MHT). The lower edge of the rupture produced dominant high-frequency (>1 Hz) radiation of seismic waves. We show that similar partial ruptures occur spontaneously in a simple dynamic model of earthquake sequences. The fault is governed by standard laboratory-based rate-and-state friction with the aging law and contains one homogenous velocity-weakening (VW) region embedded in a velocity-strengthening (VS) area. Our simulations incorporate inertial wave-mediated effects during seismic ruptures (they are thus fully dynamic) and account for all phases of the seismic cycle in a self-consistent way. Earthquakes nucleate at the edge of the VW area and partial ruptures tend to stay confined within this zone of higher prestress, producing pulse-like ruptures that propagate along strike. The amplitude of the high-frequency sources is enhanced in the zone of higher, heterogeneous stress at the edge of the VW area.
NASA Astrophysics Data System (ADS)
Michel, S. G. R. M.; Avouac, J. P.; Lapusta, N.; Jiang, J.
2017-12-01
Megathrust earthquakes tend to be confined to fault areas locked in the interseismic period and often rupture them only partially. For example, during the 2015 M7.8 Gorkha earthquake, Nepal, a slip pulse propagating along strike unzipped the bottom edge of the locked portion of the Main Himalayan Thrust (MHT). The lower edge of the rupture produced dominant high-frequency (>1 Hz) radiation of seismic waves. We show that similar partial ruptures occur spontaneously in a simple dynamic model of earthquake sequences. The fault is governed by standard laboratory-based rate-and-state friction with the ageing law and contains one homogenous velocity-weakening (VW) region embedded in a velocity-strengthening (VS) area. Our simulations incorporate inertial wave-mediated effects during seismic ruptures (they are thus fully dynamic) and account for all phases of the seismic cycle in a self-consistent way. Earthquakes nucleate at the edge of the VW area and partial ruptures tend to stay confined within this zone of higher prestress, producing pulse-like ruptures that propagate along strike. The amplitude of the high-frequency sources is enhanced in the zone of higher, heterogeneous stress at the edge of the VW area.
Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.
2016-01-01
The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two week long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Both the main shock and the Mw=7.7 aftershock did not rupture to the trench and left most of the seismic gap unbroken, leaving the possibility of a future large earthquake in the region.
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul
2017-12-01
This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.
Harris, R.A.; Arrowsmith, J.R.
2006-01-01
The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.
Earthquake location in island arcs
Engdahl, E.R.; Dewey, J.W.; Fujita, K.
1982-01-01
A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high
NASA Astrophysics Data System (ADS)
Witter, Robert C.; Zhang, Yinglong; Wang, Kelin; Goldfinger, Chris; Priest, George R.; Allan, Jonathan C.
2012-10-01
We test hypothetical tsunami scenarios against a 4,600-year record of sandy deposits in a southern Oregon coastal lake that offer minimum inundation limits for prehistoric Cascadia tsunamis. Tsunami simulations constrain coseismic slip estimates for the southern Cascadia megathrust and contrast with slip deficits implied by earthquake recurrence intervals from turbidite paleoseismology. We model the tsunamigenic seafloor deformation using a three-dimensional elastic dislocation model and test three Cascadia earthquake rupture scenarios: slip partitioned to a splay fault; slip distributed symmetrically on the megathrust; and slip skewed seaward. Numerical tsunami simulations use the hydrodynamic finite element model, SELFE, that solves nonlinear shallow-water wave equations on unstructured grids. Our simulations of the 1700 Cascadia tsunami require >12-13 m of peak slip on the southern Cascadia megathrust offshore southern Oregon. The simulations account for tidal and shoreline variability and must crest the ˜6-m-high lake outlet to satisfy geological evidence of inundation. Accumulating this slip deficit requires ≥360-400 years at the plate convergence rate, exceeding the 330-year span of two earthquake cycles preceding 1700. Predecessors of the 1700 earthquake likely involved >8-9 m of coseismic slip accrued over >260 years. Simple slip budgets constrained by tsunami simulations allow an average of 5.2 m of slip per event for 11 additional earthquakes inferred from the southern Cascadia turbidite record. By comparison, slip deficits inferred from time intervals separating earthquake-triggered turbidites are poor predictors of coseismic slip because they meet geological constraints for only 4 out of 12 (˜33%) Cascadia tsunamis.
Study on Earthquake Emergency Evacuation Drill Trainer Development
NASA Astrophysics Data System (ADS)
ChangJiang, L.
2016-12-01
With the improvement of China's urbanization, to ensure people survive the earthquake needs scientific routine emergency evacuation drills. Drawing on cellular automaton, shortest path algorithm and collision avoidance, we designed a model of earthquake emergency evacuation drill for school scenes. Based on this model, we made simulation software for earthquake emergency evacuation drill. The software is able to perform the simulation of earthquake emergency evacuation drill by building spatial structural model and selecting the information of people's location grounds on actual conditions of constructions. Based on the data of simulation, we can operate drilling in the same building. RFID technology could be used here for drill data collection which read personal information and send it to the evacuation simulation software via WIFI. Then the simulation software would contrast simulative data with the information of actual evacuation process, such as evacuation time, evacuation path, congestion nodes and so on. In the end, it would provide a contrastive analysis report to report assessment result and optimum proposal. We hope the earthquake emergency evacuation drill software and trainer can provide overall process disposal concept for earthquake emergency evacuation drill in assembly occupancies. The trainer can make the earthquake emergency evacuation more orderly, efficient, reasonable and scientific to fulfill the increase in coping capacity of urban hazard.
NASA Astrophysics Data System (ADS)
Wedmore, L. N. J.; Faure Walker, J. P.; Roberts, G. P.; Sammonds, P. R.; McCaffrey, K. J. W.; Cowie, P. A.
2017-07-01
Current studies of fault interaction lack sufficiently long earthquake records and measurements of fault slip rates over multiple seismic cycles to fully investigate the effects of interseismic loading and coseismic stress changes on the surrounding fault network. We model elastic interactions between 97 faults from 30 earthquakes since 1349 A.D. in central Italy to investigate the relative importance of co-seismic stress changes versus interseismic stress accumulation for earthquake occurrence and fault interaction. This region has an exceptionally long, 667 year record of historical earthquakes and detailed constraints on the locations and slip rates of its active normal faults. Of 21 earthquakes since 1654, 20 events occurred on faults where combined coseismic and interseismic loading stresses were positive even though 20% of all faults are in "stress shadows" at any one time. Furthermore, the Coulomb stress on the faults that experience earthquakes is statistically different from a random sequence of earthquakes in the region. We show how coseismic Coulomb stress changes can alter earthquake interevent times by 103 years, and fault length controls the intensity of this effect. Static Coulomb stress changes cause greater interevent perturbations on shorter faults in areas characterized by lower strain (or slip) rates. The exceptional duration and number of earthquakes we model enable us to demonstrate the importance of combining long earthquake records with detailed knowledge of fault geometries, slip rates, and kinematics to understand the impact of stress changes in complex networks of active faults.
Modeling And Economics Of Extreme Subduction Earthquakes: Two Case Studies
NASA Astrophysics Data System (ADS)
Chavez, M.; Cabrera, E.; Emerson, D.; Perea, N.; Moulinec, C.
2008-05-01
The destructive effects of large magnitude, thrust subduction superficial (TSS) earthquakes on Mexico City (MC) and Guadalajara (G) has been shown in the recent centuries. For example, the 7/04/1845 and the 19/09/1985, two TSS earthquakes occurred on the coast of the state of Guerrero and Michoacan, with Ms 7+ and 8.1. The economical losses for the later were of about 7 billion US dollars. Also, the largest Ms 8.2, instrumentally observed TSS earthquake in Mexico, occurred in the Colima-Jalisco region the 3/06/1932, and the 9/10/1995 another similar, Ms 7.4 event occurred in the same region, the later produced economical losses of hundreds of thousands US dollars.The frequency of occurrence of large TSS earthquakes in Mexico is poorly known, but it might vary from decades to centuries [1]. Therefore there is a lack of strong ground motions records for extreme TSS earthquakes in Mexico, which as mentioned above, recently had an important economical impact on MC and potentially could have it in G. In this work we obtained samples of broadband synthetics [2,3] expected in MC and G, associated to extreme (plausible) magnitude Mw 8.5, TSS scenario earthquakes, with epicenters in the so-called Guerrero gap and in the Colima-Jalisco zone, respectively. The economical impacts of the proposed extreme TSS earthquake scenarios for MC and G were considered as follows: For MC by using a risk acceptability criteria, the probabilities of exceedance of the maximum seismic responses of their construction stock under the assumed scenarios, and the estimated economical losses observed for the 19/09/1985 earthquake; and for G, by estimating the expected economical losses, based on the seismic vulnerability assessment of their construction stock under the extreme seismic scenario considered. ----------------------- [1] Nishenko S.P. and Singh SK, BSSA 77, 6, 1987 [2] Cabrera E., Chavez M., Madariaga R., Mai M, Frisenda M., Perea N., AGU, Fall Meeting, 2005 [3] Chavez M., Olsen K
NASA Astrophysics Data System (ADS)
Shennan, Ian; Garrett, Ed; Barlow, Natasha
2016-10-01
Recent paleoseismological studies question whether segment boundaries identified for 20th and 21st century great, >M8, earthquakes persist through multiple earthquake cycles or whether smaller segments with different boundaries rupture and cause significant hazards. The smaller segments may include some currently slipping rather than locked. In this review, we outline general principles regarding indicators of relative sea-level change in tidal wetlands and the conditions in which paleoseismic indicators must be distinct from those resulting from non-seismic processes. We present new evidence from sites across southcentral Alaska to illustrate different detection limits of paleoseismic indicators and consider alternative interpretations for marsh submergence and emergence. We compare predictions of coseismic uplift and subsidence derived from geophysical models of earthquakes with different rupture modes. The spatial patterns of agreement and misfits between model predictions and quantitative reconstructions of coseismic submergence and emergence suggest that no earthquake within the last 4000 years had a pattern of rupture the same as the Mw 9.2 Alaska earthquake in 1964. From the Alaska examples and research from other subduction zones we suggest that If we want to understand whether a megathrust ruptures in segments of variable length in different earthquakes, we need to be site-specific as to what sort of geological-based criteria eliminate the possibility of a particular rupture mode in different earthquakes. We conclude that coastal paleoseismological studies benefit from a methodological framework that employs rigorous evaluation of five essential criteria and a sixth which may be very robust but only occur at some sites: 1 - lateral extent of peat-mud or mud-peat couplets with sharp contacts; 2 - suddenness of submergence or emergence, and replicated within each site; 3 - amount of vertical motion, quantified with 95% error terms and replicated within each
Statistical aspects and risks of human-caused earthquakes
NASA Astrophysics Data System (ADS)
Klose, C. D.
2013-12-01
The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth, environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in canton Basel and St. Gallen. The public and the private non-extractive industry needs access to information about earthquake risks in connection with sub/urban geoengineeing activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (in Klose 2013). Findings are discussed which include the odds to die during a medium-size earthquake that is set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.
NASA Astrophysics Data System (ADS)
Twardzik, C.; Ji, C.
2015-12-01
It has been proposed that the mechanisms for intermediate-depth and deep earthquakes might be different. While previous extensive seismological studies suggested that such potential differences do not significantly affect the scaling relationships of earthquake parameters, there has been only a few investigations regarding their dynamic characteristics, especially for fracture energy. In this work, the 2014 Mw7.9 Rat Islands intermediate-depth (105 km) earthquake and the 2015 Mw7.8 Bonin Islands deep (680 km) earthquake are studied from two different perspectives. First, their kinematic rupture models are constrained using teleseismic body waves. Our analysis reveals that the Rat Islands earthquake breaks the entire cold core of the subducting slab defined as the depth of the 650oC isotherm. The inverted stress drop is 4 MPa, compatible to that of intra-plate earthquakes at shallow depths. On the other hand, the kinematic rupture model of the Bonin Islands earthquake, which occurred in a region lacking of seismicity for the past forty years, according to the GCMT catalog, exhibits an energetic rupture within a 35 km by 30 km slip patch and a high stress drop of 24 MPa. It is of interest to note that although complex rupture patterns are allowed to match the observations, the inverted slip distributions of these two earthquakes are simple enough to be approximated as the summation of a few circular/elliptical slip patches. Thus, we investigate subsequently their dynamic rupture models. We use a simple modelling approach in which we assume that the dynamic rupture propagation obeys a slip-weakening friction law, and we describe the distribution of stress and friction on the fault as a set of elliptical patches. We will constrain the three dynamic parameters that are yield stress, background stress prior to the rupture and slip weakening distance, as well as the shape of the elliptical patches directly from teleseismic body waves observations. The study would help us
NASA Astrophysics Data System (ADS)
Madden, E. H.; Pollard, D. D.
2009-12-01
Multi-fault, strike-slip earthquakes have proved difficult to incorporate into seismic hazard analyses due to the difficulty of determining the probability of these ruptures, despite collection of extensive data associated with such events. Modeling the mechanical behavior of these complex ruptures contributes to a better understanding of their occurrence by elucidating the relationship between surface and subsurface earthquake activity along transform faults. This insight is especially important for hazard mitigation, as multi-fault systems can produce earthquakes larger than those associated with any one fault involved. We present a linear elastic, quasi-static model of the southern portion of the 28 June 1992 Landers earthquake built in the boundary element software program Poly3D. This event did not rupture the extent of any one previously mapped fault, but trended 80km N and NW across segments of five sub-parallel, N-S and NW-SE striking faults. At M7.3, the earthquake was larger than the potential earthquakes associated with the individual faults that ruptured. The model extends from the Johnson Valley Fault, across the Landers-Kickapoo Fault, to the Homestead Valley Fault, using data associated with a six-week time period following the mainshock. It honors the complex surface deformation associated with this earthquake, which was well exposed in the desert environment and mapped extensively in the field and from aerial photos in the days immediately following the earthquake. Thus, the model incorporates the non-linearity and segmentation of the main rupture traces, the irregularity of fault slip distributions, and the associated secondary structures such as strike-slip splays and thrust faults. Interferometric Synthetic Aperture Radar (InSAR) images of the Landers event provided the first satellite images of ground deformation caused by a single seismic event and provide constraints on off-fault surface displacement in this six-week period. Insight is gained
[Modeling of carbon cycling in terrestrial ecosystem: a review].
Mao, Liuxi; Sun, Yanling; Yan, Xiaodong
2006-11-01
Terrestrial carbon cycling is one of the important issues in global change research, while carbon cycling modeling has become a necessary method and tool in understanding this cycling. This paper reviewed the research progress in terrestrial carbon cycling, with the focus on the basic framework of simulation modeling, two essential models of carbon cycling, and the classes of terrestrial carbon cycling modeling, and analyzed the present situation of terrestrial carbon cycling modeling. It was pointed out that the future research direction could be based on the biophysical modeling of dynamic vegetation, and this modeling could be an important component in the earth system modeling.
Reasenberg, Paul A.
1997-01-01
While the damaging effects of the earthquake represent a significant social setback and economic loss, the geophysical effects have produced a wealth of data that have provided important insights into the structure and mechanics of the San Andreas Fault system. Generally, the period after a large earthquake is vitally important to monitor. During this part of the seismic cycle, the primary fault and the surrounding faults, rock bodies, and crustal fluids rapidly readjust in response to the earthquake's sudden movement. Geophysical measurements made at this time can provide unique information about fundamental properties of the fault zone, including its state of stress and the geometry and frictional/rheological properties of the faults within it. Because postseismic readjustments are rapid compared with corresponding changes occurring in the preseismic period, the amount and rate of information that is available during the postseismic period is relatively high. From a geophysical viewpoint, the occurrence of the Loma Prieta earthquake in a section of the San Andreas fault zone that is surrounded by multiple and extensive geophysical monitoring networks has produced nothing less than a scientific bonanza. The reports assembled in this chapter collectively examine available geophysical observations made before and after the earthquake and model the earthquake's principal postseismic effects. The chapter covers four broad categories of postseismic effect: (1) aftershocks; (2) postseismic fault movements; (3) postseismic surface deformation; and (4) changes in electrical conductivity and crustal fluids.
NASA Astrophysics Data System (ADS)
Hayes, G. P.; Herman, M. W.; Barnhart, W. D.; Furlong, K. P.; Riquelme, S.; Benz, H.; Bergman, E.; Barrientos, S. E.; Earle, P. S.; Samsonov, S. V.
2014-12-01
The seismic gap theory, which identifies regions of elevated hazard based on a lack of recent seismicity in comparison to other portions of a fault, has successfully explained past earthquakes and is useful for qualitatively describing where future large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which until recently had not ruptured in a megathrust earthquake since a M~8.8 event in 1877. On April 1 2014, a M 8.2 earthquake occurred within this northern Chile seismic gap, offshore of the city of Iquique; the size and spatial extent of the rupture indicate it was not the earthquake that had been anticipated. Here, we present a rapid assessment of the seismotectonics of the March-April 2014 seismic sequence offshore northern Chile, including analyses of earthquake (fore- and aftershock) relocations, moment tensors, finite fault models, moment deficit calculations, and cumulative Coulomb stress transfer calculations over the duration of the sequence. This ensemble of information allows us to place the current sequence within the context of historic seismicity in the region, and to assess areas of remaining and/or elevated hazard. Our results indicate that while accumulated strain has been released for a portion of the northern Chile seismic gap, significant sections have not ruptured in almost 150 years. These observations suggest that large-to-great sized megathrust earthquakes will occur north and south of the 2014 Iquique sequence sooner than might be expected had the 2014 events ruptured the entire seismic gap.
An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...
NASA Astrophysics Data System (ADS)
Guo, B.
2017-12-01
Mountain watershed in Western China is prone to flash floods. The Wenchuan earthquake on May 12, 2008 led to the destruction of surface, and frequent landslides and debris flow, which further exacerbated the flash flood hazards. Two giant torrent and debris flows occurred due to heavy rainfall after the earthquake, one was on August 13 2010, and the other on August 18 2010. Flash floods reduction and risk assessment are the key issues in post-disaster reconstruction. Hydrological prediction models are important and cost-efficient mitigation tools being widely applied. In this paper, hydrological observations and simulation using remote sensing data and the WMS model are carried out in the typical flood-hit area, Longxihe watershed, Dujiangyan City, Sichuan Province, China. The hydrological response of rainfall runoff is discussed. The results show that: the WMS HEC-1 model can well simulate the runoff process of small watershed in mountainous area. This methodology can be used in other earthquake-affected areas for risk assessment and to predict the magnitude of flash floods. Key Words: Rainfall-runoff modeling. Remote Sensing. Earthquake. WMS.
NASA Technical Reports Server (NTRS)
Rundle, John B.
1988-01-01
The idea that earthquakes represent a fluctuation about the long-term motion of plates is expressed mathematically through the fluctuation hypothesis, under which all physical quantities which pertain to the occurance of earthquakes are required to depend on the difference between the present state of slip on the fault and its long-term average. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state, and that the latter stages are stable for long intervals of time. For long enough faults, the dynamics are evidently chaotic. The methods developed are then used to construct a detailed model for earthquake dynamics in southern California. The result is a set of slip-time histories for all the major faults, which are similar to data obtained by geological trenching studies. Although there is an element of periodicity to the events, the patterns shift, change and evolve with time. Time scales for pattern evolution seem to be of the order of a thousand years for average recurring intervals of about a hundred years.
NASA Astrophysics Data System (ADS)
Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.
2017-12-01
Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, realistic reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, receiver fault and friction coefficient and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models. Based on the wealth of those slip models, we compute the coseismic Coulomb stress changes induced by this mainshock. Our results indicate that nearby Coulomb stress changes for each slip model can be quite different, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nanki, Tonankai and Tokai are insignificant, approximately only 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
NASA Astrophysics Data System (ADS)
Oryan, B.; Buck, W. R.
2017-12-01
The Tohoku-oki earthquake was one of the strongest earthquakes ever recorded. 50-80 meters of lateral motion of the sloping seafloor resulted in a tsunami that exceeded predictions and caused one of the costliest natural disasters in history. It was also the first time extensional aftershocks were observed in the upper plate over a region as wide as 250km. Inspired by these findings, researchers found similar upper plate extensional earthquakes after reexamining seismic data from past earthquakes that had also produced large tsunamis. Such extensional aftershocks are difficult to explain in terms of standard subduction models. Most models assume that the dip of the subducting plate remains constant with time. However, geological evidence indicates that the dip angle of the subducting plate changes. We hypothesize that a reduction in the dip angle of the subducting plate can cause upper plate extensional earthquakes. This change in dip angle adds extensional bending stress to the upper plate. During an inter-seismic period, the interface is `locked' causing regional compression that prevents the release of extensional energy. Relief of compressional stresses during a megathrust event can trigger the release of the accumulated extensional energy, explaining why extensional earthquakes were observed after some megathrust events. Numerical models will be used to test our hypothesis. First, we will model long term subduction with a nearly constant dip angle. Then, we will impose a `mantle wind' to reduce the dip angle of the subducting plate. Eventually, we will model a full seismic cycle of the subduction resulting in a megathrust event. The generation of extensional earthquakes in the upper plate of our model following the megathrust event will allow us to determine whether a causal link exists between these earthquakes and a reduction in the dip angle of the subducting plate.
Fault Weakening due to Erosion by Fluids: A Possible Origin of Intraplate Earthquake Swarms
NASA Astrophysics Data System (ADS)
Vavrycuk, V.; Hrubcova, P.
2016-12-01
The occurrence and specific properties of earthquake swarms in geothermal areas are usually attributed to a highly fractured rock and/or heterogeneous stress within the rock mass being triggered by magmatic or hydrothermal fluid intrusion. The increase of fluid pressure destabilizes fractures and causes their opening and subsequent shear-tensile rupture. The spreading and evolution of the seismic activity is controlled by fluid flow due to diffusion in a permeable rock and/or by the redistribution of Coulomb stress. The `fluid-injection model', however, is not valid universally. We provide evidence that this model is inconsistent with observations of earthquake swarms in West Bohemia, Czech Republic. Full seismic moment tensors of micro-earthquakes in the 1997 and 2008 swarms in West Bohemia indicate that fracturing at the starting phase of the swarm was not associated with fault openings caused by pressurized fluids but rather with fault compactions. This can physically be explained by a `fluid-erosion model', when the essential role in the swarm triggering is attributed to chemical and hydrothermal fluid-rock interactions in the focal zone. Since the rock is exposed to circulating hydrothermal, CO2-saturated fluids, the walls of fractures are weakened by dissolving and altering various minerals. If fault strength lowers to a critical value, the seismicity is triggered. The fractures are compacted during failure, the fault strength recovers and a new cycle begins.
GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.
2015-12-01
The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena started to be considered by GPS community to mitigate the GPS signal degradation over the territories of the earthquake preparation. The question is still open if they could be useful for seismology and for short-term earthquake forecast. More than decade of intensive studies proved that ionospheric anomalies registered before earthquakes are initiated by processes in the boundary layer of atmosphere over earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. Multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecast is possible only if we consider the final stage of earthquake preparation in the multidimensional space where every dimension is one from many precursors in ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland) within the framework of the ISSI and ESA projects) to identify the ionospheric precursors. They are also useful to determine the all three parameters necessary for the earthquake forecast: impending earthquake epicenter position, expectation time and magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. Obtained results from different teams demonstrate the high level of statistical significance and physical justification what gives us reason to suggest these methodologies for practical validation.
About Block Dynamic Model of Earthquake Source.
NASA Astrophysics Data System (ADS)
Gusev, G. A.; Gufeld, I. L.
One may state the absence of a progress in the earthquake prediction papers. The short-term prediction (diurnal period, localisation being also predicted) has practical meaning. Failure is due to the absence of the adequate notions about geological medium, particularly, its block structure and especially in the faults. Geological and geophysical monitoring gives the basis for the notion about geological medium as open block dissipative system with limit energy saturation. The variations of the volume stressed state close to critical states are associated with the interaction of the inhomogeneous ascending stream of light gases (helium and hydrogen) with solid phase, which is more expressed in the faults. In the background state small blocks of the fault medium produce the sliding of great blocks in the faults. But for the considerable variations of ascending gas streams the formation of bound chains of small blocks is possible, so that bound state of great blocks may result (earthquake source). Recently using these notions we proposed a dynamical earthquake source model, based on the generalized chain of non-linear bound oscillators of Fermi-Pasta-Ulam type (FPU). The generalization concerns its in homogeneity and different external actions, imitating physical processes in the real source. Earlier weak inhomogeneous approximation without dissipation was considered. Last has permitted to study the FPU return (return to initial state). Probabilistic properties in quasi periodic movement were found. The chain decay problem due to non-linearity and external perturbations was posed. The thresholds and dependence of life- time of the chain are studied. The great fluctuations of life-times are discovered. In the present paper the rigorous consideration of the inhomogeneous chain including the dissipation is considered. For the strong dissipation case, when the oscillation movements are suppressed, specific effects are discovered. For noise action and constantly arising
How fault geometry controls earthquake magnitude
NASA Astrophysics Data System (ADS)
Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.
2016-12-01
Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismicaly creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly-curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.
NASA Astrophysics Data System (ADS)
Vater, Stefan; Behrens, Jörn
2017-04-01
Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.
NASA Astrophysics Data System (ADS)
Carr, Brett B.; Clarke, Amanda B.; de'Michieli Vitturi, Mattia
2018-01-01
Extrusion rates during lava dome-building eruptions are variable and eruption sequences at these volcanoes generally have multiple phases. Merapi Volcano, Java, Indonesia, exemplifies this common style of activity. Merapi is one of Indonesia's most active volcanoes and during the 20th and early 21st centuries effusive activity has been characterized by long periods of very slow (<0.1 m3 s-1) extrusion rate interrupted every few years by short episodes of elevated extrusion rates (1-4 m3 s-1) lasting weeks to months. One such event occurred in May-July 2006, and previous research has identified multiple phases with different extrusion rates and styles of activity. Using input values established in the literature, we apply a 1D, isothermal, steady-state numerical model of magma ascent in a volcanic conduit to explain the variations and gain insight into corresponding conduit processes. The peak phase of the 2006 eruption occurred in the two weeks following the May 27 Mw 6.4 earthquake 50 km to the south. Previous work has suggested that the peak extrusion rates observed in early June were triggered by the earthquake through either dynamic stress-induced overpressure or the addition of CO2 due to decarbonation and gas escape from new fractures in the bedrock. We use the numerical model to test the feasibility of these proposed hypotheses and show that, in order to explain the observed change in extrusion rate, an increase of approximately 5-7 MPa in magma storage zone overpressure is required. We also find that the addition of ∼1000 ppm CO2 to some portion of the magma in the storage zone following the earthquake reduces water solubility such that gas exsolution is sufficient to generate the required overpressure. Thus, the proposed mechanism of CO2 addition is a viable explanation for the peak phase of the Merapi 2006 eruption. A time-series of extrusion rate shows a sudden increase three days following the earthquake. We explain this three-day delay by the
UCERF3: A new earthquake forecast for California's complex fault system
Field, Edward H.; ,
2015-01-01
With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.
The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake
NASA Astrophysics Data System (ADS)
Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena
2017-04-01
Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low frequency earthquakes, slow slip events, and other models of fault slip. These phenomenas represent models of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. The regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focus on the characteristics of the Mw5.6 earthquake occurred in Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria that faces considerable earthquake risk. The Sofia seismic zone is located in South-western Bulgaria - the area with pronounce tectonic activity and proved crustal movement. In 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) has experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event occurred in the vicinity of the city of Sofia is the 1917 earthquake with MS=5.3 (I0=7-8 MSK64).The 2012 quake occurs in an area characterized by a long quiescence (of 95 years) for moderate events. Moreover, a reduced number of small earthquakes have also been registered in the recent past. The Mw5.6 earthquake is largely felt on the territory of Bulgaria and neighbouring countries. No casualties and severe injuries have been reported. Mostly moderate damages were observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a
NASA Astrophysics Data System (ADS)
McCloskey, John
2008-03-01
The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.
Barberopoulou, A.; Qamar, A.; Pratt, T.L.; Steele, W.P.
2006-01-01
Analysis of strong-motion instrument recordings in Seattle, Washington, resulting from the 2002 Mw 7.9 Denali, Alaska, earthquake reveals that amplification in the 0.2-to 1.0-Hz frequency band is largely governed by the shallow sediments both inside and outside the sedimentary basins beneath the Puget Lowland. Sites above the deep sedimentary strata show additional seismic-wave amplification in the 0.04- to 0.2-Hz frequency range. Surface waves generated by the Mw 7.9 Denali, Alaska, earthquake of 3 November 2002 produced pronounced water waves across Washington state. The largest water waves coincided with the area of largest seismic-wave amplification underlain by the Seattle basin. In the current work, we present reports that show Lakes Union and Washington, both located on the Seattle basin, are susceptible to large water waves generated by large local earthquakes and teleseisms. A simple model of a water body is adopted to explain the generation of waves in water basins. This model provides reasonable estimates for the water-wave amplitudes in swimming pools during the Denali earthquake but appears to underestimate the waves observed in Lake Union.
Pre-earthquake Magnetic Pulses
NASA Astrophysics Data System (ADS)
Scoville, J.; Heraud, J. A.; Freund, F. T.
2015-12-01
A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earth quakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.
Bakun, William H.; Flores, Claudia H.; ten Brink, Uri S.
2012-01-01
Historical records indicate frequent seismic activity along the north-east Caribbean plate boundary over the past 500 years, particularly on the island of Hispaniola. We use accounts of historical earthquakes to assign intensities and the intensity assignments for the 2010 Haiti earthquakes to derive an intensity attenuation relation for Hispaniola. The intensity assignments and the attenuation relation are used in a grid search to find source locations and magnitudes that best fit the intensity assignments. Here we describe a sequence of devastating earthquakes on the Enriquillo fault system in the eighteenth century. An intensity magnitude MI 6.6 earthquake in 1701 occurred near the location of the 2010 Haiti earthquake, and the accounts of the shaking in the 1701 earthquake are similar to those of the 2010 earthquake. A series of large earthquakes migrating from east to west started with the 18 October 1751 MI 7.4–7.5 earthquake, probably located near the eastern end of the fault in the Dominican Republic, followed by the 21 November 1751 MI 6.6 earthquake near Port-au-Prince, Haiti, and the 3 June 1770 MI 7.5 earthquake west of the 2010 earthquake rupture. The 2010 Haiti earthquake may mark the beginning of a new cycle of large earthquakes on the Enriquillo fault system after 240 years of seismic quiescence. The entire Enriquillo fault system appears to be seismically active; Haiti and the Dominican Republic should prepare for future devastating earthquakes.
Limiting the effects of earthquakes on gravitational-wave interferometers
Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew
2017-01-01
Ground-based gravitational wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The down time can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early warning system for modern gravitational-wave observatories. The system relies on near real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low latency hypocenter and magnitude information is generally available in 5 to 20 min of a significant earthquake depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. By using a machine learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that by using detector control configuration changes, we could prevent interruption of operation from 40 to 100 earthquake events in a 6-month time-period.
Limiting the effects of earthquakes on gravitational-wave interferometers
NASA Astrophysics Data System (ADS)
Coughlin, Michael; Earle, Paul; Harms, Jan; Biscans, Sebastien; Buchanan, Christopher; Coughlin, Eric; Donovan, Fred; Fee, Jeremy; Gabbard, Hunter; Guy, Michelle; Mukund, Nikhil; Perry, Matthew
2017-02-01
Ground-based gravitational wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to ground shaking from high-magnitude teleseismic events, which can interrupt their operation in science mode and significantly reduce their duty cycle. It can take several hours for a detector to stabilize enough to return to its nominal state for scientific observations. The down time can be reduced if advance warning of impending shaking is received and the impact is suppressed in the isolation system with the goal of maintaining stable operation even at the expense of increased instrumental noise. Here, we describe an early warning system for modern gravitational-wave observatories. The system relies on near real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Preliminary low latency hypocenter and magnitude information is generally available in 5 to 20 min of a significant earthquake depending on its magnitude and location. The alerts are used to estimate arrival times and ground velocities at the gravitational-wave detectors. In general, 90% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal. By using a machine learning algorithm, we develop a prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that by using detector control configuration changes, we could prevent interruption of operation from 40 to 100 earthquake events in a 6-month time-period.
Fractal dynamics of earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bak, P.; Chen, K.
1995-05-01
Many objects in nature, from mountain landscapes to electrical breakdown and turbulence, have a self-similar fractal spatial structure. It seems obvious that to understand the origin of self-similar structures, one must understand the nature of the dynamical processes that created them: temporal and spatial properties must necessarily be completely interwoven. This is particularly true for earthquakes, which have a variety of fractal aspects. The distribution of energy released during earthquakes is given by the Gutenberg-Richter power law. The distribution of epicenters appears to be fractal with dimension D {approx} 1--1.3. The number of after shocks decay as a function ofmore » time according to the Omori power law. There have been several attempts to explain the Gutenberg-Richter law by starting from a fractal distribution of faults or stresses. But this is a hen-and-egg approach: to explain the Gutenberg-Richter law, one assumes the existence of another power-law--the fractal distribution. The authors present results of a simple stick slip model of earthquakes, which evolves to a self-organized critical state. Emphasis is on demonstrating that empirical power laws for earthquakes indicate that the Earth`s crust is at the critical state, with no typical time, space, or energy scale. Of course the model is tremendously oversimplified; however in analogy with equilibrium phenomena they do not expect criticality to depend on details of the model (universality).« less
NASA Astrophysics Data System (ADS)
Glesener, G. B.; Peltzer, G.; Stubailo, I.; Cochran, E. S.; Lawrence, J. F.
2009-12-01
The Modeling and Educational Demonstrations Laboratory (MEDL) at the University of California, Los Angeles has developed a fourth version of the Elastic Rebound Strike-slip (ERS) Fault Model to be used to educate students and the general public about the process and mechanics of earthquakes from strike-slip faults. The ERS Fault Model is an interactive hands-on teaching tool which produces failure on a predefined fault embedded in an elastic medium, with adjustable normal stress. With the addition of an accelerometer sensor, called the Joy Warrior, the user can experience what it is like for a field geophysicist to collect and observe ground shaking data from an earthquake without having to experience a real earthquake. Two knobs on the ERS Fault Model control the normal and shear stress on the fault. Adjusting the normal stress knob will increase or decrease the friction on the fault. The shear stress knob displaces one side of the elastic medium parallel to the strike of the fault, resulting in changing shear stress on the fault surface. When the shear stress exceeds the threshold defined by the static friction of the fault, an earthquake on the model occurs. The accelerometer sensor then sends the data to a computer where the shaking of the model due to the sudden slip on the fault can be displayed and analyzed by the student. The experiment clearly illustrates the relationship between earthquakes and seismic waves. One of the major benefits to using the ERS Fault Model in undergraduate courses is that it helps to connect non-science students with the work of scientists. When students that are not accustomed to scientific thought are able to experience the scientific process first hand, a connection is made between the scientists and students. Connections like this might inspire a student to become a scientist, or promote the advancement of scientific research through public policy.
Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan
2016-01-01
A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two‐stage approach to EEW: (1) detection and initial characterization using strong‐motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large‐magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G‐FAST) geodetic early warning system, using simulated displacements for the 2001Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G‐FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor‐driven finite‐fault‐slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G‐FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey’s ShakeAlert EEW system.
Bi-directional volcano-earthquake interaction at Mauna Loa Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Walter, T. R.; Amelung, F.
2004-12-01
At Mauna Loa volcano, Hawaii, large-magnitude earthquakes occur mostly at the west flank (Kona area), at the southeast flank (Hilea area), and at the east flank (Kaoiki area). Eruptions at Mauna Loa occur mostly at the summit region and along fissures at the southwest rift zone (SWRZ), or at the northeast rift zone (NERZ). Although historic earthquakes and eruptions at these zones appear to correlate in space and time, the mechanisms and implications of an eruption-earthquake interaction was not cleared. Our analysis of available factual data reveals the highly statistical significance of eruption-earthquake pairs, with a random probability of 5-to-15 percent. We clarify this correlation with the help of elastic stress-field models, where (i) we simulate earthquakes and calculate the resulting normal stress change at volcanic active zones of Mauna Loa, and (ii) we simulate intrusions in Mauna Loa and calculate the Coulomb stress change at the active fault zones. Our models suggest that Hilea earthquakes encourage dike intrusion in the SWRZ, Kona earthquakes encourage dike intrusion at the summit and in the SWRZ, and Kaoiki earthquakes encourage dike intrusion in the NERZ. Moreover, a dike in the SWRZ encourages earthquakes in the Hilea and Kona areas. A dike in the NERZ may encourage and discourage earthquakes in the Hilea and Kaoiki areas. The modeled stress change patterns coincide remarkably with the patterns of several historic eruption-earthquake pairs, clarifying the mechanisms of bi-directional volcano-earthquake interaction for Mauna Loa. The results imply that at Mauna Loa volcanic activity influences the timing and location of earthquakes, and that earthquakes influence the timing, location and the volume of eruptions. In combination with near real-time geodetic and seismic monitoring, these findings may improve volcano-tectonic risk assessment.
Modelling the Time Dependence of Frequency Content of Long-period Volcanic Earthquakes
NASA Astrophysics Data System (ADS)
Jousset, P.; Neuberg, J. W.
2001-12-01
Broad-band seismic networks provide a powerfull tool for the observation and analysis of volcanic earthquakes. The amplitude spectrogram allows us to follow the frequency content of these signals with time. Observed amplitude spectrograms of long-period volcanic earthquakes display distinct spectral lines sometimes varying by several Hertz over time spans of minutes to hours. We first present several examples associated with various phases of volcanic activity at Soufrière Hills volcano, Montserrat. Then, we present and discuss two mechanisms to explain such frequency changes in the spectrograms: (i) change of physical properties within the magma and, (ii) change in the triggering frequency of repeated sources within the conduit. We use 2D and 3D finite-difference modelling methods to compute the propagation of seismic waves in simplified volcanic structures: (i) we model the gliding spectral lines by introducing continuously changing magma properties during the wavefield computation; (ii) we explore the resulting pressure distribution within the conduit and its potential role in triggering further events. We obtain constraints on both amplitude and time-scales for changes of magma properties that are required to model gliding lines in amplitude spectrograms.
Tsunami Modeling to Validate Slip Models of the 2007 M w 8.0 Pisco Earthquake, Central Peru
NASA Astrophysics Data System (ADS)
Ioualalen, M.; Perfettini, H.; Condo, S. Yauri; Jimenez, C.; Tavera, H.
2013-03-01
Following the 2007, August 15th, M w 8.0, Pisco earthquake in central Peru, Sladen et al. (J Geophys Res 115: B02405, 2010) have derived several slip models of this event. They inverted teleseismic data together with geodetic (InSAR) measurements to look for the co-seismic slip distribution on the fault plane, considering those data sets separately or jointly. But how close to the real slip distribution are those inverted slip models? To answer this crucial question, the authors generated some tsunami records based on their slip models and compared them to DART buoys, tsunami records, and available runup data. Such an approach requires a robust and accurate tsunami model (non-linear, dispersive, accurate bathymetry and topography, etc.) otherwise the differences between the data and the model may be attributed to the slip models themselves, though they arise from an incomplete tsunami simulation. The accuracy of a numerical tsunami simulation strongly depends, among others, on two important constraints: (i) A fine computational grid (and thus the bathymetry and topography data sets used) which is not always available, unfortunately, and (ii) a realistic tsunami propagation model including dispersion. Here, we extend Sladen's work using newly available data, namely a tide gauge record at Callao (Lima harbor) and the Chilean DART buoy record, while considering a complete set of runup data along with a more realistic tsunami numerical that accounts for dispersion, and also considering a fine-resolution computational grid, which is essential. Through these accurate numerical simulations we infer that the InSAR-based model is in better agreement with the tsunami data, studying the case of the Pisco earthquake indicating that geodetic data seems essential to recover the final co-seismic slip distribution on the rupture plane. Slip models based on teleseismic data are unable to describe the observed tsunami, suggesting that a significant amount of co-seismic slip may have
NASA Astrophysics Data System (ADS)
Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.
2008-12-01
Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely (with a 31 percent probability) fault in the Bay Area to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individual joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault conducted by Bay Area County Offices of Emergency Services. 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed forschool classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U. S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite and the upper tail of the exponential distribution does not fit well observations. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of the magnitude distributions in the seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of the statistical inference are also discussed and an example of empirical data is presented in the current contribution.
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.
2016-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC, Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecasts models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as
NASA Astrophysics Data System (ADS)
Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.
2017-12-01
Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a "really big one" will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago, than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write and display spreadsheets, like those shown below. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record, or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of
A comparison study of 2006 Java earthquake and other Tsunami earthquakes
NASA Astrophysics Data System (ADS)
Ji, C.; Shao, G.
2006-12-01
We revise the slip processes of July 17 2006 Java earthquakes by combined inverting teleseismic body wave, long period surface waves, as well as the broadband records at Christmas island (XMIS), which is 220 km away from the hypocenter and so far the closest observation for a Tsunami earthquake. Comparing with the previous studies, our approach considers the amplitude variations of surface waves with source depths as well as the contribution of ScS phase, which usually has amplitudes compatible with that of direct S phase for such low angle thrust earthquakes. The fault dip angles are also refined using the Love waves observed along fault strike direction. Our results indicate that the 2006 event initiated at a depth around 12 km and unilaterally rupture southeast for 150 sec with a speed of 1.0 km/sec. The revised fault dip is only about 6 degrees, smaller than the Harvard CMT (10.5 degrees) but consistent with that of 1994 Java earthquake. The smaller fault dip results in a larger moment magnitude (Mw=7.9) for a PREM earth, though it is dependent on the velocity structure used. After verified with 3D SEM forward simulation, we compare the inverted result with the revised slip models of 1994 Java and 1992 Nicaragua earthquakes derived using the same wavelet based finite fault inversion methodology.
Anomalies of rupture velocity in deep earthquakes
NASA Astrophysics Data System (ADS)
Suzuki, M.; Yagi, Y.
2010-12-01
Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, finally terminate near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6Vs except in the depth range of 530 to 600 km. This is consistent with the depth
Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor
2016-01-01
A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered to the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. the integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher populations at risk rates were found more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to an occurrence of an earthquake could lead to a possible decrease in the expected number of casualties.
Earthquake Damping Device for Steel Frame
NASA Astrophysics Data System (ADS)
Zamri Ramli, Mohd; Delfy, Dezoura; Adnan, Azlan; Torman, Zaida
2018-04-01
Structures such as buildings, bridges and towers are prone to collapse when natural phenomena like earthquake occurred. Therefore, many design codes are reviewed and new technologies are introduced to resist earthquake energy especially on building to avoid collapse. The tuned mass damper is one of the earthquake reduction products introduced on structures to minimise the earthquake effect. This study aims to analyse the effectiveness of tuned mass damper by experimental works and finite element modelling. The comparisons are made between these two models under harmonic excitation. Based on the result, it is proven that installing tuned mass damper will reduce the dynamic response of the frame but only in several input frequencies. At the highest input frequency applied, the tuned mass damper failed to reduce the responses. In conclusion, in order to use a proper design of damper, detailed analysis must be carried out to have sufficient design based on the location of the structures with specific ground accelerations.
NASA Astrophysics Data System (ADS)
McHugh, C. M.; Seeber, L.; Moernaut, J.; Strasser, M.; Kanamatsu, T.; Ikehara, K.; Bopp, R.; Mustaque, S.; Usami, K.; Schwestermann, T.; Kioka, A.; Moore, L. M.
2017-12-01
The 2004 Sumatra-Andaman Mw9.3 and the 2011 Tohoku (Japan) Mw9.0 earthquakes and tsunamis were huge geological events with major societal consequences. Both were along subduction boundaries and ruptured portions of these boundaries that had been deemed incapable of such events. Submarine strike-slip earthquakes, such as the 2010 Mw7.0 in Haiti, are smaller but may be closer to population centers and can be similarly catastrophic. Both classes of earthquakes remobilize sediment and leave distinct signatures in the geologic record by a wide range of processes that depends on both environment and earthquake characteristics. Understanding them has the potential of greatly expanding the record of past earthquakes, which is critical for geohazard analysis. Recent events offer precious ground truth about the earthquakes and short-lived radioisotopes offer invaluable tools to identify sediments they remobilized. In the 2011 Mw9 Japan earthquake they document the spatial extent of remobilized sediment from water depths of 626m in the forearc slope to trench depths of 8000m. Subbottom profiles, multibeam bathymetry and 40 piston cores collected by the R/V Natsushima and R/V Sonne expeditions to the Japan Trench document multiple turbidites and high-density flows. Core tops enriched in xs210Pb,137Cs and 134Cs reveal sediment deposited by the 2011 Tohoku earthquake and tsunami. The thickest deposits (2m) were documented on a mid-slope terrace and trench (4000-8000m). Sediment was deposited on some terraces (600-3000m), but shed from the steep forearc slope (3000-4000m). The 2010 Haiti mainshock ruptured along the southern flank of Canal du Sud and triggered multiple nearshore sediment failures, generated turbidity currents and stirred fine sediment into suspension throughout this basin. A tsunami was modeled to stem from both sediment failures and tectonics. Remobilized sediment was tracked with short-lived radioisotopes from the nearshore, slope, in fault basins including the
NASA Astrophysics Data System (ADS)
Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser
2018-03-01
The western segment of Makran subduction zone is characterized with almost no major seismicity and no large earthquake for several centuries. A possible episode for this behavior is that this segment is currently locked accumulating energy to generate possible great future earthquakes. Taking into account this assumption, a hypothetical rupture area is considered in the western Makran to set different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. 2015 Chile M w 8.3, 2011 Tohoku-Oki M w 9.0 (using two different scenarios) and 2006 Kuril Islands M w 8.3, are scaled into the rupture area in the western Makran zone. The numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes. However, its effect reduces from local tsunamis to regional and distant tsunamis. Among all considered scenarios for the western Makran, only a similar tsunamigenic earthquake to the 2011 Tohoku-Oki event can re-produce a significant far-field tsunami and is considered as the worst case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For the scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that along the entire Makran coasts, the southeastern coast of Iran is the most vulnerable area subjected to tsunami hazard.
NASA Astrophysics Data System (ADS)
Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser
2018-04-01
The western segment of Makran subduction zone is characterized with almost no major seismicity and no large earthquake for several centuries. A possible episode for this behavior is that this segment is currently locked accumulating energy to generate possible great future earthquakes. Taking into account this assumption, a hypothetical rupture area is considered in the western Makran to set different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. 2015 Chile M w 8.3, 2011 Tohoku-Oki M w 9.0 (using two different scenarios) and 2006 Kuril Islands M w 8.3, are scaled into the rupture area in the western Makran zone. The numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes. However, its effect reduces from local tsunamis to regional and distant tsunamis. Among all considered scenarios for the western Makran, only a similar tsunamigenic earthquake to the 2011 Tohoku-Oki event can re-produce a significant far-field tsunami and is considered as the worst case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For the scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that along the entire Makran coasts, the southeastern coast of Iran is the most vulnerable area subjected to tsunami hazard.
Statistical distributions of earthquake numbers: consequence of branching process
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.
2010-03-01
We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
NASA Astrophysics Data System (ADS)
Zha, X.; Dai, Z.; Lu, Z.
2015-12-01
The 2011 Hawthorne earthquake swarm occurred in the central Walker Lane zone, neighboring the border between California and Nevada. The swarm included an Mw 4.4 on April 13, Mw 4.6 on April 17, and Mw 3.9 on April 27. Due to the lack of the near-field seismic instrument, it is difficult to get the accurate source information from the seismic data for these moderate-magnitude events. ENVISAT InSAR observations captured the deformation mainly caused by three events during the 2011 Hawthorne earthquake swarm. The surface traces of three seismogenic sources could be identified according to the local topography and interferogram phase discontinuities. The epicenters could be determined using the interferograms and the relocated earthquake distribution. An apparent earthquake migration is revealed by InSAR observations and the earthquake distribution. Analysis and modeling of InSAR data show that three moderate magnitude earthquakes were produced by slip on three previously unrecognized faults in the central Walker Lane. Two seismogenic sources are northwest striking, right-lateral strike-slip faults with some thrust-slip components, and the other source is a northeast striking, thrust-slip fault with some strike-slip components. The former two faults are roughly parallel to each other, and almost perpendicular to the latter one. This special spatial correlation between three seismogenic faults and nature of seismogenic faults suggest the central Walker Lane has been undergoing southeast-northwest horizontal compressive deformation, consistent with the region crustal movement revealed by GPS measurement. The Coulomb failure stresses on the fault planes were calculated using the preferred slip model and the Coulomb 3.4 software package. For the Mw4.6 earthquake, the Coulomb stress change caused by the Mw4.4 event increased by ~0.1 bar. For the Mw3.9 event, the Coulomb stress change caused by the Mw4.6 earthquake increased by ~1.0 bar. This indicates that the preceding
Geophysical Anomalies and Earthquake Prediction
NASA Astrophysics Data System (ADS)
Jackson, D. D.
2008-12-01
Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require
Regional and Local Glacial-Earthquake Patterns in Greenland
NASA Astrophysics Data System (ADS)
Olsen, K.; Nettles, M.
2016-12-01
Icebergs calved from marine-terminating glaciers currently account for up to half of the 400 Gt of ice lost annually from the Greenland ice sheet (Enderlin et al., 2014). When large capsizing icebergs ( 1 Gt of ice) calve, they produce elastic waves that propagate through the solid earth and are observed as teleseismically detectable MSW 5 glacial earthquakes (e.g., Ekström et al., 2003; Nettles & Ekström, 2010 Tsai & Ekström, 2007; Veitch & Nettles, 2012). The annual number of these events has increased dramatically over the past two decades. We analyze glacial earthquakes from 2011-2013, which expands the glacial-earthquake catalog by 50%. The number of glacial-earthquake solutions now available allows us to investigate regional patterns across Greenland and link earthquake characteristics to changes in ice dynamics at individual glaciers. During the years of our study Greenland's west coast dominated glacial-earthquake production. Kong Oscar Glacier, Upernavik Isstrøm, and Jakobshavn Isbræ all produced more glacial earthquakes during this time than in preceding years. We link patterns in glacial-earthquake production and cessation to the presence or absence of floating ice tongues at glaciers on both coasts of Greenland. The calving model predicts glacial-earthquake force azimuths oriented perpendicular to the calving front, and comparisons between seismic data and satellite imagery confirm this in most instances. At two glaciers we document force azimuths that have recently changed orientation and confirm that similar changes have occurred in the calving-front geometry. We also document glacial earthquakes at one previously quiescent glacier. Consistent with previous work, we model the glacial-earthquake force-time function as a boxcar with horizontal and vertical force components that vary synchronously. We investigate limitations of this approach and explore improvements that could lead to a more accurate representation of the glacial earthquake source.
NASA Astrophysics Data System (ADS)
Ren, Jeffrey S.; Barr, Neill G.; Scheuer, Kristin; Schiel, David R.; Zeldis, John
2014-07-01
A dynamic growth model of macroalgae was developed to predict growth of the green macroalga Ulva sp. in response to changes in environmental variables. The model is based on common physiological behaviour of macroalgae and hence has general applicability to macroalgae. Three state variables (nitrogen, carbon and phosphorus) were used to describe physiological processes and functional differences between nutrient and carbon uptakes. Carbon uptake is modelled as a function of temperature, light, algal internal state and water current, while nutrient uptake depends on internal state, temperature and environmental nutrient level. Growth can only occur when nutrients in the environment and in the internal storage pools (N-quota and P-quota) reach threshold levels. Physiological rates follow the Arrhenius relationship and increase exponentially with increasing temperature within the temperature tolerance range of a species. When parameterised and applied to Ulva sp. in the eutrophic Avon-Heathcote Estuary, New Zealand, the model generally reproduced field observations of Ulva sp. growth and abundance. Growth followed a clear seasonal cycle with biomass increasing from early-middle summer, reaching peak values in early autumn and then decreasing. Conversely, N-quotient levels were maximal during the winter months, declining during summer peak growth. These seasonal patterns were collectively driven by temperature, light intensity and nutrients. The model captured the N-quota and growth responses of Ulva sp. to the N-reduction arising from diversion of treated wastewater from the Avon-Heathcote Estuary to an offshore outfall in 2010, and of raw sewage N-discharges resulting from wastewater infrastructure damage caused by the Canterbury earthquakes in 2011. Sensitivity analyses revealed that temperature-related parameters and maximum uptake rate of C were among the most sensitive parameters in predicting biomass. In addition, the earthquake-derived changes in reduction of
Seismotectonic Models of the Three Recent Devastating SCR Earthquakes in India
NASA Astrophysics Data System (ADS)
Mooney, W. D.; Kayal, J.
2007-12-01
During the last decade, three devastating earthquakes, the Killari 1993 (Mb 6.3), Jabalpur 1997 (Mb 6.0) and the Bhuj 2001 (Mw 7.7) occurred in the Stable Continental Region (SCR), Peninsular India. First, the September 30, 1993 Killari earthquake (Mb 6.3) occurred in the Deccan province of central India, in the Latur district of Maharashtra state. The local geology in the area is obscured by the late Cretaceous-Eocene basalt flows, referred to as the Deccan traps. This makes it difficult to recognize the geological surface faults that could be associated with the Killari earthquake. The epicentre was reported at 18.090N and 76.620E, and the focal depth at 7 +/- 1 km was precisely estimated by waveform inversion (Chen and Kao, 1995). The maximum intensity reached to VIII and the earthquake caused a loss of about 10,000 lives and severe damage to property. The May 22, 1997 Jabalpur earthquake (Mb 6.0), epicentre at 23.080N and 80.060E, is a well studied earthquake in the Son-Narmada-Tapti (SONATA) seismic zone. A notable aspects of this earthquake is that it was the first significant event in India to be recorded by 10 broadband seismic stations which were established in 1996 by the India Meteorological Department (IMD). The focal depth was well estimated using the "converted phases" of the broadband seismograms. The focal depth was given in the lower crust at a depth of 35 +/- 1 km, similar to the moderate earthquakes reported from the Amazona ancient rift system in SCR of South America. Maximum MSK intensity of the Jabalpur earthquake reached to VIII in the MSK scale and this earthquake killed about 50 people in the Jabalpur area. Finally, the Bhuj earthquake (MW 7.7) of January 26, 2001 in the Gujarat state, northwestern India, was felt across the whole country, and killed about 20,000 people. The maximum intensity level reached X. The epicenter of the earthquake is reported at 23.400N and 70.280E, and the well estimated focal depth at 25 km. A total of about
NASA Astrophysics Data System (ADS)
Newman, A. V.; Kyriakopoulos, C.
2015-12-01
Unlike most subduction environments that exist mostly or entirely offshore, the Nicoya Peninsula's location allows for unique land-based observations of the entire down-dip extent of coupling and failure along the seismogenic megathrust. Because of this geometry and approximately 50-year repeat cycle of mid-magnitude 7 earthquakes there, numerous geophysical studies were focused on the peninsula. Most notably of these are the dense seismic and GPS networks cooperatively operated by UC Santa Cruz, Georgia Tech, U. South Florida, and OVSICORI, collectively called the Nicoya Seismic Cycle Observatory (NSCO). The megathrust environment beneath Nicoya is additionally characterized by strong along-strike transitions in oceanic crust origin and geometries, including massive subducted seamounts, and a substantial crustal suture well documented in recent work by Kyriakopoulos et al. [JGR, 2015]. Using GPS data collected from campaign and continuous sites going back approximately 20 years, a number of studies have imaged components of the seismic cycle, including late-interseismic coupling, frequent slow-slip events, coseismic rupture of a moment magnitude 7.6 earthquake in 2012, and early postseismic response. The derived images of interface locking and slip behavior published for each of these episodes use different model geometries, different weighting schemes, and modeling algorithms limiting their use for fully characterizing the transitions between zones. Here, we report the first unified analysis of the full continuum of slip using the new locally defined 3D plate interface model. We focus on evaluating how transitions in plate geometry control observed locking, slip, and quantifying how well pre-seismic images of megathrust locking and slow-slip events dictate coseismic and postseismic behavior. Without the long-term and continuous geodetic observations made by the NSCO, this work would not have been possible.
Stress Regime in the Nepalese Himalaya from Recent Earthquakes.
NASA Astrophysics Data System (ADS)
Pant, M.; Karplus, M. S.; Velasco, A. A.; Nabelek, J.; Kuna, V. M.; Ghosh, A.; Mendoza, M.; Adhikari, L. B.; Sapkota, S. N.; Klemperer, S. L.; Patlan, E.
2017-12-01
The two recent earthquakes, April 25, 2015 Mw 7.8 (Gorkha earthquake) and May 12, 2015 Mw 7.2, at the Indo-Eurasian plate margin killed thousands of people and caused billion dollars of property loss. In response to these events, we deployed a dense array of seismometers to record the aftershocks along Gorkha earthquake rupture area. Our network NAMASTE (Nepal Array Measuring Aftershock Seismicity Trailing Earthquake) included 45 different seismic stations (16 short period, 25 broadband, and 4 strong motion sensors) covering a large area from north-central Nepal to south of the Main Frontal Thrust at a spacing of 20 km. The instruments recorded aftershocks from June 2015 to May 2016. We used time domain short term average (STA) and long term average (LTA) algorithms (1/10s and 4/40s) respectively to detect the arrivals and then developed an earthquake catalog containing 9300 aftershocks. We are manually picking the P-wave first motion arrival polarity to develop a catalog of focal mechanisms for the larger magnitude (>M3.0) events with adequate (>10) arrivals. We hope to characterize the seismicity and stress mechanisms of the complex fault geometries in the Nepalese Himalaya and to address the geophysical processes controlling seismic cycles in the Indo-Eurasian plate margin.
The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)
,
2008-01-01
California?s 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast?a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).
NASA Astrophysics Data System (ADS)
Moore, J. D. P.; Barbot, S.; Peng, D.; Yu, H.; Qiu, Q.; Wang, T.; Masuti, S.; Dauwels, J.; Lindsey, E. O.; Tang, C. H.; Feng, L.; Wei, S.; Hsu, Y. J.; Nanjundiah, P.; Lambert, V.; Antoine, S.
2017-12-01
Studies of geodetic data across the earthquake cycle indicate that a wide range of mechanisms contribute to cycles of stress buildup and relaxation. Both on-fault rate and state friction and off-fault rheologies can contribute to the observed deformation; in particular, during the postseismic transient phase of the earthquake cycle. We present a novel approach to simulate on-fault and off-fault deformation simultaneously using analytical Green's functions for distributed deformation at depth [Barbot, Moore and Lambert., 2017] and surface tractions, within an integral framework [Lambert & Barbot, 2016]. This allows us to jointly explore dynamic frictional properties on the fault, and the plastic properties of the bulk rocks (including grain size and water distribution) in the lower crust with low computational cost, whilst taking into account contributions from topography and a surface approximation for gravity. These displacement and stress Green's functions can be used for both forward and inverse modelling of distributed shear, where the calculated strain-rates can be converted to effective viscosities. Here, we draw insight from the postseismic geodetic observations following the 2015 Mw 7.8 Gorkha earthquake. We forward model afterslip using rate and state friction on the megathrust geometry with the two ramp-décollement system presented by Hubbard et al., (2016) and viscoelastic relaxation using recent experimentally derived flow laws with transient rheology and the thermal structure from Cattin et al. (2001). Multivariate posterior probability density functions for model parameters are generated by incorporating the forward model evaluation and comparison to geodetic observations into a Gaussian copula framework. In particular, we find that no afterslip occurred on the up-dip portion of the fault beneath Kathmandu. A combination of viscoelastic relaxation and down-dip afterslip is required to fit the data, taking into account the bi-directional coupling
Failure of self-similarity for large (Mw > 81/4) earthquakes.
Hartzell, S.H.; Heaton, T.H.
1988-01-01
Compares teleseismic P-wave records for earthquakes in the magnitude range from 6.0-9.5 with synthetics for a self-similar, omega 2 source model and conclude that the energy radiated by very large earthquakes (Mw > 81/4) is not self-similar to that radiated from smaller earthquakes (Mw < 81/4). Furthermore, in the period band from 2 sec to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of omega -1.5. This spectral decay rate is consistent with a previously noted tendency of the omega 2 model to overestimate Ms for large earthquakes.-Authors
A dynamic model for slab development associated with the 2015 Mw 7.9 Bonin Islands deep earthquak
NASA Astrophysics Data System (ADS)
Zhan, Z.; Yang, T.; Gurnis, M.
2016-12-01
The 680 km deep May 30, 2015 Mw 7.9 Bonin Islands earthquake is isolated from the nearest earthquakes by more than 150 km. The geodynamic context leading to this isolated deep event is unclear. Tomographic models and seismicity indicate that the morphology of the west-dipping Pacific slab changes rapidly along the strike of the Izu-Bonin-Mariana trench. To the north, the Izu-Bonin section of the Pacific slab lies horizontally above the 660 km discontinuity and extends more than 500 km westward. Several degrees south, the Mariana section dips vertically and penetrates directly into the lower mantle. The observed slab morphology is consistent with plate reconstructions suggesting that the northern section of the IBM trench retreated rapidly since the late Eocene while the southern section of the IBM trench was relatively stable during the same period. We suggest that the location of the isolated 2015 Bonin Islands deep earthquake can be explained by the buckling of the Pacific slab beneath the Bonin Islands. We use geodynamic models to investigate the slab morphology, temperature and stress regimes under different trench motion histories. Models confirm previous results that the slab often lies horizontally within the transition zone when the trench retreats, but buckles when the trench position becomes fixed with respect to the lower mantle. We show that a slab-buckling model is consistent with the observed deep earthquake P-axis directions (assumed to be the axis of principal compressional stress) regionally. The influences of various physical parameters on slab morphology, temperature and stress regime are investigated. In the models investigated, the horizontal width of the buckled slab is no more than 400 km.
Geological and historical evidence of irregular recurrent earthquakes in Japan.
Satake, Kenji
2015-10-28
Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres releasing strains accumulated from decades to centuries of plate motions. Assuming a simple 'characteristic earthquake' model that similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes including geological traces from giant (M∼9) earthquakes indicate a variety of size and recurrence interval of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that average recurrence interval of great earthquakes is approximately 100 years, but the tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence interval of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts such as use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone are being conducted by government committees. © 2015 The Author(s).
The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence
Coordinated by Bakun, William H.; Prescott, William H.
1993-01-01
Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault and indicates a potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through going fault in the epicentral region, the southwest dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.
Earthquake Source Inversion Blindtest: Initial Results and Further Developments
NASA Astrophysics Data System (ADS)
Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.
2007-12-01
Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data exhibit a high degree in spatial complexity. This earthquake source heterogeneity controls seismic radiation, and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod) However, well resolved, robust, and hence reliable source-rupture models are an integral part to better understand earthquake source physics and to improve seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data-handling in earthquake source inversions.We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strength and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blindtest in order to conduct a large-scale validation exercise to rigorously asses the performance and
NASA Astrophysics Data System (ADS)
Yang, J.; Yi, S.; Sun, W.
2016-12-01
Signification displacements caused by the 2011 Tohoku-Oki earthquake (Mw9.0) can be detected by GPS observations on the north and northeast of Asian continent which comes from Crustal Movement Observation Network of China (CMONOC). Obviously horizontal displacement which can be detected with many GPS stations reaches to almost 3cm and 2cm and most of those extend eastward pointing to the epicenter of this earthquake. Those data can be acquired rapidly after the earthquake from CMONOC. Here, we will discuss how to calculate the seismic moment with those far-field GPS observations. The far field displacement can constrain the pattern of finite slip model and seismic moment using spherically stratified Earth model (PREM). We give a general rule of thumb to show how far-field GPS observations are affected by the earthquake parameters. In the worldwide, after 1990 there are 27 large earthquakes (the magnitude more than Mw 8.0) which most are subduction types with low rake angle. Their far-field GPS observations are mainly controlled by the component of Y22. Far-field GPS observations are potential to constrain one or two components of the focal mechanisms. When we joint far-field and near-field GPS data to get the 2011 Tohoku-Oki earthquake, we can get a more accurately finite slip model. The article shows a new mothed using far-field GPS data to constrain the fault slip model.
NASA Astrophysics Data System (ADS)
Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu
2014-05-01
During last decades, few space experiments revealed anomalous bursts of charged particles, mainly electrons with energy larger than few MeV. A possible source of these bursts are the low-frequency seismo-electromagnetic emissions, which can cause the precipitation of the electrons from the lower boundary of their inner belt. Studies of these bursts reported also a short-term pre-seismic excess. Starting from simulation tools traditionally used on high energy physics we developed a dedicated application SEPS (Space Perturbation Earthquake Simulation), based on the Geant4 tool and PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between the earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field also taking into account possible interactions with the Earth's atmosphere. SEPS provides the possibility of: testing different models of interaction between electromagnetic waves and trapped particles, defining the mechanism of interaction as also shaping the area in which this takes place,assessing the effects of perturbations in the magnetic field on the particles path, performing back-tracking analysis and also modelling the interaction with electric fields. SEPS is in advanced development stage, so that it could be already exploited to test in details the results of correlation analysis between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis, (tracing from the position of the earthquake and going to see if there were paths compatible with the burst revealed) and with a back-tracking analysis (tracing from the burst detection point and checking the compatibility with the position of associated earthquake).
Earthquake-driven erosion of organic carbon at the eastern margin of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Li, G.; West, A. J.; Hara, E. K.; Hammond, D. E.; Hilton, R. G.
2016-12-01
Large earthquakes can trigger massive landsliding that erodes particulate organic carbon (POC) from vegetation, soil and bedrocks, potentially linking seismotectonics to the global carbon cycle. Recent work (Wang et al., 2016, Geology) has highlighted a dramatic increase in riverine export of biospheric POC following the 2008 Mw7.9 Wenchuan earthquake, in the steep Longmen Shan mountain range at the eastern margin of the Tibetan Plateau. However, a complete, source-to-sink picture of POC erosion after the earthquake is still missing. Here we track POC transfer across the Longmen Shan range from high mountains to the downstream Zipingpu reservoir where riverine-exported POC has been trapped. Building on the work of Wang et al. (2016), who measured the compositions and fluxes of riverine POC, this study is focused on constraining the source and fate of the eroded POC after the earthquake. We have sampled landslide deposits and river sediment, and we have cored the Zipingpu reservoir, following a source-to-sink sampling strategy. We measured POC compositions and grain size of the sediment samples, mapped landslide-mobilized POC using maps of landslide inventory and biomass, and tracked POC loading from landslides to the reservoir sediment to constrain the fate of eroded OC. Constraints on carbon sources, fluxes and fate provide the foundation for constructing a post-earthquake POC budget. This work highlights the role of earthquakes in the mobilization and burial of POC, providing new insight into mechanisms linking tectonics and the carbon cycle and building understanding needed to interpret past seismicity from sedimentary archives.
Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models
Geist, Eric L.
2016-01-01
Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle beneath the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least-variable slip examined involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental mode wave with wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad include subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20–30 wave periods after the first arrival and thus may be observed prior to the tsunami coda being completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.
Centrality in earthquake multiplex networks
NASA Astrophysics Data System (ADS)
Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.
2018-06-01
Seismic time series has been mapped as a complex network, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single layer representation of earthquake networks. Our results suggest that the multiplex representation captures better earthquake activity than methods based on single layer networks. We also verify that the regions with highest seismological activities in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
A crack-like rupture model for the 19 September 1985 Michoacan, Mexico, earthquake
NASA Astrophysics Data System (ADS)
Ruppert, Stanley D.; Yomogida, Kiyoshi
1992-09-01
Evidence supporting a smooth crack-like rupture process of the Michoacan earthquake of 1985 is obtained from a major earthquake for the first time. Digital strong motion data from three stations (Caleta de Campos, La Villita, and La Union), recording near-field radiation from the fault, show unusually simple ramped displacements and permanent offsets previously only seen in theoretical models. The recording of low frequency (0 to 1 Hz) near-field waves together with the apparently smooth rupture favors a crack-like model to a step or Haskell-type dislocation model under the constraint of the slip distribution obtained by previous studies. A crack-like rupture, characterized by an approximated dynamic slip function and systematic decrease in slip duration away from the point of rupture nucleation, produces the best fit to the simple ramped displacements observed. Spatially varying rupture duration controls several important aspects of the synthetic seismograms, including the variation in displacement rise times between components of motion observed at Caleta de Campos. Ground motion observed at Caleta de Campos can be explained remarkably well with a smoothly propagating crack model. However, data from La Villita and La Union suggest a more complex rupture process than the simple crack-like model for the south-eastern portion of the fault.
Volcano-earthquake interaction at Mauna Loa volcano, Hawaii
NASA Astrophysics Data System (ADS)
Walter, Thomas R.; Amelung, Falk
2006-05-01
The activity at Mauna Loa volcano, Hawaii, is characterized by eruptive fissures that propagate into the Southwest Rift Zone (SWRZ) or into the Northeast Rift Zone (NERZ) and by large earthquakes at the basal decollement fault. In this paper we examine the historic eruption and earthquake catalogues, and we test the hypothesis that the events are interconnected in time and space. Earthquakes in the Kaoiki area occur in sequence with eruptions from the NERZ, and earthquakes in the Kona and Hilea areas occur in sequence with eruptions from the SWRZ. Using three-dimensional numerical models, we demonstrate that elastic stress transfer can explain the observed volcano-earthquake interaction. We examine stress changes due to typical intrusions and earthquakes. We find that intrusions change the Coulomb failure stress along the decollement fault so that NERZ intrusions encourage Kaoiki earthquakes and SWRZ intrusions encourage Kona and Hilea earthquakes. On the other hand, earthquakes decompress the magma chamber and unclamp part of the Mauna Loa rift zone, i.e., Kaoiki earthquakes encourage NERZ intrusions, whereas Kona and Hilea earthquakes encourage SWRZ intrusions. We discuss how changes of the static stress field affect the occurrence of earthquakes as well as the occurrence, location, and volume of dikes and of associated eruptions and also the lava composition and fumarolic activity.